SU-E-T-188: Film Dosimetry Verification of Monte Carlo Generated Electron Treatment Plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enright, S; Asprinio, A; Lu, L
2014-06-01
Purpose: The purpose of this study was to compare dose distributions from film measurements to Monte Carlo generated electron treatment plans. Irradiation with electrons offers the advantages of dose uniformity in the target volume and of minimizing the dose to deeper healthy tissue. Using the Monte Carlo algorithm will improve dose accuracy in regions with heterogeneities and irregular surfaces. Methods: Dose distributions from GafChromic™ EBT3 films were compared to dose distributions from the Electron Monte Carlo algorithm in the Eclipse™ radiotherapy treatment planning system. These measurements were obtained for 6 MeV, 9 MeV and 12 MeV electrons at two depths. All phantoms studied were imported into Eclipse by CT scan. A 1 cm thick solid-water template with holes for bone-like and lung-like plugs was used. Different configurations were used with the different plugs inserted into the holes. Configurations with solid-water plugs stacked on top of one another were also used to create an irregular surface. Results: The dose distributions measured from the film agreed with those from the Electron Monte Carlo treatment plan. The accuracy of the Electron Monte Carlo algorithm was also compared to that of the Pencil Beam algorithm. Dose distributions from Monte Carlo had much higher pass rates than distributions from Pencil Beam when compared to the film: the pass rate for Monte Carlo was in the 80%–99% range, whereas the pass rate for Pencil Beam was as low as 10.76%. Conclusion: The dose distribution from Monte Carlo agreed with the measured dose from the film. When compared to the Pencil Beam algorithm, pass rates for Monte Carlo were much higher. Monte Carlo should be used over Pencil Beam for regions with heterogeneities and irregular surfaces.
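For readers unfamiliar with the pass rates quoted above, the metric behind them is the gamma index: a reference dose point passes if some evaluated point lies within a combined dose-difference/distance tolerance. The following is a minimal 1D sketch of that comparison, not the authors' analysis; the 3%/3 mm tolerances and test profile are arbitrary illustrations.

```python
import math

def gamma_pass_rate(ref, eval_, spacing_mm, dose_tol=0.03, dist_mm=3.0):
    """1D global gamma analysis: for each reference point, search the
    evaluated profile for the minimum combined dose/distance metric."""
    d_max = max(ref)  # global normalization dose
    passed = 0
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_ev in enumerate(eval_):
            dd = (d_ev - d_ref) / (dose_tol * d_max)   # dose-difference term
            dx = (j - i) * spacing_mm / dist_mm        # distance-to-agreement term
            best = min(best, math.hypot(dd, dx))
        passed += best <= 1.0
    return 100.0 * passed / len(ref)

# A Gaussian test profile: identical profiles pass everywhere,
# while a grossly shifted copy fails at steep-gradient points.
profile = [math.exp(-((x - 10) / 4.0) ** 2) for x in range(21)]
print(gamma_pass_rate(profile, profile, spacing_mm=1.0))  # 100.0
```

Real film-versus-plan comparisons run the same logic in 2D over interpolated dose grids, but the pass/fail criterion is exactly this combined metric.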
NASA Technical Reports Server (NTRS)
Pinckney, John
2010-01-01
With the advent of high-speed computing, Monte Carlo ray tracing has become the preferred method for evaluating spacecraft orbital heating. Monte Carlo has its greatest advantage where there are many interacting surfaces. However, Monte Carlo programs are specialized tools that suffer from some inaccuracy, long calculation times and high purchase cost. A general orbital heating integral is presented here that is accurate, fast and runs in MathCad, a generally available engineering mathematics program. The integral is easy to read, understand and alter, and can be applied to unshaded primitive surfaces at any orientation. The method is limited to direct heating calculations. This integral formulation can be used for quick orbit evaluations and for spot-checking Monte Carlo results.
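The trade-off the abstract describes, a deterministic integral being fast and precise for simple geometry while Monte Carlo sampling is noisy, can be illustrated with a generic quadrature-versus-sampling sketch. This is not the authors' orbital heating integral; the integrand is a stand-in chosen so the exact answer is known.

```python
import math
import random

def mc_integral(f, a, b, n=20000, seed=1):
    """Plain Monte Carlo estimate of the integral of f over [a, b]."""
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

def simpson(f, a, b, n=200):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3.0

exact = 2.0  # integral of sin over [0, pi]
print(abs(simpson(math.sin, 0, math.pi) - exact))      # deterministic: tiny error
print(abs(mc_integral(math.sin, 0, math.pi) - exact))  # Monte Carlo: noticeably larger
```

The deterministic rule converges like h^4 with a handful of evaluations, while the Monte Carlo error shrinks only as 1/sqrt(n) — the reason a direct integral wins for smooth, unshaded geometry, and Monte Carlo wins only when many interacting surfaces make the integrand intractable.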
Surface entropy of liquids via a direct Monte Carlo approach - Application to liquid Si
NASA Technical Reports Server (NTRS)
Wang, Z. Q.; Stroud, D.
1990-01-01
Two methods are presented for a direct Monte Carlo evaluation of the surface entropy S(s) of a liquid interacting by specified, volume-independent potentials. The first method is based on an application of the approach of Ferrenberg and Swendsen (1988, 1989) to Monte Carlo simulations at two different temperatures; it gives much more reliable results for S(s) in liquid Si than previous calculations based on numerical differentiation. The second method expresses the surface entropy directly as a canonical average at fixed temperature.
Radial-based tail methods for Monte Carlo simulations of cylindrical interfaces
NASA Astrophysics Data System (ADS)
Goujon, Florent; Bêche, Bruno; Malfreyt, Patrice; Ghoufi, Aziz
2018-03-01
In this work, we implement for the first time radial-based tail methods for Monte Carlo simulations of cylindrical interfaces. The efficiency of the method is evaluated through the calculation of surface tension and coexisting properties. We show that including tail corrections during the course of the Monte Carlo simulation affects both the coexistence and the interfacial properties. We establish that the long-range corrections to the surface tension are of the same order of magnitude as those obtained for a planar interface. We show that the slab-based tail method does not change the location of the Gibbs equimolar dividing surface. Additionally, the surface tension exhibits a non-monotonic behavior as a function of the radius of the equimolar dividing surface.
Dynamic Monte Carlo description of thermal desorption processes
NASA Astrophysics Data System (ADS)
Weinketz, Sieghard
1994-07-01
The applicability of the dynamic Monte Carlo method of Fichthorn and Weinberg, in which the time evolution of a system is described in terms of the absolute number of possible microscopic events and their associated transition rates, is discussed for thermal desorption simulations. It is shown that the definition of the time increment at each successful event leads naturally to the macroscopic differential equation of desorption for simple first- and second-order processes in which the only possible events are desorption and diffusion. This equivalence is demonstrated numerically for a second-order case. The method is then shown to be equivalent to the Monte Carlo method of Sales and Zgrablich for more complex desorption processes that allow for lateral interactions between adsorbates. Unlike their method, however, the dynamic Monte Carlo method does not require a rapid surface-diffusion condition; it can therefore describe a more complex "kinetics" of surface reactive processes and be applied to a wider class of phenomena, such as surface catalysis.
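The Fichthorn-Weinberg time increment referred to above is dt = -ln(u)/R_total, with u uniform on (0,1) and R_total the sum of all event rates. A minimal rejection-free sketch for the simplest case, pure first-order desorption, shows how this clock reproduces the macroscopic rate law; the rate constant and population are illustrative, not taken from the paper.

```python
import math
import random

def kmc_first_order_desorption(n0=2000, k=1.0, t_end=2.0, seed=7):
    """Rejection-free kinetic Monte Carlo for first-order desorption:
    total rate R = k*N; after each desorption event advance the clock
    by dt = -ln(u)/R (the Fichthorn-Weinberg increment)."""
    rng = random.Random(seed)
    n, t, trace = n0, 0.0, []
    while n > 0 and t < t_end:
        r_tot = k * n                       # total rate of all possible events
        t += -math.log(rng.random()) / r_tot
        n -= 1                              # one molecule desorbs
        trace.append((t, n))
    return trace

trace = kmc_first_order_desorption()
t, n = trace[-1]  # coverage should track the macroscopic law n0*exp(-k*t)
```

Averaged over realizations, the stochastic trajectory follows dN/dt = -kN exactly, which is the equivalence the abstract demonstrates for the second-order case.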
NASA Astrophysics Data System (ADS)
Schröder, Markus; Meyer, Hans-Dieter
2017-08-01
We propose a Monte Carlo method, "Monte Carlo Potfit," for transforming high-dimensional potential energy surfaces evaluated on discrete grid points into a sum-of-products form, more precisely into a Tucker form. To this end we use a variational ansatz in which we replace numerically exact integrals with Monte Carlo integrals. This greatly reduces the numerical cost by avoiding the evaluation of the potential on all grid points and allows a treatment of surfaces with up to 15–18 degrees of freedom. We furthermore show that the error made with this ansatz can be controlled and vanishes in certain limits. We present calculations on the potential of HFCO to demonstrate the features of the algorithm. To demonstrate the power of the method, we transformed a 15D potential of the protonated water dimer (Zundel cation) into a sum-of-products form and calculated the ground and lowest 26 vibrationally excited states of the Zundel cation with the multi-configuration time-dependent Hartree method.
NASA Astrophysics Data System (ADS)
Schiavon, Nick; de Palmas, Anna; Bulla, Claudio; Piga, Giampaolo; Brunetti, Antonio
2016-09-01
A spectrometric protocol combining energy-dispersive X-ray fluorescence spectrometry with Monte Carlo simulations of experimental spectra using the XRMC code package has been applied for the first time to characterize the elemental composition of a series of famous Iron Age small-scale archaeological bronze ship replicas (known as the "Navicelle") from the Nuragic civilization in Sardinia, Italy. The proposed protocol is a useful, nondestructive and fast analytical tool for Cultural Heritage samples. In the Monte Carlo simulations, each sample was modeled as a multilayered object composed of two or three layers depending on the sample: when all present, the three layers are the original bronze substrate, the surface corrosion patina and an outermost protective layer (Paraloid) applied during past restorations. The Monte Carlo simulations were able to account for the presence of the patina/corrosion layer as well as the Paraloid protective layer, and also for the roughness effect commonly found at the surface of corroded metal archaeological artifacts. In this respect, the Monte Carlo simulation approach adopted here was, to the best of our knowledge, unique: it made it possible to determine the bronze alloy composition together with the thickness of the surface layers without first removing the surface patinas, a process that potentially threatens the preservation of precious archaeological/artistic artifacts for future generations.
NASA Astrophysics Data System (ADS)
Le Foll, S.; André, F.; Delmas, A.; Bouilly, J. M.; Aspa, Y.
2012-06-01
A backward Monte Carlo method for modelling the spectral directional emittance of fibrous media has been developed. It uses Mie theory to calculate the radiative properties of single fibres, modelled as infinite cylinders, with the complex refractive index computed from a Drude-Lorentz model for the dielectric function. The absorption and scattering coefficients are homogenised over several fibres, but the scattering phase function of a single fibre is used to determine the scattering direction of energy inside the medium. A sensitivity analysis based on several Monte Carlo results has been performed to estimate coefficients for a Multi-Linear Model (MLM) developed specifically for inverse analysis of experimental data. This model agrees with the Monte Carlo method and is highly computationally efficient. In contrast, the surface emissivity model, which assumes an opaque medium, shows poor agreement with the reference Monte Carlo calculations.
Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
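One common way to give a ray-based Monte Carlo a diffraction-limited focus is to launch photons at the waist with Gaussian-distributed transverse position and divergence, chosen so the ray bundle's RMS radius reproduces the Gaussian-beam law w(z) = w0*sqrt(1 + (z/zR)^2). The sketch below is that generic construction under stated assumptions, not necessarily the authors' technique; the waist and wavelength are illustrative.

```python
import math
import random
import statistics

def launch_gaussian_beam(n, w0, wavelength, seed=3):
    """Sample photon (position, angle) pairs at the beam waist: transverse
    position with sigma = w0/2 and divergence with sigma = theta0/2, where
    theta0 = wavelength/(pi*w0) is the far-field divergence half-angle."""
    rng = random.Random(seed)
    theta0 = wavelength / (math.pi * w0)
    return [(rng.gauss(0.0, w0 / 2), rng.gauss(0.0, theta0 / 2)) for _ in range(n)]

w0, lam = 10e-6, 1e-6          # 10 um waist, 1 um wavelength (illustrative)
zr = math.pi * w0**2 / lam     # Rayleigh range
z = 2 * zr
rays = launch_gaussian_beam(50000, w0, lam)
# Free-space ray propagation: x(z) = x0 + angle*z; the RMS radius of the
# bundle should match half the Gaussian beam radius w(z)/2.
r_rms = statistics.stdev([x + a * z for x, a in rays])
print(r_rms, 0.5 * w0 * math.sqrt(1 + (z / zr) ** 2))  # the two should agree
```

Because position and angle are sampled independently, the second moment propagates as sigma(z)^2 = sigma0^2 + sigma_theta^2 * z^2, which is exactly the Gaussian-beam envelope in the RMS sense.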
Shielding analyses of an AB-BNCT facility using Monte Carlo simulations and simplified methods
NASA Astrophysics Data System (ADS)
Lai, Bo-Lun; Sheu, Rong-Jiun
2017-09-01
Accurate Monte Carlo simulations and simplified methods were used to investigate the shielding requirements of a hypothetical accelerator-based boron neutron capture therapy (AB-BNCT) facility that included an accelerator room and a patient treatment room. The epithermal neutron beam for BNCT was generated by coupling a neutron production target with a specially designed beam shaping assembly (BSA), which was embedded in the partition wall between the two rooms. Neutrons were produced from a beryllium target bombarded by 1-mA, 30-MeV protons. MCNP6-generated surface sources around all the exterior surfaces of the BSA were established to facilitate repeated Monte Carlo shielding calculations. In addition, three simplified models based on a point-source line-of-sight approximation were developed and their predictions were compared with the reference Monte Carlo results. The comparison determined which model gave the better dose estimation, forming the basis of future design activities for the first AB-BNCT facility in Taiwan.
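A point-source line-of-sight approximation of the kind compared above reduces shielding estimation to exponential attenuation along the sight line plus inverse-square spreading. The few-line sketch below is a generic point-kernel form; the source strength, attenuation coefficient, thickness, and distance are illustrative placeholders, not data for the facility described.

```python
import math

def point_kernel_rate(s_rate, mu, thickness_cm, r_cm, buildup=1.0):
    """Point-source line-of-sight estimate: uncollided flux attenuated by
    exp(-mu*t) along the shield path, spread over a sphere of radius r,
    times an optional buildup factor for scattered radiation."""
    return buildup * s_rate * math.exp(-mu * thickness_cm) / (4 * math.pi * r_cm**2)

# Adding one mean free path of shield (mu*t = 1) cuts the uncollided flux by e.
f1 = point_kernel_rate(1e9, mu=0.1, thickness_cm=10, r_cm=300)
f2 = point_kernel_rate(1e9, mu=0.1, thickness_cm=20, r_cm=300)
print(f1 / f2)  # ratio is e = 2.718...
```

The simplified models in such studies differ mainly in how they choose the effective source term and buildup factor; the Monte Carlo reference is what validates those choices.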
Tringe, J. W.; Ileri, N.; Levie, H. W.; ...
2015-08-01
We use Molecular Dynamics and Monte Carlo simulations to examine molecular transport phenomena in nanochannels, explaining the four-orders-of-magnitude difference in wheat germ agglutinin (WGA) protein diffusion rates observed by fluorescence correlation spectroscopy (FCS) and by direct imaging of fluorescently labeled proteins. We first use the ESPResSo Molecular Dynamics code to estimate the surface transport distance for neutral and charged proteins. We then employ a Monte Carlo model to calculate the paths of protein molecules on surfaces and in the bulk liquid transport medium. Our results show that the transport characteristics depend strongly on the degree of molecular surface coverage. Atomic force microscope characterization of surfaces exposed to WGA proteins for 1000 s shows large protein aggregates consistent with the predicted coverage. These calculations and experiments provide useful insight into the details of molecular motion in confined geometries.
Vertical incidence of slow Ne 10+ ions on an LiF surface: Suppression of the trampoline effect
NASA Astrophysics Data System (ADS)
Wirtz, Ludger; Lemell, Christoph; Reinhold, Carlos O.; Hägg, Lotten; Burgdörfer, Joachim
2001-08-01
We present a Monte Carlo simulation of the neutralization of a slow Ne10+ ion at vertical incidence on an LiF(100) surface. The rates for resonant electron transfer between surface F- ions and the projectile are calculated using a classical trajectory Monte Carlo simulation. We investigate the influence of the hole mobility on the neutralization sequence. It is shown that backscattering above the surface due to the local positive charge-up of the surface (the "trampoline effect") does not take place.
Yoo, Brian; Marin-Rimoldi, Eliseo; Mullen, Ryan Gotchy; Jusufi, Arben; Maginn, Edward J
2017-09-26
We present a newly developed Monte Carlo scheme to predict bulk surfactant concentrations and surface tensions at the air-water interface for various surfactant interfacial coverages. Since the concentration regimes of these systems of interest are typically very dilute (≪10^-5 mole fraction), Monte Carlo simulations with insertion/deletion moves can overcome the finite-system-size limitations that often prohibit the use of modern molecular simulation techniques. In performing these simulations, we use the discrete fractional component Monte Carlo (DFCMC) method in the Gibbs ensemble framework, which allows us to separate the bulk and the air-water interface into two separate boxes and efficiently swap tetraethylene glycol surfactants (C10E4) between boxes. Combining this move with preferential translations, volume-biased insertions, and Wang-Landau biasing vastly enhances sampling and helps overcome the classical "insertion problem" often encountered in non-lattice Monte Carlo simulations. We demonstrate that this methodology is consistent both with the original molecular thermodynamic theory (MTT) of Blankschtein and co-workers and with their recently modified theory (MD/MTT), which incorporates surfactant infinite-dilution transfer free energies and surface tension calculations obtained from molecular dynamics simulations.
Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds
NASA Astrophysics Data System (ADS)
Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.
2012-11-01
A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper-atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on the fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.
Monte Carlo modeling of MammoSite® treatments: Dose effects of air pockets
NASA Astrophysics Data System (ADS)
Huang, Yu-Huei Jessica
In the treatment of early-stage breast cancer, MammoSite® has been used as one of the partial-breast irradiation techniques after breast-conserving surgery. The MammoSite® applicator is a single catheter with an inflatable balloon at its distal end that can be placed in the resected cavity (tumor bed). The treatment is performed by delivering the Ir-192 high-dose-rate source through the center lumen of the catheter by a remote afterloader while the balloon is inflated in the tumor bed cavity. In MammoSite® treatments, air pockets occasionally exist and can be seen and measured in CT images. Experience has shown that about 90% of patients have air pockets when imaged two days after balloon placement. The acceptance criterion is that the air pocket volume be less than or equal to 10% of the planning target volume. The purpose of this study is to quantify, with Monte Carlo calculations, the dose errors occurring at the interface of the air pocket in MammoSite® treatments, so that the dosimetric effects of the air pocket can be fully understood. Modern brachytherapy treatment planning systems typically consider patient anatomy as a homogeneous water medium and incorrectly model lateral and backscatter radiation during treatment delivery. Heterogeneities complicate the problem and may result in overdosage to the tissue located near the medium interface. This becomes a problem in MammoSite® brachytherapy when an air pocket appears during treatment. The resulting percentage dose difference near the air-tissue interface is hypothesized to be greater than 10% when comparing Monte Carlo N-Particle (version 5) with current treatment planning systems. The specific aims for this study are: (1) validate Monte Carlo N-Particle (version 5) source modeling; (2) develop a phantom; (3) calculate phantom doses with Monte Carlo N-Particle (version 5) and investigate dose differences between thermoluminescent dosimeter measurements, the treatment planning system, and Monte Carlo results; and (4) calculate dose differences for various treatment parameters. The results from thermoluminescent dosimeter phantom measurements prove that, with correct geometric and source models, the Monte Carlo method can be used to estimate homogeneous and heterogeneous doses in MammoSite® treatments. The resulting dose differences at various points of interest in the Monte Carlo calculations were presented and compared between the different calculation methods. The air pocket doses were found to be underestimated by the treatment planning system. It was concluded that, after correcting for the inverse square law, the underestimation error from the treatment planning system is less than ±2.0% and ±3.5% at the air pocket surface and the air pocket planning target volume, respectively, when compared with Monte Carlo N-Particle (version 5) results. If the skin surface is located close to the air pocket, the underestimation effect at the air pocket surface and the air pocket planning target volume becomes smaller, because the air outside the skin surface reduces the air pocket inhomogeneity effect. To maintain an appropriate skin dose within tolerance, the skin-surface criterion should be taken as the smallest thickness of breast tissue located between the air pocket and the skin surface; this thickness should be at least 5 mm. In conclusion, the air pocket outside the balloon had less than a 10% inhomogeneity effect in the situations studied. It is recommended that at least an inverse square correction be taken into consideration in order to relate clinical outcomes to the actual doses delivered to the air pocket and surrounding tissues.
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open-source file formats. These tools are aimed at bridging the gap between trusted, widely used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in computer-aided design for radiation transport code users in the nuclear field, in particular in the areas of core design and radiation analysis. (authors)
2014-03-27
The Monte Carlo/Ray Trace radiation method is utilized within spacecraft thermal energy balance calculations, in which spacecraft surfaces emit IR radiation in order to dissipate heat [8].
Schaefer, C; Jansen, A P J
2013-02-07
We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady-state reactors. We use a finite-difference upwinding scheme and a gap-tooth scheme to make efficient use of a limited number of kinetic Monte Carlo simulations. In general, the stochastic kinetic Monte Carlo results do not obey mass conservation, so unphysical accumulation of mass could occur in the reactor. We have therefore developed a mass-balance correction method based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations; it is applicable to any surface-catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation, in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interactions at high oxygen coverages. This reaction model is based on ab initio density functional theory calculations from the literature.
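The mass-balance correction described, projecting noisy kinetic Monte Carlo rate estimates onto the subspace that satisfies the stoichiometric constraints, reduces to a dot product when there is a single constraint row. The toy A → B example below is a sketch of that single-constraint case, not the authors' implementation.

```python
def mass_balance_correct(rates, s):
    """Minimal-norm least-squares correction of a rate vector so that the
    conservation constraint s . r = 0 holds exactly. For one constraint row,
    r' = r - s*(s.r)/(s.s), i.e. orthogonal projection onto the null space."""
    dot_sr = sum(si * ri for si, ri in zip(s, rates))
    dot_ss = sum(si * si for si in s)
    return [ri - si * dot_sr / dot_ss for si, ri in zip(s, rates)]

# Toy A -> B reactor: consumption of A should exactly balance production of B.
s = [1.0, 1.0]            # conservation row: r_A + r_B must equal 0
noisy = [-1.02, 0.95]     # stochastic KMC estimates violate the balance
fixed = mass_balance_correct(noisy, s)
print(fixed, sum(fixed))  # the corrected rates sum to zero
```

With several constraint rows the same projection uses the full stoichiometry matrix S, r' = r - S^T (S S^T)^(-1) S r, which is the non-singular linear system the abstract refers to.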
SABRINA: an interactive solid geometry modeling program for Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.
SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body-geometry or surface-geometry models and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort of constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, S. A. M.; Ansbacher, W.; Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6
2013-01-15
Purpose: Acuros external beam (Acuros XB) is a novel dose calculation algorithm implemented through the ECLIPSE treatment planning system. The algorithm finds a deterministic solution to the linear Boltzmann transport equation, the same equation commonly solved stochastically by Monte Carlo methods. This work is an evaluation of Acuros XB, by comparison with Monte Carlo, for dose calculation applications involving high-density materials. Existing non-Monte Carlo clinical dose calculation algorithms, such as the analytic anisotropic algorithm (AAA), do not accurately model dose perturbations due to increased electron scatter within high-density volumes. Methods: Acuros XB, AAA, and EGSnrc-based Monte Carlo are used to calculate dose distributions from 18 MV and 6 MV photon beams delivered to a cubic water phantom containing a rectangular high-density (4.0-8.0 g/cm³) volume at its center. The algorithms are also used to recalculate a clinical prostate treatment plan involving a unilateral hip prosthesis, originally evaluated using AAA. These results are compared graphically and numerically using gamma-index analysis. Radiochromic film measurements are presented to augment the Monte Carlo and Acuros XB dose perturbation data. Results: Using a 2%/1 mm gamma analysis, between 91.3% and 96.8% of Acuros XB dose voxels containing greater than 50% of the normalized dose were in agreement with Monte Carlo data for virtual phantoms involving 18 MV and 6 MV photons, stainless steel and titanium alloy implants, and on-axis and oblique field delivery. A similar gamma analysis of AAA against Monte Carlo data showed between 80.8% and 87.3% agreement. Comparing Acuros XB and AAA evaluations of a clinical prostate patient plan involving a unilateral hip prosthesis, Acuros XB showed good overall agreement with Monte Carlo, while AAA underestimated dose on the upstream medial surface of the prosthesis due to electron scatter from the high-density material. Film measurements support the dose perturbations demonstrated by the Monte Carlo and Acuros XB data. Conclusions: Acuros XB is shown to perform as well as Monte Carlo methods and better than existing clinical algorithms for dose calculations involving high-density volumes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nemchinsky, V.; Khrabry, A.
2018-02-01
Trajectories of a polarizable species (atoms or molecules) in the vicinity of a negatively charged nanoparticle (at a floating potential) are considered. The atoms are pulled into regions of strong electric field by polarization forces. The polarization increases the deposition rate of the atoms and molecules at the nanoparticle. The effect of the non-spherical shape of the nanoparticle is investigated by the Monte Carlo method. The shape of the non-spherical nanoparticle is approximated by an ellipsoid. The total deposition rate and its flux density distribution along the nanoparticle surface are calculated. It is shown that the flux density is not uniform along the surface; it is maximal at the nanoparticle tips.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harding, Lawrence B.; Georgievskii, Yuri; Klippenstein, Stephen J.
2017-06-08
Full-dimensional analytic potential energy surfaces based on CCSD(T)/cc-pVTZ calculations have been determined for 48 small combustion-related molecules. The analytic surfaces have been used in Diffusion Monte Carlo calculations of the anharmonic zero-point energies. The resulting anharmonicity corrections are compared to vibrational perturbation theory results based both on the same level of electronic structure theory and on lower-level electronic structure methods (B3LYP and MP2).
Monte Carlo sampling of Wigner functions and surface hopping quantum dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kube, Susanna; Lasser, Caroline; Weber, Marcus
2009-04-01
The article addresses the achievable accuracy of Monte Carlo sampling of Wigner functions in combination with a surface hopping algorithm for non-adiabatic quantum dynamics. The approximation of Wigner functions is realized by an adaptation of the Metropolis algorithm for real-valued functions with disconnected support. The integration necessary for computing values of the Wigner function uses importance sampling with a Gaussian weight function. The numerical experiments agree with theoretical considerations and show an error of 2-3%.
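For a state whose Wigner function is non-negative, such as the harmonic-oscillator ground state, Metropolis sampling of phase space is straightforward and illustrates the kind of sampling the abstract builds on. The sketch below is a generic illustration in natural units, not the authors' algorithm for functions with disconnected support; it recovers the zero-point energy hbar*omega/2 = 0.5.

```python
import math
import random

def metropolis_wigner_energy(n_steps=200000, step=1.0, seed=5):
    """Metropolis sampling of the (positive) harmonic-oscillator ground-state
    Wigner function W(q, p) ~ exp(-(q^2 + p^2)) with hbar = m = omega = 1,
    followed by the phase-space average of H = (p^2 + q^2)/2."""
    rng = random.Random(seed)
    q = p = 0.0
    logw = -(q * q + p * p)
    total = 0.0
    for _ in range(n_steps):
        # Symmetric random-walk proposal in (q, p).
        qn = q + rng.uniform(-step, step)
        pn = p + rng.uniform(-step, step)
        logw_n = -(qn * qn + pn * pn)
        if math.log(rng.random()) < logw_n - logw:  # Metropolis accept/reject
            q, p, logw = qn, pn, logw_n
        total += 0.5 * (q * q + p * p)
    return total / n_steps

e = metropolis_wigner_energy()
print(e)  # close to 0.5
```

When the Wigner function takes negative values, as for excited states, such a chain must instead sample |W| and carry the sign as a weight, which is where the accuracy questions studied in the article arise.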
Monte Carlo simulation to investigate the formation of molecular hydrogen and its deuterated forms
NASA Astrophysics Data System (ADS)
Sahu, Dipen; Das, Ankan; Majumdar, Liton; Chakrabarti, Sandip K.
2015-07-01
H2 is the most abundant interstellar species, and its deuterated forms (HD and D2) are also present in high abundance. The high abundance of these molecules can be explained by considering the chemistry that occurs on interstellar dust. Because of its simplicity, the rate equation method is widely used to study the formation of grain-surface species. However, because the recombination efficiency for the formation of any surface species depends strongly on various physical and chemical parameters, the Monte Carlo method is best suited to addressing the randomness of the processes. We perform Monte Carlo simulations to study the formation of H2, HD and D2 on interstellar ice. The adsorption energies of surface species are the key inputs for the formation of any species on interstellar dust, but the binding energies of deuterated species have yet to be determined with certainty. A zero-point energy correction exists between hydrogenated and deuterated species, which should be considered when modeling the chemistry on interstellar dust. Following some previous studies, we consider various sets of adsorption energies to investigate the formation of these species under diverse physical conditions. As expected, notable differences between the two approaches (the rate equation method and the Monte Carlo method) are observed for the production of these simple molecules on interstellar ice. We introduce two factors, Sf and β, to explain these discrepancies: Sf is a scaling factor that can be used to correlate the discrepancies between the rate equation and Monte Carlo methods, and β indicates the formation efficiency under various conditions, with higher values of β indicating lower production efficiency. We observe that β increases as the rate of accretion from the gas phase to the grain phase decreases.
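The discrepancy between the rate-equation and Monte Carlo treatments comes from small discrete populations: recombination requires at least two atoms on a grain, a constraint the mean-field rate equation ignores. A generic Gillespie-type sketch with purely illustrative (not astrochemical) rates shows the effect; the deterministic steady state of dN/dt = a - d*N - 2*r*N^2 for these rates is N ≈ 0.78, while the stochastic average differs because the pair term vanishes for N < 2.

```python
import math
import random

def gillespie_grain(a=1.0, d=0.5, r=0.5, t_end=5000.0, seed=11):
    """Stochastic (Gillespie) simulation of H atoms on one grain:
    accretion (rate a), desorption (d*N), and recombination to H2
    (r*N*(N-1), removing two atoms). Returns the time-averaged H
    population and the number of H2 molecules formed."""
    rng = random.Random(seed)
    n, t, h2, weighted = 0, 0.0, 0, 0.0
    while t < t_end:
        rates = [a, d * n, r * n * (n - 1)]
        total = sum(rates)
        dt = -math.log(rng.random()) / total
        weighted += n * min(dt, t_end - t)   # time-weighted population average
        t += dt
        u = rng.random() * total             # pick which event fired
        if u < rates[0]:
            n += 1                           # accretion
        elif u < rates[0] + rates[1]:
            n -= 1                           # desorption
        else:
            n -= 2                           # recombination: two H become one H2
            h2 += 1
    return weighted / t_end, h2

mean_n, h2 = gillespie_grain()
print(mean_n, h2)
```

The gap between the stochastic mean population and the mean-field value is exactly the kind of discrepancy the Sf scaling factor in the abstract is meant to capture.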
Increased dose near the skin due to electromagnetic surface beacon transponder.
Ahn, Kang-Hyun; Manger, Ryan; Halpern, Howard J; Aydogan, Bulent
2015-05-08
The purpose of this study was to evaluate the increased dose near the skin from an electromagnetic surface beacon transponder, which is used for localization and for tracking organ motion. The bolus effect due to the copper-coil surface beacon was evaluated with radiographic film measurements and Monte Carlo simulations. Various beam incidence angles were evaluated experimentally for both 6 MV and 18 MV. We performed simulations using the general-purpose Monte Carlo code MCNPX (Monte Carlo N-Particle) to supplement the experimental data. We modeled the surface beacon geometry using the actual mass of the glass vial and copper coil placed in its L-shaped polyethylene terephthalate tubing casing. Film dosimetry measured enhancement factors of 2.2 and 3.0 in the surface dose for normally incident 6 MV and 18 MV beams, respectively. Although the surface dose further increased with incidence angle, the relative contribution from the bolus effect was reduced at oblique incidence. The enhancement factors were 1.5 and 1.8 for 6 MV and 18 MV, respectively, at an incidence angle of 60°. Monte Carlo simulation confirmed the experimental results and indicated that the epidermal skin dose can reach approximately 50% of the dose at dmax at normal incidence. The overall effect could be acceptable considering that the skin dose enhancement is confined to a small area (~1 cm²) and can be further reduced by using an opposite beam technique. Further clinical studies are justified to weigh the dosimetric benefit against possible cosmetic effects of the surface beacon. One such clinical situation would be intact-breast radiation therapy, especially in large-breasted women.
Electrosorption of a modified electrode in the vicinity of phase transition: A Monte Carlo study
NASA Astrophysics Data System (ADS)
Gavilán Arriazu, E. M.; Pinto, O. A.
2018-03-01
We present a Monte Carlo study of the electrosorption of an electroactive species on a modified electrode. The surface of the electrode is modified by the irreversible adsorption of a non-electroactive species that blocks a percentage of the adsorption sites, generating an electrode with variable site connectivity. A second, electroactive species is adsorbed in the surface vacancies and can interact repulsively with itself. In particular, we are interested in analyzing the effect of the non-electroactive species near the critical regime, where the c(2 × 2) structure is formed. Lattice-gas models and Monte Carlo simulations in the grand canonical ensemble are used. The analysis is based on the study of voltammograms, order parameters, isotherms, and configurational entropy per site at several values of the energies and coverage degrees of the non-electroactive species.
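The kind of grand canonical lattice-gas Monte Carlo used above can be sketched as follows; the lattice size, blocking fraction, repulsion energy, and chemical potential are illustrative assumptions, not the paper's parameters:

```python
import math
import random

# Illustrative parameters (kT units), not the paper's values.
L = 16            # lattice side
EPS = 1.0         # nearest-neighbour repulsion between electroactive species
MU = 2.0          # electrochemical potential driving adsorption
BLOCK_FRAC = 0.1  # fraction of sites blocked by the non-electroactive species

def gcmc_coverage(steps=20000, seed=2):
    """Grand canonical Metropolis sampling of a lattice gas with
    irreversibly blocked sites; returns coverage of the free sites."""
    rng = random.Random(seed)
    blocked = [[rng.random() < BLOCK_FRAC for _ in range(L)] for _ in range(L)]
    occ = [[False] * L for _ in range(L)]

    def n_neigh(i, j):
        return sum(occ[(i + di) % L][(j + dj) % L]
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        if blocked[i][j]:
            continue  # the non-electroactive species never desorbs
        d_e = EPS * n_neigh(i, j) - MU   # cost of inserting at (i, j)
        if occ[i][j]:
            d_e = -d_e                   # removal reverses the sign
        if d_e <= 0.0 or rng.random() < math.exp(-d_e):
            occ[i][j] = not occ[i][j]
    free = sum(not b for row in blocked for b in row)
    return sum(o for row in occ for o in row) / free
```

At repulsion strong enough relative to temperature, a sketch like this orders into the c(2 × 2) checkerboard on the unblocked sublattice, which is the regime of interest in the study.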
A Lattice Kinetic Monte Carlo Solver for First-Principles Microkinetic Trend Studies
Hoffmann, Max J.; Bligaard, Thomas
2018-01-22
Here, mean-field microkinetic models in combination with Brønsted–Evans–Polanyi-like scaling relations have proven highly successful in identifying catalyst materials with good or promising reactivity and selectivity. Analysis of the microkinetic model by means of lattice kinetic Monte Carlo promises a faithful description of a range of atomistic features involving short-range ordering of species in the vicinity of an active site. In this paper, we use the “fruit fly” example reaction of CO oxidation on fcc(111) transition and coinage metals to motivate and develop a lattice kinetic Monte Carlo solver suitable for the numerically challenging case of vastly disparate rate constants. As a result, we show that for the case of infinitely fast diffusion and absence of adsorbate–adsorbate interactions it is, in fact, possible to match the predictions of the mean-field-theory method and the lattice kinetic Monte Carlo method. As a corollary, we conclude that lattice kinetic Monte Carlo simulations of surface chemical reactions are most likely to provide additional insight over mean-field simulations if diffusion limitations or adsorbate–adsorbate interactions have a significant influence on the mixing of the adsorbates.
Accurate Theoretical Predictions of the Properties of Energetic Materials
2008-09-18
decomposition, Monte Carlo, molecular dynamics, supercritical fluids, solvation and separation, quantum Monte Carlo, potential energy surfaces, RDX, TNAZ ... labs, who are contributing to the theoretical efforts, providing data for testing of the models, or aiding in the transition of the methods, models, and results to DoD applications. The major goals of the project are: • Models that describe phase transitions and chemical reactions in
NASA Astrophysics Data System (ADS)
Clements, Aspen R.; Berk, Brandon; Cooke, Ilsa R.; Garrod, Robin T.
2018-02-01
Using an off-lattice kinetic Monte Carlo model, we reproduce experimental laboratory trends in the density of amorphous solid water (ASW) for varied deposition angle, rate and surface temperature. Extrapolation of the model to conditions appropriate to protoplanetary disks and interstellar dark clouds indicates that these ices may be less porous than laboratory ices.
Monte Carlo simulation of EAS generated by 10^14 - 10^16 eV protons
NASA Technical Reports Server (NTRS)
Fenyves, E. J.; Yunn, B. C.; Stanev, T.
1985-01-01
Detailed Monte Carlo simulations of extensive air showers to be detected by the Homestake Surface Underground Telescope and other similar detectors located at sea level and mountain altitudes have been performed for primary energies of 10^14 to 10^16 eV. The results of these Monte Carlo calculations will provide an opportunity to compare the experimental data with different models for the composition and spectra of primaries and for the development of air showers. The results obtained for extensive air showers generated by 10^14 to 10^16 eV primary protons are reported.
Madurga, Sergio; Martín-Molina, Alberto; Vilaseca, Eudald; Mas, Francesc; Quesada-Pérez, Manuel
2007-06-21
The structure of the electric double layer in contact with discrete and continuously charged planar surfaces is studied within the framework of the primitive model through Monte Carlo simulations. Three different discretization models are considered together with the case of uniform distribution. The effect of discreteness is analyzed in terms of charge density profiles. For point surface groups, a complete equivalence with the situation of uniformly distributed charge is found if profiles are exclusively analyzed as a function of the distance to the charged surface. However, some differences are observed moving parallel to the surface. Significant discrepancies with approaches that do not account for discreteness are reported if charge sites of finite size placed on the surface are considered.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable graphics processing units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation rates on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation.
Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. 
To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in highperformance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. 
Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
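As a toy illustration of the variance reduction idea discussed in this session, the sketch below compares an analog transmission estimate through a purely absorbing slab with an implicit-capture (weighted) estimator; the attenuation coefficient and thickness are arbitrary placeholder values:

```python
import math
import random

# Toy slab-transmission problem: photons with attenuation coefficient MU
# cross a slab of thickness D. Values are arbitrary illustrative choices.
MU, D = 0.5, 2.0

def analog_transmission(n, rng):
    """Analog MC: sample a free path; score 1 if the photon crosses the slab."""
    hits = sum(rng.expovariate(MU) > D for _ in range(n))
    return hits / n

def implicit_capture_transmission(n, rng):
    """Implicit capture: never kill the photon; carry the survival
    probability exp(-MU*D) as a statistical weight. For this
    absorption-only toy the estimator has zero variance."""
    return sum(math.exp(-MU * D) for _ in range(n)) / n
```

Both estimators are unbiased for the true transmission exp(-MU·D), but the weighted one needs far fewer histories for the same precision, which is the essence of altering the probabilistic model without biasing the estimate.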
SABRINA: an interactive three-dimensional geometry-modeling program for MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T. III
SABRINA is a fully interactive three-dimensional geometry-modeling program for MCNP, a Los Alamos Monte Carlo code for neutron and photon transport. In SABRINA, a user constructs either body geometry or surface geometry models and debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis. 2 refs., 33 figs.
Monte Carlo simulations of ABC stacked kagome lattice films
NASA Astrophysics Data System (ADS)
Yerzhakov, H. V.; Plumer, M. L.; Whitehead, J. P.
2016-05-01
Properties of films of geometrically frustrated ABC stacked antiferromagnetic kagome layers are examined using Metropolis Monte Carlo simulations. The impact of having an easy-axis anisotropy on the surface layers and cubic anisotropy in the interior layers is explored. The spin structure at the surface is shown to be different from that of the bulk 3D fcc system, where surface axial anisotropy tends to align spins along the surface [1 1 1] normal axis. This alignment then propagates only weakly to the interior layers through exchange coupling. Results are shown for the specific heat, magnetization and sub-lattice order parameters for both surface and interior spins in three and six layer films as a function of increasing axial surface anisotropy. Relevance to the exchange bias phenomenon in IrMn3 films is discussed.
Monte Carlo simulation of wave sensing with a short pulse radar
NASA Technical Reports Server (NTRS)
Levine, D. M.; Davisson, L. D.; Kutz, R. L.
1977-01-01
A Monte Carlo simulation is used to study the ocean wave sensing potential of a radar which scatters short pulses at small off-nadir angles. In the simulation, realizations of a random surface are created commensurate with an assigned probability density and power spectrum. Then the signal scattered back to the radar is computed for each realization using a physical optics analysis which takes wavefront curvature and finite radar-to-surface distance into account. In the case of a Pierson-Moskowitz spectrum and a normally distributed surface, reasonable assumptions for a fully developed sea, it has been found that the cumulative distribution of time intervals between peaks in the scattered power provides a measure of surface roughness. This observation is supported by experiments.
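The first step of such a simulation, creating realizations of a random surface with an assigned power spectrum, can be sketched as a sum of cosines with random phases; the flat spectrum used below is a placeholder, not a Pierson-Moskowitz fit:

```python
import math
import random

def surface_realization(x_points, spectrum, rng):
    """One realization of a 1-D Gaussian-like random surface.
    spectrum: list of (wavenumber k, spectral density S(k)) pairs on a
    uniform k-grid. Each component gets amplitude sqrt(2*S(k)*dk) and a
    uniform random phase, so the realization has variance ~ sum(S*dk)."""
    dk = spectrum[1][0] - spectrum[0][0]
    comps = [(k, math.sqrt(2.0 * s * dk), rng.uniform(0.0, 2.0 * math.pi))
             for k, s in spectrum]
    return [sum(a * math.cos(k * x + ph) for k, a, ph in comps)
            for x in x_points]
```

Repeating this for many phase sets gives the ensemble of surfaces over which scattered-power statistics, such as the distribution of intervals between peaks, can be accumulated.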
NASA Astrophysics Data System (ADS)
Derwent, Richard G.; Parrish, David D.; Galbally, Ian E.; Stevenson, David S.; Doherty, Ruth M.; Naik, Vaishali; Young, Paul J.
2018-05-01
Recognising that global tropospheric ozone models have many uncertain input parameters, an attempt has been made to employ Monte Carlo sampling to quantify the uncertainties in model output that arise from global tropospheric ozone precursor emissions and from ozone production and destruction in a global Lagrangian chemistry-transport model. Ninety-eight quasi-randomly sampled Monte Carlo model runs were completed, and the uncertainties were quantified in the tropospheric burdens and lifetimes of ozone, carbon monoxide and methane, together with the surface distribution and seasonal cycle in ozone. The results have shown a satisfactory degree of convergence and provide a first estimate of the likely uncertainties in tropospheric ozone model outputs. There are likely to be diminishing returns in carrying out many more Monte Carlo runs in order to refine these outputs further. Uncertainties due to model formulation were separately addressed using the results from 14 Atmospheric Chemistry Coupled Climate Model Intercomparison Project (ACCMIP) chemistry-climate models. The 95% confidence ranges surrounding the ACCMIP model burdens and lifetimes for ozone, carbon monoxide and methane were somewhat smaller than for the Monte Carlo estimates. This reflects the situation where the ACCMIP models used harmonised emissions data and differed only in their meteorological data and model formulations, whereas a conscious effort was made to describe the uncertainties in the ozone precursor emissions and in the kinetic and photochemical data in the Monte Carlo runs. Attention was focussed on the model predictions of the ozone seasonal cycles at three marine boundary layer stations: Mace Head, Ireland; Trinidad Head, California; and Cape Grim, Tasmania. Despite comprehensively addressing the uncertainties due to global emissions and ozone sources and sinks, none of the Monte Carlo runs were able to generate seasonal cycles that matched the observations at all three MBL stations.
Although the observed seasonal cycles were found to fall within the confidence limits of the ACCMIP members, this was because the model seasonal cycles spanned extremely wide ranges and there was no single ACCMIP member that performed best for each station. Further work is required to examine the parameterisation of convective mixing in the models to see if this erodes the isolation of the marine boundary layer from the free troposphere and thus hides the models' real ability to reproduce ozone seasonal cycles over marine stations.
LANDSAT-D investigations in snow hydrology
NASA Technical Reports Server (NTRS)
Dozier, J.
1983-01-01
The atmospheric radiative transfer calculation program (ATRAD) and its supporting programs (setting up atmospheric profiles, making Mie tables, and building an exponential-sum-fitting table) were completed. More sophisticated treatment of aerosol scattering (including the angular phase function or asymmetry factor) and multichannel analysis of results from ATRAD are being developed. Some progress was made on a Monte Carlo program for examining two-dimensional effects, specifically a surface boundary condition that varies across a scene. The MONTE program combines ATRAD and the Monte Carlo method to produce an atmospheric point-spread function. Currently the procedure passes monochromatic tests and the results are reasonable.
NASA Astrophysics Data System (ADS)
Gugsa, Solomon A.; Davies, Angela
2005-08-01
Characterizing an aspheric micro lens is critical for understanding its performance and providing feedback to manufacturing. We describe a method to find the best-fit conic of an aspheric micro lens using least-squares minimization and Monte Carlo analysis. Our analysis is based on scanning white-light interferometry measurements, and we compare the standard rapid technique, where a single measurement is taken of the apex of the lens, to the more time-consuming stitching technique, where more of the surface area is measured. Both are corrected for tip/tilt based on a planar fit to the substrate. Four major parameters and their uncertainties are estimated from the measurement, and a chi-square minimization is carried out to determine the best-fit conic constant. The four parameters are the base radius of curvature, the aperture of the lens, the lens center, and the sag of the lens. A probability distribution is chosen for each of the four parameters based on the measurement uncertainties, and a Monte Carlo process is used to iterate the minimization. Eleven measurements were taken, and data are also chosen randomly from the group during the Monte Carlo simulation to capture the measurement repeatability. A distribution of best-fit conic constants results, where the mean is a good estimate of the best-fit conic and the distribution width represents the combined measurement uncertainty. We also compare the Monte Carlo process for the stitched and unstitched data. Our analysis allows us to analyze the residual surface error in terms of Zernike polynomials and determine uncertainty estimates for each coefficient.
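The fit-plus-Monte-Carlo procedure can be sketched as follows, here simplified to a single dispersed parameter (the base radius of curvature) and a grid search for the conic constant; all numerical values are hypothetical:

```python
import math
import random

# Hypothetical numbers: a microlens with base radius R_TRUE (mm), conic
# constant K_TRUE, and an assumed radius uncertainty SIGMA_R (mm).
R_TRUE, K_TRUE, SIGMA_R = 0.5, -0.8, 0.0005

def sag(r, R, K):
    """Sag of a conic surface at radial coordinate r."""
    return r * r / (R * (1.0 + math.sqrt(1.0 - (1.0 + K) * r * r / (R * R))))

def best_fit_k(radii, sags, R):
    """Chi-square minimization for K by coarse grid search, R held fixed."""
    return min((sum((sag(r, R, K) - z) ** 2 for r, z in zip(radii, sags)), K)
               for K in [k / 500.0 - 2.0 for k in range(2001)])[1]

def monte_carlo_k(n_trials=30, seed=3):
    """Redo the fit with R drawn from its uncertainty distribution; the
    spread of the resulting K values estimates the combined uncertainty."""
    rng = random.Random(seed)
    radii = [0.01 * i for i in range(1, 11)]
    sags = [sag(r, R_TRUE, K_TRUE) for r in radii]
    ks = [best_fit_k(radii, sags, rng.gauss(R_TRUE, SIGMA_R))
          for _ in range(n_trials)]
    return sum(ks) / len(ks), ks
```

The full analysis disperses all four parameters and resamples the measurement set as well; this single-parameter sketch only shows how the distribution of best-fit conic constants arises.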
NASA Technical Reports Server (NTRS)
Tucker, O. J.; Farrell, W. M.; Killen, R. M.; Hurley, D. M.
2018-01-01
Recently, the near-infrared observations of the OH veneer on the lunar surface by the Moon Mineralogy Mapper (M3) have been refined to constrain the OH content to 500-750 parts per million (ppm). The observations indicate diurnal variations in OH up to 200 ppm possibly linked to warmer surface temperatures at low latitude. We examine the M3 observations using a statistical mechanics approach to model the diffusion of implanted H in the lunar regolith. We present results from Monte Carlo simulations of the diffusion of implanted solar wind H atoms and the subsequently derived H and H2 exospheres.
NASA Astrophysics Data System (ADS)
Truong, Thanh N.; Stefanovich, Eugene V.
1997-05-01
We present a study of the micro-solvation of the Cl anion by water clusters of up to seven molecules using a perturbative Monte Carlo approach with a hybrid HF/MM potential. In this approach, perturbation theory is used to avoid performing full SCF calculations at every Monte Carlo step. In this study, the anion is treated quantum mechanically at the HF/6-31G* level of theory, while interactions between solvent waters are represented by the TIP3P force field. Analysis of the solvent-induced dipole moment of the ion indicates that the Cl anion resides most of the time on the surface of the clusters. The accuracy of the perturbative MC approach is also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez, Liliana; Slutzky, Claudia; Ferron, Julio
2005-06-15
The explosive features of highly cohesive materials growing over soft metal substrates can be suppressed, and layer-by-layer growth achieved, by controlling a slight alloying at the interface. By means of Monte Carlo and molecular dynamics simulations applied to the Co/Cu(111) system, we have demonstrated the stability reached by Co growing over an alloyed CoCu surface, and the feasibility of using hyperthermal ions to tailor this alloying.
The X-43A Six Degree of Freedom Monte Carlo Analysis
NASA Technical Reports Server (NTRS)
Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger
2008-01-01
This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A inflight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.
The X-43A Six Degree of Freedom Monte Carlo Analysis
NASA Technical Reports Server (NTRS)
Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger; Richard, Michael
2007-01-01
This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.
NASA Astrophysics Data System (ADS)
Bottaini, C.; Mirão, J.; Figuereido, M.; Candeias, A.; Brunetti, A.; Schiavon, N.
2015-01-01
Energy-dispersive X-ray fluorescence (EDXRF) is a well-known technique for rapid, non-destructive, in situ qualitative and quantitative elemental analysis of archaeological artifacts. In this study, EDXRF and realistic Monte Carlo simulation using the X-ray Monte Carlo (XRMC) code package have been combined to characterize a Cu-based bowl from the Iron Age burial of Fareleira 3 (Southern Portugal). The artifact displays a multilayered structure made up of three distinct layers: a) the alloy substrate; b) a green oxidized corrosion patina; and c) a brownish carbonate soil-derived crust. To assess the reliability of Monte Carlo simulation in reproducing the composition of the bulk metal of the object without resorting to potentially damaging removal of the patina and crust, portable EDXRF analysis was performed on cleaned and patina/crust-coated areas of the artifact. The patina was characterized by micro X-ray diffractometry (μXRD) and back-scattered scanning electron microscopy with energy-dispersive spectroscopy (BSEM + EDS). Results indicate that the EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered, whereas in areas where the patina + crust surface coating is too thick, X-rays from the alloy substrate are not able to exit the sample.
Self-evolving atomistic kinetic Monte Carlo simulations of defects in materials
Xu, Haixuan; Beland, Laurent K.; Stoller, Roger E.; ...
2015-01-29
The recent development of on-the-fly atomistic kinetic Monte Carlo methods has led to increased attention to these methods and their capabilities and applications. In this review, the framework and current status of Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC) are discussed. SEAKMC focuses on defect interaction and evolution with atomistic detail, without assuming potential defect migration/interaction mechanisms and energies. The strengths and limitations of using an active volume, the key concept introduced in SEAKMC, are discussed. Potential criteria for characterizing an active volume are discussed, and the influence of active volume size on saddle-point energies is illustrated. A procedure starting with a small active volume followed by larger active volumes was found to possess higher efficiency. Applications of SEAKMC, ranging from point defect diffusion, to complex interstitial cluster evolution, to helium interaction with tungsten surfaces, are summarized. A comparison of SEAKMC with molecular dynamics and conventional object kinetic Monte Carlo is presented. Overall, SEAKMC is found to be complementary to conventional molecular dynamics, especially when the harmonic approximation of transition state theory is accurate. However, it is capable of reaching longer time scales than molecular dynamics, and it can be used to systematically increase the accuracy of other methods such as object kinetic Monte Carlo. Finally, the challenges and potential development directions are outlined.
NASA Astrophysics Data System (ADS)
Urbic, T.; Holovko, M. F.
2011-10-01
An associative version of Henderson-Abraham-Barker theory is applied to the study of the Mercedes-Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations satisfactorily reproduce the simulation data. At lower temperatures, the soft mean spherical approximation gives good agreement at low and at high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between water and the hydrophobic surface was also demonstrated and studied.
Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories
NASA Technical Reports Server (NTRS)
Olds, John; Way, David
2001-01-01
Recently, strong evidence of liquid water under the surface of Mars and a meteorite that might contain ancient microbes have renewed interest in Mars exploration. With this renewed interest, NASA plans to send spacecraft to Mars approximately every 26 months. These future spacecraft will return higher-resolution images, make precision landings, engage in longer-ranging surface maneuvers, and even return Martian soil and rock samples to Earth. Future robotic missions and any human missions to Mars will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Potential sources of water and other interesting geographic features are often located near hazards, such as within craters or along canyon walls. In order for more accurate landings to be made, spacecraft entering the Martian atmosphere need to use lift to actively control the entry. This active guidance results in much smaller landing footprints. Planning for these missions will depend heavily on Monte Carlo analysis. Monte Carlo trajectory simulations have been used with a high degree of success in recent planetary exploration missions. These analyses ascertain the impact of off-nominal conditions during a flight and account for uncertainty. Uncertainties generally stem from limitations in manufacturing tolerances, measurement capabilities, analysis accuracies, and environmental unknowns. Thousands of off-nominal trajectories are simulated by randomly dispersing uncertainty variables and collecting statistics on forecast variables. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem, beginning with the input uncertainties and proceeding to the forecast outputs. It lacks a mechanism to affect or alter the uncertainties based on the forecast results.
If the results are unacceptable, the current practice is to use an iterative, trial-and-error approach to reconcile discrepancies. Therefore, an improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively.
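The forward-driven dispersion analysis described above can be illustrated with a toy example; the forecast model, sensitivities, and uncertainty magnitudes below are invented for illustration, not mission values:

```python
import random
import statistics

def landing_range_km(density_mult, angle_err_deg, nominal=700.0):
    """Hypothetical linearized forecast model: landing range shifts with
    atmospheric-density and entry-flight-path-angle dispersions."""
    return nominal - 120.0 * (density_mult - 1.0) + 35.0 * angle_err_deg

def monte_carlo(n=5000, seed=42):
    """Disperse the uncertainty variables randomly, run the forecast model
    for each sample, and collect statistics on the forecast variable."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        density = rng.gauss(1.0, 0.05)   # +/-5% (1-sigma) density uncertainty
        angle = rng.gauss(0.0, 0.2)      # 0.2 deg (1-sigma) entry-angle error
        samples.append(landing_range_km(density, angle))
    return statistics.mean(samples), statistics.stdev(samples)

mean_km, sigma_km = monte_carlo()
```

The footprint statistic (here the standard deviation of landing range) is an output only; as the abstract notes, nothing in this loop feeds the forecast back into the assumed input dispersions, which is exactly the limitation the proposed reverse approach addresses.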
NASA Technical Reports Server (NTRS)
Gayda, J.; Srolovitz, D. J.
1989-01-01
This paper presents a specialized microstructural lattice model, MCFET (Monte Carlo finite element technique), which simulates microstructural evolution in materials in which strain energy has an important role in determining morphology. The model is capable of accounting for externally applied stress, surface tension, misfit, elastic inhomogeneity, elastic anisotropy, and arbitrary temperatures. The MCFET analysis was found to compare well with the results of analytical calculations of the equilibrium morphologies of isolated particles in an infinite matrix.
NASA Technical Reports Server (NTRS)
Holms, A. G.
1974-01-01
Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudo random normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
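The simulated-experiment loop described above (add pseudo-random normal errors to population values, fit a model, compare predictions with the true population) can be sketched as follows; the population model, design points, and noise level are hypothetical:

```python
import math
import random

X = [i / 10 for i in range(11)]           # design points
TRUE = [2.0 + 3.0 * x for x in X]         # population (true response surface)

def fit_line(xs, ys):
    """Closed-form ordinary least squares for y = b0 + b1*x."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
    return ybar - b1 * xbar, b1

def mc_prediction_rmse(n_experiments=2000, noise=0.5, seed=7):
    """Monte Carlo study: simulate many experiments by perturbing the
    population values with normal errors, fit the model each time, and
    measure prediction error against the known population values."""
    rng = random.Random(seed)
    errs = []
    for _ in range(n_experiments):
        ys = [t + rng.gauss(0.0, noise) for t in TRUE]  # simulated experiment
        b0, b1 = fit_line(X, ys)
        errs.extend((b0 + b1 * x - t) ** 2 for x, t in zip(X, TRUE))
    return math.sqrt(sum(errs) / len(errs))
```

Because the population is known exactly, the Monte Carlo loop can score any fitting or term-deletion strategy directly by its prediction error, which is the basis for identifying near-optimal deletion strategies.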
Reconstruction of Human Monte Carlo Geometry from Segmented Images
NASA Astrophysics Data System (ADS)
Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican
2014-06-01
Human computational phantoms have been used extensively for scientific experimental analysis and experimental simulation. This article presents a method for human geometry reconstruction from a series of segmented images of a Chinese visible human dataset. The phantom geometry can describe the detailed structure of an organ and can be converted into the input file of Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN and comprising about 28.8 billion voxels, has been established by the FDS Team. For convenient processing, different organs in the images were segmented with different RGB colors and the voxels were assigned positions in the dataset. For refinement, the positions were first sampled. Although the large number of voxels inside an organ are three-dimensionally adjacent, there was previously no thorough merging method to reduce the number of cells needed to describe an organ. In this study, the voxels on the organ surface were taken into account in the merging, which produces fewer cells for the organs. At the same time, an index-based sorting algorithm was put forward to increase the merging speed. Finally, Rad-HUMAN, which includes a total of 46 organs and tissues, was described by cuboids in the Monte Carlo geometry for the simulation. The Monte Carlo geometry was constructed directly from the segmented images and the voxels were merged exhaustively. Each organ geometry model was constructed without ambiguity or self-crossing, and its geometry information can represent the accurate appearance and precise interior structure of the organs. The constructed geometry, which largely retains the original shape of the organs, can easily be described in the input files of different Monte Carlo codes such as MCNP. Its universality and high performance were experimentally verified.
NASA Astrophysics Data System (ADS)
Castells, Victoria; Van Tassel, Paul R.
2005-02-01
Proteins often undergo changes in internal conformation upon interacting with a surface. We investigate the thermodynamics of surface induced conformational change in a lattice model protein using a multicanonical Monte Carlo method. The protein is a linear heteropolymer of 27 segments (of types A and B) confined to a cubic lattice. The segmental order and nearest neighbor contact energies are chosen to yield, in the absence of an adsorbing surface, a unique 3×3×3 folded structure. The surface is a plane of sites interacting either equally with A and B segments (equal affinity surface) or more strongly with the A segments (A affinity surface). We use a multicanonical Monte Carlo algorithm, with configuration bias and jump walking moves, featuring an iteratively updated sampling function that converges to the reciprocal of the density of states 1/Ω(E), E being the potential energy. We find inflection points in the configurational entropy, S(E)=klnΩ(E), for all but a strongly adsorbing equal affinity surface, indicating the presence of free energy barriers to transition. When protein-surface interactions are weak, the free energy profiles F(E)=E-TS(E) qualitatively resemble those of a protein in the absence of a surface: a free energy barrier separates a folded, lowest energy state from globular, higher energy states. The surface acts in this case to stabilize the globular states relative to the folded state. When the protein surface interactions are stronger, the situation differs markedly: the folded state no longer occurs at the lowest energy and free energy barriers may be absent altogether.
Role of ion hydration for the differential capacitance of an electric double layer.
Caetano, Daniel L Z; Bossa, Guilherme V; de Oliveira, Vinicius M; Brown, Matthew A; de Carvalho, Sidney J; May, Sylvio
2016-10-12
The influence of soft, hydration-mediated ion-ion and ion-surface interactions on the differential capacitance of an electric double layer is investigated using Monte Carlo simulations and compared to various mean-field models. We focus on a planar electrode surface at physiological concentration of monovalent ions in a uniform dielectric background. Hydration-mediated interactions are modeled on the basis of Yukawa potentials that add to the Coulomb and excluded volume interactions between ions. We present a mean-field model that includes hydration-mediated anion-anion, anion-cation, and cation-cation interactions of arbitrary strengths. In addition, finite ion sizes are accounted for through excluded volume interactions, described either on the basis of the Carnahan-Starling equation of state or using a lattice gas model. Both our Monte Carlo simulations and mean-field approaches predict a characteristic double-peak (the so-called camel shape) of the differential capacitance; its decrease reflects the packing of the counterions near the electrode surface. The presence of hydration-mediated ion-surface repulsion causes a thin charge-depleted region close to the surface, which is reminiscent of a Stern layer. We analyze the interplay between excluded volume and hydration-mediated interactions on the differential capacitance and demonstrate that for small surface charge density our mean-field model based on the Carnahan-Starling equation is able to capture the Monte Carlo simulation results. In contrast, for large surface charge density the mean-field approach based on the lattice gas model is preferable.
Discrete range clustering using Monte Carlo methods
NASA Technical Reports Server (NTRS)
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low-altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface-fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range estimates only at sparse locations, and therefore surface-fitting techniques to fill the gaps in the range measurements are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets that correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer a significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present results of applying these algorithms to a laboratory image sequence and a helicopter flight sequence.
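A minimal simulated-annealing clustering of scalar range values shows the flavor of one such Monte Carlo approach; the cost function, cooling schedule, and data below are illustrative assumptions, not the paper's algorithm:

```python
import math
import random

def anneal_cluster(points, k=2, steps=20000, t0=1.0, seed=3):
    """Simulated-annealing clustering of scalar range points: propose random
    label reassignments and accept them with the Metropolis rule, cooling
    the temperature linearly, to minimize within-cluster squared error."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in points]

    def cost(lab):
        total = 0.0
        for c in range(k):
            members = [p for p, l in zip(points, lab) if l == c]
            if members:
                mu = sum(members) / len(members)
                total += sum((p - mu) ** 2 for p in members)
        return total

    e = cost(labels)
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9   # linear cooling schedule
        i = rng.randrange(len(points))
        old = labels[i]
        labels[i] = rng.randrange(k)
        e_new = cost(labels)
        if e_new > e and rng.random() >= math.exp((e - e_new) / t):
            labels[i] = old                    # reject uphill move
        else:
            e = e_new
    return labels

# two obvious depth groups: near obstacles (~10 m) and far terrain (~100 m)
ranges = [9.8, 10.1, 10.3, 99.5, 100.2, 101.0]
labels = anneal_cluster(ranges)
```

Setting the temperature to (near) zero throughout recovers the "basic" greedy Monte Carlo method the paper compares against; the annealing schedule is what allows occasional uphill moves out of poor groupings.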
Gianfrancesco, Anthony G; Tselev, Alexander; Baddorf, Arthur P; Kalinin, Sergei V; Vasudevan, Rama K
2015-11-13
The controlled growth of epitaxial films of complex oxides requires an atomistic understanding of key parameters determining final film morphology, such as termination dependence on adatom diffusion, and height of the Ehrlich-Schwoebel (ES) barrier. Here, through an in situ scanning tunneling microscopy study of mixed-terminated La5/8Ca3/8MnO3 (LCMO) films, we image adatoms and observe pile-up at island edges. Image analysis allows determination of the population of adatoms at the edge of islands and fractions on A-site and B-site terminations. A simple Monte-Carlo model, simulating the random walk of adatoms on a sinusoidal potential landscape using Boltzmann statistics is used to reproduce the experimental data, and provides an estimate of the ES barrier as ∼0.18 ± 0.04 eV at T = 1023 K, similar to those of metal adatoms on metallic surfaces. These studies highlight the utility of in situ imaging, in combination with basic Monte-Carlo methods, in elucidating the factors which control the final film growth in complex oxides.
NASA Astrophysics Data System (ADS)
Gianfrancesco, Anthony G.; Tselev, Alexander; Baddorf, Arthur P.; Kalinin, Sergei V.; Vasudevan, Rama K.
2015-11-01
The controlled growth of epitaxial films of complex oxides requires an atomistic understanding of key parameters determining final film morphology, such as termination dependence on adatom diffusion, and height of the Ehrlich-Schwoebel (ES) barrier. Here, through an in situ scanning tunneling microscopy study of mixed-terminated La5/8Ca3/8MnO3 (LCMO) films, we image adatoms and observe pile-up at island edges. Image analysis allows determination of the population of adatoms at the edge of islands and fractions on A-site and B-site terminations. A simple Monte-Carlo model, simulating the random walk of adatoms on a sinusoidal potential landscape using Boltzmann statistics is used to reproduce the experimental data, and provides an estimate of the ES barrier as ˜0.18 ± 0.04 eV at T = 1023 K, similar to those of metal adatoms on metallic surfaces. These studies highlight the utility of in situ imaging, in combination with basic Monte-Carlo methods, in elucidating the factors which control the final film growth in complex oxides.
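The adatom random walk on a sinusoidal potential landscape with Boltzmann statistics can be sketched in one dimension; only the barrier height (0.18 eV) and temperature (1023 K) come from the abstract, while the potential shape, step size, and step counts are assumptions:

```python
import math
import random

KB = 8.617e-5    # Boltzmann constant, eV/K
T = 1023.0       # growth temperature from the study, K
BARRIER = 0.18   # eV, corrugation on the scale of the estimated ES barrier

def hop_probability(x_from, x_to):
    """Boltzmann (Metropolis) acceptance for a trial hop on a sinusoidal
    potential landscape with period 1 (lattice units)."""
    v = lambda x: 0.5 * BARRIER * (1.0 - math.cos(2.0 * math.pi * x))
    d_e = v(x_to) - v(x_from)
    return min(1.0, math.exp(-d_e / (KB * T)))

def random_walk(n_steps=10000, step=0.1, seed=11):
    """Random walk of a single adatom: propose small hops left or right and
    accept them with the Boltzmann probability above."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(n_steps):
        trial = x + rng.choice((-step, step))
        if rng.random() < hop_probability(x, trial):
            x = trial
    return x
```

In the paper's setting, many such walkers on a 2D landscape build up the adatom population statistics at island edges; comparing those statistics to the imaged pile-up is what yields the barrier estimate.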
A Descriptive Guide to Trade Space Analysis
2015-09-01
…response surface equations (RSEs) as surrogate models. It uses the RSEs with Monte Carlo simulation to quantitatively explore changes across the surfaces to
NASA Astrophysics Data System (ADS)
Bencheikh, Mohamed; Maghnouj, Abdelmajid; Tajmouati, Jaouad
2017-11-01
The Monte Carlo calculation method is considered the most accurate method for dose calculation in radiotherapy and for beam characterization. In this study, the Varian Clinac 2100 medical linear accelerator was modelled with and without the flattening filter (FF). The objective was to determine the flattening filter's impact on particle energy properties at the phantom surface in terms of energy fluence, mean energy, and energy fluence distribution. The Monte Carlo codes used in this study were BEAMnrc for simulating the linac head, DOSXYZnrc for simulating the absorbed dose in a water phantom, and BEAMDP for extracting energy properties. The field size was 10 × 10 cm2, the simulated photon beam energy was 6 MV, and the SSD was 100 cm. The Monte Carlo geometry was validated by a gamma index acceptance rate of 99% in PDD and 98% in dose profiles; the gamma criteria were 3% for dose difference and 3 mm for distance to agreement. Without the FF, the energy properties changed as follows: the electron contribution increased by more than 300% in energy fluence, almost 14% in mean energy, and 1900% in energy fluence distribution, while the photon contribution increased by 50% in energy fluence, almost 18% in mean energy, and almost 35% in energy fluence distribution. Removing the flattening filter increases the electron contamination energy relative to the photon energy; this study can contribute to the development of flattening-filter-free configurations in future linacs.
Cellular dosimetry calculations for Strontium-90 using Monte Carlo code PENELOPE.
Hocine, Nora; Farlay, Delphine; Boivin, Georges; Franck, Didier; Agarande, Michelle
2014-11-01
To improve risk assessments associated with chronic exposure to Strontium-90 (Sr-90), for both the environment and human health, it is necessary to know the energy distribution in specific cells or tissue. Monte Carlo (MC) simulation codes are extremely useful tools for calculating deposited energy. The present work focused on the validation of the MC code PENetration and Energy LOss of Positrons and Electrons (PENELOPE) and the assessment of the dose distribution to bone marrow cells from a point Sr-90 source localized within the cortical bone. S-values (absorbed dose per unit cumulated activity) were calculated using Monte Carlo simulations with PENELOPE and Monte Carlo N-Particle eXtended (MCNPX). The cytoplasm, nucleus, cell surface, mouse femur bone, and Sr-90 radiation source were simulated. Cells are assumed to be spherical, with the radii of the cell and cell nucleus ranging from 2 to 10 μm. The Sr-90 source is assumed to be uniformly distributed in the cell nucleus, cytoplasm, and cell surface. The S-values calculated with PENELOPE agreed very well with the MCNPX results and the Medical Internal Radiation Dose (MIRD) values, with relative deviations of less than 4.5%. The dose distribution to mouse bone marrow cells showed that the cells localized near the cortical part received the maximum dose. The MC code PENELOPE may prove useful for cellular dosimetry involving radiation transport through materials other than water, or for complex distributions of radionuclides and geometries.
A coarse-grained Monte Carlo approach to diffusion processes in metallic nanoparticles
NASA Astrophysics Data System (ADS)
Hauser, Andreas W.; Schnedlitz, Martin; Ernst, Wolfgang E.
2017-06-01
A kinetic Monte Carlo approach on a coarse-grained lattice is developed for the simulation of surface diffusion processes of Ni, Pd and Au structures with diameters in the range of a few nanometers. Intensity information obtained via standard two-dimensional transmission electron microscopy imaging techniques is used to create three-dimensional structure models as input for a cellular automaton. A series of update rules based on reaction kinetics is defined to allow for a stepwise evolution in time with the aim to simulate surface diffusion phenomena such as Rayleigh breakup and surface wetting. The material flow, in our case represented by the hopping of discrete portions of metal on a given grid, is driven by the attempt to minimize the surface energy, which can be achieved by maximizing the number of filled neighbor cells.
Urbic, T; Holovko, M F
2011-10-07
An associative version of Henderson-Abraham-Barker theory is applied to the study of the Mercedes-Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations satisfactorily reproduce the simulation data. At lower temperatures, the soft mean spherical approximation gives good agreement at low and at high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between water and the hydrophobic surface was also demonstrated and studied. © 2011 American Institute of Physics
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program treats geometric regions bounded by quadratic and quadric surfaces, with multiple radiation sources that have specified space, angle, and energy dependence. Using importance sampling, the program calculates the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum-weight shield configuration meeting a specified dose-rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
Summing Feynman graphs by Monte Carlo: Planar ϕ3-theory and dynamically triangulated random surfaces
NASA Astrophysics Data System (ADS)
Boulatov, D. V.; Kazakov, V. A.
1988-12-01
New combinatorial identities are suggested relating the ratio of (n - 1)th and nth orders of (planar) perturbation expansion for any quantity to some average over the ensemble of all planar graphs of the nth order. These identities are used for Monte Carlo calculation of critical exponents γstr (string susceptibility) in planar ϕ3-theory and in the dynamically triangulated random surface (DTRS) model near the convergence circle for various dimensions. In the solvable case D = 1 the exact critical properties of the theory are reproduced numerically.
Modeling the migration of platinum nanoparticles on surfaces using a kinetic Monte Carlo approach
Li, Lin; Plessow, Philipp N.; Rieger, Michael; ...
2017-02-15
We propose a kinetic Monte Carlo (kMC) model for simulating the movement of platinum particles on supports, based on atom-by-atom diffusion on the surface of the particle. The proposed model is able to reproduce equilibrium cluster shapes predicted using the Wulff construction. The diffusivity of platinum particles was simulated both purely based on random motion and assisted by an external field that causes a drift velocity. The overall particle diffusivity increases with temperature; however, the extracted activation barrier appears to be temperature independent. Additionally, this barrier was found to increase with particle size, as well as with the adhesion between the particle and the support.
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
(U) Introduction to Monte Carlo Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hungerford, Aimee L.
2017-03-20
Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
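In the cook-book spirit of the abstract, the mechanics of an analog Monte Carlo transport loop can be shown for a 1D slab; the cross sections, slab thickness, and 1D scattering model below are arbitrary illustrative choices:

```python
import math
import random

def transmit_fraction(thickness, sigma_t, absorb_prob, n=20000, seed=5):
    """Analog Monte Carlo transport through a 1D slab: sample free paths
    from the exponential distribution exp(-sigma_t * s), and at each
    collision either absorb the particle (with probability absorb_prob)
    or scatter it isotropically (in 1D: flip direction at random)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n):
        x, mu = 0.0, 1.0                          # start at left face, moving right
        while True:
            s = -math.log(rng.random()) / sigma_t  # sampled free path
            x += mu * s
            if x >= thickness:
                transmitted += 1                   # escaped through the right face
                break
            if x <= 0.0:
                break                              # leaked back out the left face
            if rng.random() < absorb_prob:
                break                              # absorbed at the collision site
            mu = rng.choice((-1.0, 1.0))           # isotropic 1D scatter
    return transmitted / n

# pure-absorber check: transmission should approach exp(-sigma_t * thickness)
frac = transmit_fraction(thickness=1.0, sigma_t=2.0, absorb_prob=1.0)
```

Each term of the transport equation maps onto one mechanical step of the loop (streaming, collision, absorption, scattering), which is the "cook book" correspondence the report describes.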
Monte Carlo Modeling of VLWIR HgCdTe Interdigitated Pixel Response
NASA Astrophysics Data System (ADS)
D'Souza, A. I.; Stapelbroek, M. G.; Wijewarnasuriya, P. S.
2010-07-01
Increasing very long-wave infrared (VLWIR, λc ≈ 15 μm) pixel operability was approached by subdividing each pixel into four interdigitated subpixels. High response is maintained across the pixel, even if one or two interdigitated subpixels are deselected (turned off), because interdigitation ensures that the preponderance of minority carriers photogenerated in the pixel are collected by the selected subpixels. Monte Carlo modeling of the photoresponse of the interdigitated subpixel simulates minority-carrier diffusion from carrier creation to recombination. Each carrier, generated at an appropriately weighted random location, is assigned an exponentially distributed random lifetime τi, where ⟨τi⟩ is the bulk minority-carrier lifetime. The minority carrier is allowed to diffuse for a short time dτ, and the fate of the carrier is decided from its present position and the boundary conditions, i.e., whether the carrier is absorbed in a junction, recombined at a surface, reflected from a surface, or recombined in the bulk because it lived out its designated lifetime. If nothing happens, the process is repeated until one of the boundary conditions is attained; the procedure is then repeated for every minority carrier launched. For each minority carrier launched, the original location and the boundary condition at fatality are recorded. An example of the results from the Monte Carlo modeling is that, for a 20-μm diffusion length, the calculated quantum efficiency (QE) changed from 85% with no subpixels deselected, to 78% with one subpixel deselected, 67% with two subpixels deselected, and 48% with three subpixels deselected. Demonstration of the interdigitated pixel concept and verification of the Monte Carlo modeling utilized λc(60 K) ≈ 15 μm HgCdTe pixels in a 96 × 96 array format.
The measured collection efficiency for one, two, and three subelements selected, divided by the collection efficiency for all four subelements selected, matched that calculated using Monte Carlo modeling.
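A one-dimensional sketch of the minority-carrier random walk described above follows; the geometry (junction at one face, recombining back surface at the other), the uniform launch distribution, and all numerical values are simplified assumptions, not the paper's device model:

```python
import math
import random

def quantum_efficiency(width_um=10.0, diff_len_um=20.0, n=4000, seed=9):
    """1D random-walk sketch of minority-carrier Monte Carlo: each carrier
    gets an exponentially distributed lifetime, diffuses in short time steps,
    and is either collected at the junction (x = 0), lost at the back
    surface (x = width), or recombines in the bulk when its lifetime expires."""
    rng = random.Random(seed)
    tau = 1.0                           # bulk lifetime (arbitrary time units)
    d = diff_len_um ** 2 / tau          # diffusivity from D = L^2 / tau
    dt = 0.001 * tau                    # short diffusion time step
    step = math.sqrt(2.0 * d * dt)      # 1D diffusion step length
    collected = 0
    for _ in range(n):
        x = rng.random() * width_um     # launch location (uniform here)
        life = rng.expovariate(1.0 / tau)
        t = 0.0
        while t < life:
            x += rng.choice((-step, step))
            t += dt
            if x <= 0.0:                # absorbed in the junction: collected
                collected += 1
                break
            if x >= width_um:           # recombined at the back surface: lost
                break
    return collected / n
```

With a diffusion length twice the layer width, nearly every carrier reaches a boundary before its lifetime expires, so the collected fraction is set almost entirely by the launch geometry; tallying the launch location against the fate of each carrier, as the paper does, maps out the spatial response.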
Nagasaka, Masanari; Kondoh, Hiroshi; Nakai, Ikuyo; Ohta, Toshiaki
2007-01-28
The dynamics of adsorbate structures during CO oxidation on Pt(111) surfaces and its effects on the reaction were studied by the dynamic Monte Carlo method including lateral interactions of adsorbates. The lateral interaction energies between adsorbed species were calculated by the density functional theory method. Dynamic Monte Carlo simulations were performed for the oxidation reaction over a mesoscopic scale, where the experimentally determined activation energies of elementary paths were altered by the calculated lateral interaction energies. The simulated results reproduced the characteristics of the microscopic and mesoscopic scale adsorbate structures formed during the reaction, and revealed that the complicated reaction kinetics is comprehensively explained by a single reaction path affected by the surrounding adsorbates. We also propose from the simulations that weakly adsorbed CO molecules at domain boundaries promote the island-periphery specific reaction.
Prokhorov, Alexander; Prokhorova, Nina I
2012-11-20
We applied the bidirectional reflectance distribution function (BRDF) model consisting of diffuse, quasi-specular, and glossy components to the Monte Carlo modeling of spectral effective emissivities for nonisothermal cavities. A method for extension of a monochromatic three-component (3C) BRDF model to a continuous spectral range is proposed. The initial data for this method are the BRDFs measured in the plane of incidence at a single wavelength and several incidence angles and directional-hemispherical reflectance measured at one incidence angle within a finite spectral range. We proposed the Monte Carlo algorithm for calculation of spectral effective emissivities for nonisothermal cavities whose internal surface is described by the wavelength-dependent 3C BRDF model. The results obtained for a cylindroconical nonisothermal cavity are discussed and compared with results obtained using the conventional specular-diffuse model.
Study of optical and electronic properties of nickel from reflection electron energy loss spectra
NASA Astrophysics Data System (ADS)
Xu, H.; Yang, L. H.; Da, B.; Tóth, J.; Tőkési, K.; Ding, Z. J.
2017-09-01
We use the classical Monte Carlo transport model of electrons moving near the surface and inside solids to reproduce the measured reflection electron energy-loss spectroscopy (REELS) spectra. With the combination of the classical transport model and the Markov chain Monte Carlo (MCMC) sampling of oscillator parameters the so-called reverse Monte Carlo (RMC) method was developed, and used to obtain optical constants of Ni in this work. A systematic study of the electronic and optical properties of Ni has been performed in an energy loss range of 0-200 eV from the measured REELS spectra at primary energies of 1000 eV, 2000 eV and 3000 eV. The reliability of our method was tested by comparing our results with the previous data. Moreover, the accuracy of our optical data has been confirmed by applying oscillator strength-sum rule and perfect-screening-sum rule.
Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians
NASA Astrophysics Data System (ADS)
Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan
2018-02-01
Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and for which destructive interference is thus not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as those for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.
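For readers unfamiliar with diffusion Monte Carlo, a bare-bones unguided DMC for the 1D harmonic oscillator (not the paper's local Hamiltonians) illustrates the walker-diffusion-and-branching mechanics; the population-control scheme and all parameters are simple illustrative choices:

```python
import math
import random

def dmc_ground_energy(n_walkers=400, n_steps=2000, dt=0.01, seed=13):
    """Unguided diffusion Monte Carlo for V(x) = x^2 / 2 (hbar = m = omega = 1):
    walkers diffuse freely, branch with weight exp(-(V - E_ref) dt), and E_ref
    is nudged to hold the population steady. The ground-state energy is 0.5."""
    rng = random.Random(seed)
    walkers = [0.0] * n_walkers
    e_ref = 0.5
    energies = []
    for step in range(n_steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))            # free diffusion
            w = math.exp(-(0.5 * x * x - e_ref) * dt)     # branching weight
            copies = min(int(w + rng.random()), 3)        # stochastic rounding, capped
            new.extend([x] * copies)
        walkers = new or [0.0]
        e_ref += 0.5 * math.log(n_walkers / len(walkers))  # damped population control
        if step >= n_steps // 2:                           # discard equilibration
            energies.append(e_ref)
    return sum(energies) / len(energies)
```

At stationarity the walker population follows the ground-state wavefunction and E_ref fluctuates around the ground-state energy; the counterexamples in the paper are Hamiltonians engineered so that this equilibration takes exponentially long.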
Ma, L X; Wang, F Q; Wang, C A; Wang, C C; Tan, J Y
2015-11-20
Spectral properties of sea foam greatly affect ocean color remote sensing and aerosol optical thickness retrieval from satellite observation. This paper presents a combined Mie theory and Monte Carlo method to investigate visible and near-infrared spectral reflectance and bidirectional reflectance distribution function (BRDF) of sea foam layers. A three-layer model of the sea foam is developed in which each layer is composed of large air bubbles coated with pure water. A pseudo-continuous model and Mie theory for coated spheres is used to determine the effective radiative properties of sea foam. The one-dimensional Cox-Munk surface roughness model is used to calculate the slope density functions of the wind-blown ocean surface. A Monte Carlo method is used to solve the radiative transfer equation. Effects of foam layer thickness, bubble size, wind speed, solar zenith angle, and wavelength on the spectral reflectance and BRDF are investigated. Comparisons between previous theoretical results and experimental data demonstrate the feasibility of our proposed method. Sea foam can significantly increase the spectral reflectance and BRDF of the sea surface. The absorption coefficient of seawater near the surface is not the only parameter that influences the spectral reflectance. Meanwhile, the effects of bubble size, foam layer thickness, and solar zenith angle also cannot be obviously neglected.
Trade Space Analysis: Rotational Analyst Research Project
2015-09-01
…response surface method (RSM) / response surface equations (RSEs) as surrogate models. It uses the RSEs with Monte Carlo simulation to quantitatively
Radon detection in conical diffusion chambers: Monte Carlo calculations and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rickards, J.; Golzarri, J. I.; Espinosa, G., E-mail: espinosa@fisica.unam.mx
2015-07-23
The operation of radon detection diffusion chambers of truncated conical shape was studied using Monte Carlo calculations. The efficiency was studied for alpha particles generated randomly in the volume of the chamber, and progeny generated randomly on the interior surface, which reach track detectors placed in different positions within the chamber. Incidence angular distributions, incidence energy spectra and path length distributions are calculated. Cases studied include different positions of the detector within the chamber, varying atmospheric pressure, and introducing a cutoff incidence angle and energy.
Xiang, Hong F; Song, Jun S; Chin, David W H; Cormack, Robert A; Tishler, Roy B; Makrigiorgos, G Mike; Court, Laurence E; Chin, Lee M
2007-04-01
This work is intended to investigate the application and accuracy of micro-MOSFETs for superficial dose measurement under clinically used MV x-ray beams. The dose response of a micro-MOSFET in the build-up region and on the surface under MV x-ray beams was measured and compared to Monte Carlo calculations. First, percentage depth doses were measured with the micro-MOSFET under 6 and 10 MV beams at normal incidence onto a flat solid-water phantom. Micro-MOSFET data were compared with measurements from a parallel-plate ionization chamber and Monte Carlo dose calculations in the build-up region. Then, percentage depth doses were measured for oblique beams at 0 degrees-80 degrees onto the flat solid-water phantom with the micro-MOSFET placed at depths of 2 cm, 1 cm, and 2 mm below the surface. Measurements were compared to Monte Carlo calculations under these settings. Finally, measurements were performed with the micro-MOSFET embedded in the first 1 mm layer of bolus placed on a flat phantom and on a curved phantom of semi-cylindrical shape. Results were compared to the superficial dose calculated from Monte Carlo for a 2 mm thin layer extending from the surface to a depth of 2 mm. Results were: (1) Comparison of measurements with MC calculations in the build-up region showed that the micro-MOSFET has a water-equivalent thickness (WET) of 0.87 mm for the 6 MV beam and 0.99 mm for the 10 MV beam from the flat side, and a WET of 0.72 mm for the 6 MV beam and 0.76 mm for the 10 MV beam from the epoxy side. (2) For normal beam incidence, percentage depth doses agree within 3%-5% among micro-MOSFET measurements, parallel-plate ionization chamber measurements, and MC calculations. (3) For oblique incidence on the flat phantom with the micro-MOSFET placed at depths of 2 cm, 1 cm, and 2 mm, measurements were consistent with MC calculations within a typical uncertainty of 3%-5%.
(4) For oblique incidence on the flat phantom and a curved-surface phantom, measurements with the micro-MOSFET placed at 1.0 mm agree with the MC calculation within 6%, including micro-MOSFET measurement uncertainties of 2%-3% (1 standard deviation), MOSFET angular dependence of 3.0%-3.5%, and a 1%-2% systematic error due to phantom setup geometry asymmetry. The micro-MOSFET can be used for skin dose measurements in 6 and 10 MV beams with an estimated accuracy of +/- 6%.
Dendritic growth shapes in kinetic Monte Carlo models
NASA Astrophysics Data System (ADS)
Krumwiede, Tim R.; Schulze, Tim P.
2017-02-01
For the most part, the study of dendritic crystal growth has focused on continuum models featuring surface energies that yield six-pointed dendrites. In such models, the growth shape is a function of the surface energy anisotropy, and recent work has shown that considering a broader class of anisotropies yields a correspondingly richer set of growth morphologies. Motivated by this work, we generalize nanoscale models of dendritic growth based on kinetic Monte Carlo simulation. In particular, we examine the effects of extending the truncation radius for atomic interactions in a bond-counting model. This is done by calculating the model's corresponding surface energy and equilibrium shape, as well as by running KMC simulations to obtain nanodendritic growth shapes. Additionally, we compare the effects of extending the interaction radius in bond-counting models to those of extending the number of terms retained in the cubic harmonic expansion of surface energy anisotropy in the context of continuum models.
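In a bond-counting model the detachment (or hop) rate of an atom decays exponentially with its number of occupied neighbor bonds, and a KMC step selects one event in proportion to its rate. A minimal sketch of that selection step, with illustrative bond energy and temperature (not the parameters of the paper, and without the extended-truncation-radius bookkeeping):

```python
import math, random

def bond_rate(n_bonds, bond_energy=0.3, kT=0.1):
    # Arrhenius-type rate: breaking more bonds is exponentially slower
    # (illustrative energy/temperature values, in consistent units)
    return math.exp(-n_bonds * bond_energy / kT)

def kmc_step(neighbor_counts, rng):
    # One KMC event selection: pick a site with probability proportional
    # to its rate, then advance the clock by an exponential waiting time.
    rates = [bond_rate(n) for n in neighbor_counts]
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1            # fallback for floating-point edge
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt
```

Extending the truncation radius changes only how `neighbor_counts` is computed, not the selection machinery, which is one reason bond-counting models generalize so readily.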
NASA Technical Reports Server (NTRS)
Bentz, Daniel N.; Betush, William; Jackson, Kenneth A.
2003-01-01
In this paper we report on two related topics: kinetic Monte Carlo simulations of the steady-state growth of rod eutectics from the melt, and a study of the surface roughness of binary alloys. We have implemented a three-dimensional kinetic Monte Carlo (kMC) simulation with diffusion by pair exchange only in the liquid phase. Entropies of fusion are first chosen to fit the surface roughness of the pure materials, and the bond energies are derived from the equilibrium phase diagram by treating the solid and liquid as regular and ideal solutions, respectively. A simple cubic lattice oriented in the [100] direction is used. Growth of the rods is initiated from columns of pure B material embedded in an A matrix, arranged in a close-packed array with semi-periodic boundary conditions. The simulation cells typically have dimensions of 50 by 87 by 200 unit cells. Steady-state growth is consistent with the Jackson-Hunt model. In the kMC simulations, which use the spin-one Ising model, each phase grows faceted or nonfaceted depending on its entropy of fusion. There have been many studies of the surface roughening transition in single-component systems, but none for binary alloy systems. The location of the surface roughening transition for the phases of a eutectic alloy determines whether the eutectic morphology will be regular or irregular. We have conducted a study of surface roughness of the spin-one Ising model with diffusion using kMC. The surface roughness was found to scale with the melting temperature of the alloy as given by the liquidus line on the equilibrium phase diagram. The density of missing lateral bonds at the surface was used as a measure of surface roughness.
Recent advances and future prospects for Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B
2010-01-01
The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.
Probability techniques for reliability analysis of composite materials
NASA Technical Reports Server (NTRS)
Wetherhold, Robert C.; Ucci, Anthony M.
1994-01-01
Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures, since strengths and stresses may be random variables. This report examines and compares methods used to evaluate the reliability of composite laminae. The two types of methods evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods evaluated are: first-order, second-moment FPI methods; second-order, second-moment FPI methods; simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
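The quantity both method families estimate is the probability that a limit-state function g of the random variables is negative. A toy sketch contrasting crude Monte Carlo with an importance-sampling estimator for a strength-minus-stress criterion (Gaussian strength and stress with invented parameters; the proposal density is shifted toward the limit state and failures are reweighted by the likelihood ratio):

```python
import math, random

def g(strength, stress):
    # Limit-state function: failure when g < 0
    return strength - stress

def crude_mc(n, mu_r=5.0, sd_r=0.5, mu_s=3.0, sd_s=0.5, seed=1):
    # Crude Monte Carlo estimate of the failure probability
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if g(rng.gauss(mu_r, sd_r), rng.gauss(mu_s, sd_s)) < 0.0)
    return fails / n

def importance_mc(n, mu_r=5.0, sd_r=0.5, mu_s=3.0, sd_s=0.5, seed=1):
    # Importance sampling: draw strength from a proposal shifted toward
    # the limit state and reweight failures by the likelihood ratio.
    rng = random.Random(seed)
    mu_q = 0.5 * (mu_r + mu_s)       # proposal mean near the design point

    def normal_pdf(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

    total = 0.0
    for _ in range(n):
        r = rng.gauss(mu_q, sd_r)
        s = rng.gauss(mu_s, sd_s)
        if g(r, s) < 0.0:
            total += normal_pdf(r, mu_r, sd_r) / normal_pdf(r, mu_q, sd_r)
    return total / n
```

For these parameters the exact failure probability is Phi(-(mu_r - mu_s)/sqrt(sd_r^2 + sd_s^2)) = Phi(-2.83), about 0.0023; the importance-sampling estimator reaches a usable estimate with far fewer samples than crude Monte Carlo, which is the efficiency gap the report quantifies.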
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
Backscatter factors and mass energy-absorption coefficient ratios for diagnostic radiology dosimetry
NASA Astrophysics Data System (ADS)
Benmakhlouf, Hamza; Bouchard, Hugo; Fransson, Annette; Andreo, Pedro
2011-11-01
Backscatter factors, B, and mass energy-absorption coefficient ratios, (μen/ρ)w,air, for the determination of the surface dose in diagnostic radiology were calculated using Monte Carlo simulations. The main purpose was to extend the range of available data to qualities used in modern x-ray techniques, particularly in interventional radiology. A comprehensive database for mono-energetic photons between 4 and 150 keV and different field sizes was created for a 15 cm thick water phantom. Backscattered spectra were calculated with the PENELOPE Monte Carlo system, scoring track-length fluence differential in energy with negligible statistical uncertainty; using the Monte Carlo computed spectra, B factors and (μen/ρ)w,air were then calculated numerically for each energy. Weighted averaging procedures were subsequently used to convolve incident clinical spectra with the mono-energetic data. The method was benchmarked against full Monte Carlo calculations of incident clinical spectra, yielding differences within 0.3-0.6%. The technique used enables the calculation of B and (μen/ρ)w,air for any incident spectrum without further time-consuming Monte Carlo simulations. The applicability of the extended dosimetry data to a broader range of clinical qualities than currently available, and its consistency with existing data, were confirmed through detailed comparisons. Mono-energetic and spectra-averaged values were compared with published data, including those in ICRU Report 74 and IAEA TRS-457, with average differences of 0.6%. Results are provided in comprehensive tables appropriate for clinical use. Additional qualities can easily be calculated using a purpose-built GUI in conjunction with software to generate incident photon spectra.
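The convolution step reduces to a weighted average of the mono-energetic values over the incident spectrum. A minimal sketch assuming simple per-bin weighting (the paper's procedure weights by the appropriate dosimetric quantity in each energy bin, which this illustration does not reproduce):

```python
def spectrum_averaged_factor(spectrum, mono_factors):
    # spectrum: {energy_keV: relative weight}; mono_factors: {energy_keV: B}
    # Returns the weight-averaged factor over the incident spectrum.
    num = sum(w * mono_factors[e] for e, w in spectrum.items())
    den = sum(spectrum.values())
    return num / den
```

Because the mono-energetic database is precomputed, evaluating a new clinical quality is just one such weighted sum over its spectrum, with no further transport simulation.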
NASA Astrophysics Data System (ADS)
Li, Shenmin; Guo, Hua
2002-09-01
The scattering dynamics of vibrationally excited NO from a metal surface is investigated theoretically using a dissipative model that includes both the neutral and negative ion states. The Liouville-von Neumann equation is solved numerically by a Monte Carlo wave packet method, in which the wave packet is allowed to "jump" between the neutral and negative ion states in a stochastic fashion. It is shown that the temporary population of the negative ion state results in significant changes in vibrational dynamics, which eventually lead to vibrationally inelastic scattering of NO. Reasonable agreement with experiment is obtained with empirical potential energy surfaces. In particular, the experimentally observed facile multiquantum relaxation of the vibrationally highly excited NO is reproduced. The simulation also provides interesting insight into the scattering dynamics.
Probabilistic Thermal Analysis During Mars Reconnaissance Orbiter Aerobraking
NASA Technical Reports Server (NTRS)
Dec, John A.
2007-01-01
A method for performing a probabilistic thermal analysis during aerobraking has been developed. The analysis is performed on the Mars Reconnaissance Orbiter solar array during aerobraking. The methodology makes use of a response surface model derived from a more complex finite element thermal model of the solar array. The response surface is a quadratic equation which calculates the peak temperature for a given orbit drag pass at a specific location on the solar panel. Five different response surface equations are used, one of which predicts the overall maximum solar panel temperature, and the remaining four predict the temperatures of the solar panel thermal sensors. The variables used to define the response surface can be characterized as either environmental, material property, or modeling variables. Response surface variables are statistically varied in a Monte Carlo simulation. The Monte Carlo simulation produces mean temperatures and 3 sigma bounds as well as the probability of exceeding the designated flight allowable temperature for a given orbit. Response surface temperature predictions are compared with the Mars Reconnaissance Orbiter flight temperature data.
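The workflow can be sketched as: draw each response-surface variable from its distribution, evaluate the cheap quadratic surrogate in place of the finite element model, and tally the mean, the 3-sigma bound, and the exceedance probability. The coefficients, variable distributions, and temperature limit below are invented for illustration and are not the MRO fit:

```python
import random, statistics

def peak_temperature(heat_rate, absorptivity, conductivity):
    # Hypothetical quadratic response surface; coefficients are invented
    # for illustration, not the actual MRO solar-array fit.
    return (50.0 + 0.9 * heat_rate + 40.0 * absorptivity
            - 10.0 * conductivity + 0.0005 * heat_rate ** 2)

def probabilistic_peak(n=20000, limit=175.0, seed=2):
    # Monte Carlo over the response-surface variables: returns the mean
    # peak temperature, the mean + 3 sigma bound, and the probability of
    # exceeding the flight-allowable limit.
    rng = random.Random(seed)
    temps = [peak_temperature(rng.gauss(100.0, 10.0),   # aeroheating proxy
                              rng.gauss(0.9, 0.05),     # surface absorptivity
                              rng.gauss(1.5, 0.1))      # panel conductivity
             for _ in range(n)]
    mean = statistics.fmean(temps)
    sd = statistics.stdev(temps)
    p_exceed = sum(t > limit for t in temps) / n
    return mean, mean + 3.0 * sd, p_exceed
```

The response surface makes each Monte Carlo sample nearly free, which is what allows tens of thousands of drag-pass evaluations per orbit instead of repeated finite element runs.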
Shi, Wei; Wei, Si; Hu, Xin-xin; Hu, Guan-jiu; Chen, Cu-lan; Wang, Xin-ru; Giesy, John P.; Yu, Hong-xia
2013-01-01
Some synthetic chemicals, which have been shown to disrupt thyroid hormone (TH) function, have been detected in surface waters, and people can be exposed to them through drinking water. Here, the presence of thyroid-active chemicals and their toxic potential in drinking water sources in the Yangtze River Delta were investigated by instrumental analysis combined with a cell-based reporter gene assay. A novel approach using Monte Carlo simulation was developed to evaluate the potential risks of the measured concentrations of TH agonists and antagonists and to determine the major contributors to the observed thyroid receptor (TR) antagonist potency. None of the extracts exhibited TR agonist potency, while 12 of 14 water samples exhibited TR antagonist potency. The most probable observed antagonist equivalents ranged from 1.4 to 5.6 µg di-n-butyl phthalate (DNBP)/L, which poses a potential risk in water sources. Based on Monte Carlo simulation-based mass balance analysis, DNBP accounted for 64.4% of the total observed antagonist toxic units in water sources, while diisobutyl phthalate (DIBP), di-n-octyl phthalate (DNOP) and di-2-ethylhexyl phthalate (DEHP) also contributed. The most probable observed equivalents and most probable relative potencies (REP) derived from Monte Carlo simulation are useful for potency comparison and screening of responsible chemicals. PMID:24204563
Sechopoulos, Ioannis; Ali, Elsayed S M; Badal, Andreu; Badano, Aldo; Boone, John M; Kyprianou, Iacovos S; Mainegra-Hing, Ernesto; McMillan, Kyle L; McNitt-Gray, Michael F; Rogers, D W O; Samei, Ehsan; Turner, Adam C
2015-10-01
The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to its flexibility and ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but, typically, such validation involves either comparison of simulation results to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated due to nuances of experimental conditions and uncertainty, while the latter is challenging due to typical graphical presentation and lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations in general involves a steep learning curve. It is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases. 
This work provides an investigator the necessary information to benchmark his/her Monte Carlo simulation software against the reference cases included here before performing his/her own novel research. In addition, an investigator entering the field of Monte Carlo simulations can use these descriptions and results as a self-teaching tool to ensure that he/she is able to perform a specific simulation correctly. Finally, educators can assign these cases as learning projects as part of course objectives or training programs.
2013-07-01
Data were derived from calculations using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle). MCNP is a general-purpose code designed to simulate neutron and photon transport.
Zarzycki, Piotr; Rosso, Kevin M
2009-06-16
Replica kinetic Monte Carlo simulations were used to study the characteristic time scales of potentiometric titration of metal oxides and (oxy)hydroxides. The effect of surface heterogeneity and surface transformation on the titration kinetics was also examined. Two characteristic relaxation times are often observed experimentally, with the trailing slower part attributed to surface nonuniformity, porosity, polymerization, amorphization, and other dynamic surface processes induced by unbalanced surface charge. However, our simulations show that these two characteristic relaxation times are intrinsic to the proton-binding reaction for energetically homogeneous surfaces, and therefore surface heterogeneity or transformation does not necessarily need to be invoked. All such second-order surface processes are, however, found to intensify the separation and distinction of the two kinetic regimes. The effect of surface energetic-topographic nonuniformity, as well as dynamic surface transformation and interface roughening/smoothing, was described in a statistical fashion. Furthermore, our simulations show that a shift in the point of zero charge is expected from increased titration speed, and the pH dependence of the titration measurement error is in excellent agreement with experimental studies.
NASA Astrophysics Data System (ADS)
Bubnis, Gregory J.
Since their discovery 25 years ago, carbon fullerenes have been widely studied for their unique physicochemical properties and for applications including organic electronics and photovoltaics. For these applications it is highly desirable for crystalline fullerene thin films to spontaneously self-assemble on surfaces. Accordingly, many studies have functionalized fullerenes with the aim of tailoring their intermolecular interactions and controlling interactions with the solid substrate. The success of these rational design approaches hinges on the subtle interplay of intermolecular forces and molecule-substrate interactions. Molecular modeling is well-suited to studying these interactions by directly simulating self-assembly. In this work, we consider three different fullerene functionalization approaches and for each approach we carry out Monte Carlo simulations of the self-assembly process. In all cases, we use a "coarse-grained" molecular representation that preserves the dominant physical interactions between molecules and maximizes computational efficiency. The first approach we consider is the traditional gold-thiolate SAM (self-assembled monolayer) strategy which tethers molecules to a gold substrate via covalent sulfur-gold bonds. For this we study an asymmetric fullerene thiolate bridged by a phenyl group. Clusters of 40 molecules are simulated on the Au(111) substrate at different temperatures and surface coverage densities. Fullerenes and S atoms are found to compete for Au(111) surface sites, and this competition prevents self-assembly of highly ordered monolayers. Next, we investigate self-assembled monolayers formed by fullerenes with hydrogen-bonding carboxylic acid substituents. We consider five molecules with different dimensions and symmetries. Monte Carlo cooling simulations are used to find the most stable solid structures of clusters adsorbed to Au(111). 
The results show cases where fullerene-Au(111) attraction, fullerene close-packing, and hydrogen-bonding interactions can cooperate to guide self-assembly or compete to hinder it. Finally, we consider three bis-fullerene molecules, each with a different "bridging group" covalently joining two fullerenes. To effectively study the competing "standing-up" and "lying-down" morphologies, we use Monte Carlo simulations in conjunction with replica exchange and force field biasing methods. For clusters adsorbed to smooth model surfaces, we determine free energy landscapes and demonstrate their utility for rationalizing and predicting self-assembly.
Su, Peiran; Eri, Qitai; Wang, Qiang
2014-04-10
Optical roughness was introduced into the bidirectional reflectance distribution function (BRDF) model to simulate the reflectance characteristics of thermal radiation. The optical-roughness BRDF model stems from the influence of surface roughness and wavelength on the ray reflectance calculation. This model was adopted to simulate real metal emissivity. The reverse Monte Carlo method was used to display the distribution of reflected rays. The numerical simulations showed that the optical-roughness BRDF model can capture the wavelength dependence of emissivity and simulate the variation of real metal emissivity with incidence angle.
Fixed-node quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Anderson, James B.
Quantum Monte Carlo methods cannot at present provide exact solutions of the Schrödinger equation for systems with more than a few electrons. But, quantum Monte Carlo calculations can provide very low energy, highly accurate solutions for many systems ranging up to several hundred electrons. These systems include atoms such as Be and Fe, molecules such as H2O, CH4, and HF, and condensed materials such as solid N2 and solid silicon. The quantum Monte Carlo predictions of their energies and structures may not be `exact', but they are the best available. Most of the Monte Carlo calculations for these systems have been carried out using approximately correct fixed nodal hypersurfaces and they have come to be known as `fixed-node quantum Monte Carlo' calculations. In this paper we review these `fixed node' calculations and the accuracies they yield.
Vectorized Monte Carlo methods for reactor lattice analysis
NASA Technical Reports Server (NTRS)
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
Prompt Radiation Protection Factors
2018-02-01
Analysis of prompt radiation was performed using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle), including evaluation of protection factors (the ratio of dose in the open to dose in a shielded location). Concerns over the possible detonation of a nuclear device have placed renewed emphasis on evaluating the consequences of such an event.
Ng, Yee-Hong; Bettens, Ryan P A
2016-03-03
Using the method of modified Shepard's interpolation to construct potential energy surfaces of the H2O, O3, and HCOOH molecules, we compute vibrationally averaged isotropic nuclear shielding constants ⟨σ⟩ of the three molecules via quantum diffusion Monte Carlo (QDMC). The QDMC results are compared to those of second-order perturbation theory (PT), to see if second-order PT is adequate for obtaining accurate values of nuclear shielding constants of molecules with large-amplitude motions. ⟨σ⟩ computed by the two approaches differ for the hydrogens and carbonyl oxygen of HCOOH, suggesting that for certain molecules such as HCOOH, where large displacements away from equilibrium occur (internal OH rotation), ⟨σ⟩ of experimental quality may only be obtainable with more sophisticated and accurate methods, such as quantum diffusion Monte Carlo. The approach of modified Shepard's interpolation is also extended to construct shielding-constant σ surfaces of the three molecules. By using a σ surface with the equilibrium geometry as a single data point to compute isotropic nuclear shielding constants for each descendant in the QDMC ensemble representing the ground-state wave function, we reproduce the results obtained through ab initio computed σ to within statistical noise. Development of such an approach could thereby alleviate the need for future costly ab initio σ calculations.
Monte Carlo modeling of spatial coherence: free-space diffraction
Fischer, David G.; Prahl, Scott A.; Duncan, Donald D.
2008-01-01
We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions. PMID:18830335
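The copula step can be sketched in a few lines: draw a pair of correlated standard normals and map each through the normal CDF, producing uniform variates whose joint dependence is a Gaussian copula. Synthesizing a full spatial coherence function, as in the paper, extends this to a vector of correlated samples with a prescribed covariance; the correlation value below is illustrative.

```python
import math, random

def std_normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_copula_pair(rho, rng):
    # Two uniform(0,1) variates coupled by a Gaussian copula with
    # correlation parameter rho
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    return std_normal_cdf(z1), std_normal_cdf(z2)
```

The resulting uniforms can then be pushed through any inverse marginal CDF, which is what lets the copula impose an arbitrary coherence structure independently of the field's amplitude statistics.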
Study of multi-dimensional radiative energy transfer in molecular gases
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, S. N.
1993-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chason, E.; Chan, W. L.; Bharathi, M. S.
Low-energy ion bombardment produces spontaneous periodic structures (sputter ripples) on many surfaces. Continuum theories describe the pattern formation in terms of ion-surface interactions and surface relaxation kinetics, but many features of these models (such as defect concentration) are unknown or difficult to determine. In this work, we present results of kinetic Monte Carlo simulations that model surface evolution using discrete atomistic versions of the physical processes included in the continuum theories. From simulations over a range of parameters, we obtain the dependence of the ripple growth rate, wavelength, and velocity on the ion flux and temperature. The results are discussed in terms of the thermally dependent concentration and diffusivity of ion-induced surface defects. We find that in the early stages of ripple formation the simulation results are surprisingly well described by the predictions of the continuum theory, in spite of simplifying approximations used in the continuum model.
Patra, Chandra N
2014-11-14
A systematic investigation of the spherical electric double layers with the electrolytes having size as well as charge asymmetry is carried out using density functional theory and Monte Carlo simulations. The system is considered within the primitive model, where the macroion is a structureless hard spherical colloid, the small ions as charged hard spheres of different size, and the solvent is represented as a dielectric continuum. The present theory approximates the hard sphere part of the one particle correlation function using a weighted density approach whereas a perturbation expansion around the uniform fluid is applied to evaluate the ionic contribution. The theory is in quantitative agreement with Monte Carlo simulation for the density and the mean electrostatic potential profiles over a wide range of electrolyte concentrations, surface charge densities, valence of small ions, and macroion sizes. The theory provides distinctive evidence of charge and size correlations within the electrode-electrolyte interface in spherical geometry.
NASA Astrophysics Data System (ADS)
Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan
2016-04-01
We investigated how tumor-targeted nanoparticles influence heat generation. We suppose that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (cosine of the scattering angle), the heat generated in the tumor area reaches a critical state that can burn the targeted tumor. The amount of heat generated by inserting smart agents, owing to surface plasmon resonance, is remarkably high. The light-matter interactions and the trajectories of incident photons in targeted tissues are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical technique with which photon trajectories through the simulation region can be accurately tracked.
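The photon-transport half of such a simulation is the classic weighted random walk: exponential free paths, a fraction of the weight absorbed at each interaction, and rescattering into a new direction. A bare-bones sketch for a homogeneous semi-infinite tissue slab, with isotropic scattering and invented optical coefficients (a tumor region, Mie-derived phase functions, and nanoparticle-enhanced absorption would be layered on top of this):

```python
import math, random

def absorbed_depth_profile(n_photons=2000, mu_a=1.0, mu_s=9.0,
                           nbins=20, dz=0.05, seed=3):
    # Photon-weight Monte Carlo in a homogeneous semi-infinite medium:
    # exponential free paths, a fraction (1 - albedo) of the weight
    # absorbed at each interaction, isotropic rescattering. Coefficients
    # are illustrative (units: inverse length).
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    heat = [0.0] * nbins                      # absorbed weight per depth bin
    for _ in range(n_photons):
        x = y = z = 0.0
        ux, uy, uz = 0.0, 0.0, 1.0            # launched normally into tissue
        w = 1.0
        while w > 1e-4:
            step = -math.log(1.0 - rng.random()) / mu_t
            x += ux * step; y += uy * step; z += uz * step
            if z < 0.0:                       # escaped back through surface
                break
            b = min(int(z / dz), nbins - 1)
            heat[b] += w * (1.0 - albedo)     # deposit the absorbed fraction
            w *= albedo
            cos_t = rng.uniform(-1.0, 1.0)    # isotropic new direction
            phi = rng.uniform(0.0, 2.0 * math.pi)
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            ux = sin_t * math.cos(phi)
            uy = sin_t * math.sin(phi)
            uz = cos_t
    return [h / n_photons for h in heat]
```

Locally raising mu_a in a "tumor" voxel region is then enough to reproduce, qualitatively, the localized heat deposition the study attributes to targeted nanoparticles.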
The Multiple-Minima Problem in Protein Folding
NASA Astrophysics Data System (ADS)
Scheraga, Harold A.
1991-10-01
The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) the buildup procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern recognition, and (i) the diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are devoted to scaling up these procedures from small polypeptides to proteins, to compute the three-dimensional structure of a protein from its amino acid sequence.
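Item (c), Monte Carlo-plus-energy minimization, alternates random jumps with local minimization so the walk steps between minima instead of wandering inside one basin. A sketch on a one-dimensional rugged surface, with a toy potential standing in for the polypeptide energy function and illustrative step size and temperature:

```python
import math, random

def energy(x):
    # Rugged 1-D test surface: global minimum at x = 0, local minima
    # near every integer (a stand-in for a conformational energy surface)
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def local_minimize(x, step=1e-3, iters=2000):
    # Crude gradient descent with a central-difference gradient
    for _ in range(iters):
        grad = (energy(x + 1e-6) - energy(x - 1e-6)) / 2e-6
        x -= step * grad
    return x

def basin_hopping(n_hops=60, kT=2.0, hop=0.9, seed=4):
    # Monte Carlo-plus-minimization: propose a random jump, relax it to
    # the nearest local minimum, then Metropolis-accept on minima energies.
    rng = random.Random(seed)
    x = local_minimize(rng.uniform(-4.0, 4.0))
    best_x, best_e = x, energy(x)
    for _ in range(n_hops):
        trial = local_minimize(x + rng.gauss(0.0, hop))
        de = energy(trial) - energy(x)
        if de < 0.0 or rng.random() < math.exp(-de / kT):
            x = trial
        if energy(x) < best_e:
            best_x, best_e = x, energy(x)
    return best_x, best_e
```

Minimizing every trial collapses the surface to a discrete set of basin energies, which is why the Metropolis walk escapes local minima far more easily than it would on the raw surface.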
NASA Astrophysics Data System (ADS)
Pérez-Calatayud, J.; Lliso, F.; Ballester, F.; Serrano, M. A.; Lluch, J. L.; Limami, Y.; Puchades, V.; Casal, E.
2001-07-01
The CSM3 137Cs type stainless-steel encapsulated source is widely used in manually afterloaded low dose rate brachytherapy. A special asymmetric source, CSM3-a, has been designed by CIS Bio International (France) by substituting the eyelet-side seed of the CSM3 source with an inactive material. This modification allows a uniform dose level over the upper vaginal surface when this `linear' source is inserted at the top of dome vaginal applicators. In this study the Monte Carlo GEANT3 simulation code, incorporating the source geometry in detail, was used to investigate the dosimetric characteristics of this special CSM3-a 137Cs brachytherapy source. The absolute dose rate distribution in water around this source was calculated and is presented in the form of an along-away table. Comparisons of Sievert integral-type calculations with Monte Carlo results are discussed.
NASA Astrophysics Data System (ADS)
Crevillén-García, D.; Power, H.
2017-08-01
In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
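The difference between plain Monte Carlo and quasi-Monte Carlo sampling can be illustrated on a one-dimensional quadrature toy problem. The integrand and the base-2 van der Corput sequence below are illustrative choices, not the paper's travel-time model:

```python
import random

def van_der_corput(n, base=2):
    # Radical-inverse (van der Corput) low-discrepancy sequence in [0, 1).
    seq = []
    for i in range(1, n + 1):
        q, denom, k = 0.0, 1.0, i
        while k:
            denom *= base
            k, r = divmod(k, base)
            q += r / denom
        seq.append(q)
    return seq

def f(x):
    return x * x  # toy integrand; exact integral over [0, 1] is 1/3

n = 4096
rng = random.Random(0)
mc = sum(f(rng.random()) for _ in range(n)) / n       # pseudorandom stream
qmc = sum(f(x) for x in van_der_corput(n)) / n        # low-discrepancy stream
print(abs(mc - 1/3), abs(qmc - 1/3))
```

For a smooth integrand like this, the low-discrepancy stream typically converges at close to O(1/N), versus the O(1/√N) of pseudorandom sampling; the multilevel variants in the paper add a hierarchy of discretizations on top of either stream.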
Sarsa, Antonio; Le Sech, Claude
2011-09-13
The variational Monte Carlo method is a powerful tool to determine approximate wave functions of atoms, molecules, and solids up to relatively large systems. In the present work, we extend the variational Monte Carlo approach to study confined systems. Important properties of the atoms, such as the spatial distribution of the electronic charge, the energy levels, or the filling of electronic shells, are modified under confinement. An expression of the energy very similar to the estimator used for free systems is derived. This opens the possibility to study confined systems with little change relative to the solution of the corresponding free systems. This is illustrated by the study of the helium atom in its ground state ¹S and the first ³S excited state confined by spherical, cylindrical, and plane impenetrable surfaces. The average interelectronic distances are also calculated. They decrease in general when the confinement is stronger; however, they present a minimum for excited states under confinement by open surfaces (cylindrical, planar) around the radii values corresponding to ionization. The ground ²S and the first ²P and ²D excited states of the lithium atom are calculated under spherical constraints for different confinement radii. A crossing between the ²S and ²P states is observed around rc = 3 atomic units, illustrating the modification of the atomic energy levels under confinement. Finally the carbon atom is studied in spherical symmetry using both variational and diffusion Monte Carlo methods. It is shown that the hybridized state sp³ becomes lower in energy than the ground state ³P due to a modification and a mixing of the atomic orbitals s and p under strong confinement. This result suggests a model, at least of pedagogical interest, to interpret the basic properties of the carbon atom in chemistry.
Monte Carlo simulation of turnover processes in the lunar regolith
NASA Technical Reports Server (NTRS)
Arnold, J. R.
1975-01-01
A Monte Carlo model for the gardening of the lunar surface by meteoritic impact is described, and some representative results are given. The model accounts with reasonable success for a wide variety of properties of the regolith. The smoothness of the lunar surface on a scale of centimeters to meters, which was not reproduced in an earlier version of the model, is accounted for by the preferential downward movement of low-energy secondary particles. The time scale for filling lunar grooves and craters by this process is also derived. The experimental bombardment ages (about 4×10⁸ yr for spallogenic rare gases, about 10⁹ yr for neutron-capture Gd and Sm isotopes) are not reproduced by the model. The explanation is not obvious.
Quantum Gibbs ensemble Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fantoni, Riccardo, E-mail: rfantoni@ts.infn.it; Moroni, Saverio, E-mail: moroni@democritos.it
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I-III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output.
The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
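The "just simulate the particle behavior" idea, and the kind of simple routine students are asked to write, can be sketched in a few lines: a purely absorbing slab with the free path sampled by inversion of the exponential attenuation law. The geometry and cross-section are invented for illustration and are unrelated to MCNP itself:

```python
import math
import random

def transmission(sigma_t, thickness, n_particles=100_000, seed=42):
    # Minimal analogue Monte Carlo: mono-directional particles stream into a
    # purely absorbing slab; the distance to first collision is sampled by
    # inverting the CDF of exp(-sigma_t * s).
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        s = -math.log(1.0 - rng.random()) / sigma_t  # free-path sample in (0, inf)
        if s > thickness:
            transmitted += 1
    return transmitted / n_particles

est = transmission(sigma_t=1.0, thickness=2.0)
print(est, math.exp(-2.0))  # estimate vs the analytic answer exp(-sigma_t * x)
```

The statistical error shrinks as 1/√N, which is exactly the "tallies and statistics" bookkeeping the lecture series then builds on.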
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudhyadhom, A; McGuinness, C; Descovich, M
Purpose: To develop a methodology for validation of a Monte-Carlo dose calculation model for robotic small-field SRS/SBRT deliveries. Methods: In a robotic treatment planning system, a Monte-Carlo model was iteratively optimized to match beam data. A two-part analysis was developed to verify this model. 1) The Monte-Carlo model was validated in a simulated water phantom versus a Ray-Tracing calculation on a collimator-by-collimator single-beam basis. 2) The Monte-Carlo model was validated to be accurate in the most challenging situation, lung, by acquiring in-phantom measurements. A plan was created and delivered in a CIRS lung phantom with a film insert. Separately, plans were delivered in an in-house created lung phantom with a PinPoint chamber insert within a lung-simulating material. For medium to large collimator sizes, a single beam was delivered to the phantom. For small collimators (10, 12.5, and 15 mm), a robotically delivered plan was created to generate a uniform dose field of irradiation over a 2×2 cm² area. Results: Dose differences in simulated water between Ray-Tracing and Monte-Carlo were all within 1% at dmax and deeper. Maximum dose differences occurred prior to dmax but were all within 3%. Film measurements in a lung phantom show high correspondence of over 95% gamma at the 2%/2mm level for Monte-Carlo. Ion chamber measurements for collimator sizes of 12.5 mm and above were within 3% of Monte-Carlo calculated values. Uniform irradiation involving the 10 mm collimator resulted in a dose difference of ∼8% for both Monte-Carlo and Ray-Tracing, indicating that there may be limitations with the dose calculation. Conclusion: We have developed a methodology to validate a Monte-Carlo model by verifying that it matches in water and, separately, that it corresponds well in lung-simulating materials. The Monte-Carlo model and algorithm tested may have more limited accuracy for 10 mm fields and smaller.
Flexible Charged Macromolecules on Mixed Fluid Lipid Membranes: Theory and Monte Carlo Simulations
Tzlil, Shelly; Ben-Shaul, Avinoam
2005-01-01
Fluid membranes containing charged lipids enhance binding of oppositely charged proteins by mobilizing these lipids into the interaction zone, overcoming the concomitant entropic losses due to lipid segregation and lower conformational freedom upon macromolecule adsorption. We study this energetic-entropic interplay using Monte Carlo simulations and theory. Our model system consists of a flexible cationic polyelectrolyte, interacting, via Debye-Hückel and short-ranged repulsive potentials, with membranes containing neutral lipids, 1% tetravalent, and 10% (or 1%) monovalent anionic lipids. Adsorption onto a fluid membrane is invariably stronger than to an equally charged frozen or uniform membrane. Although monovalent lipids may suffice for binding rigid macromolecules, polyvalent counter-lipids (e.g., phosphatidylinositol 4,5 bisphosphate), whose entropy loss upon localization is negligible, are crucial for binding flexible macromolecules, which lose conformational entropy upon adsorption. Extending Rosenbluth's Monte Carlo scheme we directly simulate polymer adsorption on fluid membranes. Yet, we argue that similar information could be derived from a biased superposition of quenched membrane simulations. Using a simple cell model we account for surface concentration effects, and show that the average adsorption probabilities on annealed and quenched membranes coincide at vanishing surface concentrations. We discuss the relevance of our model to the electrostatic-switch mechanism of, e.g., the myristoylated alanine-rich C kinase substrate protein. PMID:16126828
Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.
Beentjes, Casper H L; Baker, Ruth E
2018-05-25
Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from typical slow O(1/√N) convergence rates as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
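A minimal sketch of τ-leaping for the simplest possible network, a single decay reaction A → ∅, shows the structure the paper builds on: in each leap the number of reaction firings is drawn from a Poisson distribution with mean equal to the propensity times the leap size. The rate constant, leap size, and Knuth-style Poisson sampler below are illustrative assumptions, not the paper's test problem:

```python
import math
import random

def poisson(rng, lam):
    # Knuth's method; adequate for the small propensity*tau values used here.
    if lam <= 0.0:
        return 0
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def tau_leap_decay(x0=1000, c=0.5, tau=0.01, t_end=2.0, seed=7):
    # Tau-leaping for the decay reaction A -> 0 with rate constant c:
    # each leap fires Poisson(c * X * tau) reaction events at once, instead of
    # simulating every individual event as in an exact SSA run.
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end:
        fired = poisson(rng, c * x * tau)
        x = max(x - fired, 0)
        t += tau
    return x

final = tau_leap_decay()
print(final)  # mean-field prediction is 1000 * exp(-0.5 * 2.0), i.e. about 368
```

In the (randomised) quasi-Monte Carlo variant the paper studies, the uniform stream feeding the Poisson draws is replaced by a scrambled low-discrepancy stream; the discreteness of those draws is precisely what makes the resulting convergence rates non-trivial.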
Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...
Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.
Serebrinsky, Santiago A
2011-03-01
We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
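The rejection-free bookkeeping referred to above can be sketched as follows: each step selects one process with probability proportional to its rate and advances physical time by an exponentially distributed increment with mean 1/R_total. The two process rates below are arbitrary illustrative numbers, not taken from the paper:

```python
import math
import random

def kmc(rates, n_steps, seed=0):
    # Rejection-free (BKL/n-fold-way style) kinetic Monte Carlo: pick process i
    # with probability rate_i / R_total, then advance the physical clock by
    # dt = -ln(u) / R_total rather than counting bare Monte Carlo steps.
    rng = random.Random(seed)
    R = sum(rates)
    cumulative, acc = [], 0.0
    for r in rates:
        acc += r
        cumulative.append(acc)
    t, counts = 0.0, [0] * len(rates)
    for _ in range(n_steps):
        u = rng.random() * R
        i = next(j for j, c in enumerate(cumulative) if u < c)
        counts[i] += 1
        t += -math.log(1.0 - rng.random()) / R   # physical time increment
    return t, counts

t, counts = kmc([1.0, 3.0], n_steps=20000)
print(t, counts)  # expected elapsed time is n_steps / R_total = 5000
```

The point of the abstract is that a time scale of this kind can be established rigorously for rejection algorithms as well, so Monte Carlo steps never need to serve as a surrogate time unit.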
NASA Technical Reports Server (NTRS)
Mahan, J. R.; Eskin, L. D.
1981-01-01
A viable alternative to the net exchange method of radiative analysis, equally applicable to diffuse and diffuse-specular enclosures, is presented. It is particularly advantageous, compared with the net exchange method, in the case of a transient thermal analysis involving conduction and storage of energy as well as radiative exchange. A new quantity, called the distribution factor, is defined, which replaces the angle factor and the configuration factor. Once obtained, the array of distribution factors for an ensemble of surface elements which define an enclosure permits the instantaneous net radiative heat fluxes to all of the surfaces to be computed directly in terms of the known surface temperatures at that instant. The formulation of the thermal model is described, as is the determination of distribution factors by application of a Monte Carlo analysis. The results show that an unsatisfactory approximation for the distribution factors is obtained when fewer than 10,000 packets are emitted, but that 10,000 packets is sufficient.
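As an illustration of how such factors can be estimated by packet tracing, here is a sketch for the simplest case of a black-walled unit cube; with black (perfectly absorbing) surfaces the distribution factor reduces to the diffuse view factor. The geometry and packet count are invented for illustration, and the sketch omits the reflections a real distribution-factor calculation would also follow:

```python
import math
import random

def floor_to_ceiling_factor(n_packets=50_000, seed=3):
    # Emit diffuse (cosine-weighted) radiation packets from the floor (z = 0)
    # of a black-walled unit cube and tally the face each packet strikes first.
    rng = random.Random(seed)
    top_hits = 0
    for _ in range(n_packets):
        x, y = rng.random(), rng.random()        # emission point on the floor
        cos_t = math.sqrt(1.0 - rng.random())    # cosine-weighted polar angle
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = 2.0 * math.pi * rng.random()
        dx, dy, dz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
        s_top = 1.0 / dz                         # path length to the ceiling
        s_side = float("inf")                    # nearest side-wall crossing
        for p, d in ((x, dx), (y, dy)):
            if d > 0.0:
                s_side = min(s_side, (1.0 - p) / d)
            elif d < 0.0:
                s_side = min(s_side, -p / d)
        if s_top <= s_side:
            top_hits += 1
    return top_hits / n_packets

f_top = floor_to_ceiling_factor()
print(f_top)  # analytic view factor between unit squares 1 apart is ~0.1998
```

With 50,000 packets the estimate sits well within a percent of the analytic value, consistent with the abstract's observation that on the order of 10,000 packets are needed for a satisfactory approximation.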
Surface vacancies concentration of CeO2(1 1 1) using kinetic Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Mattiello, S.; Kolling, S.; Heiliger, C.
2016-01-01
Kinetic Monte Carlo (kMC) simulations are useful tools for the investigation of the dynamics of surface properties. Within this method we investigate the oxygen vacancy concentration of CeO2(1 1 1) under ultra-high-vacuum (UHV) conditions. To obtain the input for the simulations, i.e. the energy barriers for the microscopic processes, from first-principles calculations, we use density functional theory (DFT) results from the literature. We investigate the possibility of ad- and desorption of oxygen on ceria as well as the diffusion of oxygen vacancies to and from the subsurface. In particular, we focus on the vacancy surface concentration as well as on the ratio of the number of subsurface vacancies to the number of vacancies at the surface. The comparison of our dynamically obtained results to the experimental findings leads to several issues. In conclusion, we can claim a substantial incompatibility of the experimental results and the dynamical calculation using DFT inputs.
A Monte Carlo Sensitivity Analysis of CF2 and CF Radical Densities in a c-C4F8 Plasma
NASA Technical Reports Server (NTRS)
Bose, Deepak; Rauf, Shahid; Hash, D. B.; Govindan, T. R.; Meyyappan, M.
2004-01-01
A Monte Carlo sensitivity analysis is used to build a plasma chemistry model for octafluorocyclobutane (c-C4F8), which is commonly used in dielectric etch. Experimental data are used both qualitatively and quantitatively to analyze the gas-phase and gas-surface reactions for neutral radical chemistry. The sensitivity data of the resulting model identify a few critical gas-phase and surface-aided reactions that account for most of the uncertainty in the CF2 and CF radical densities. Electron impact dissociation of small radicals (CF2 and CF) and their surface recombination reactions are found to be the rate-limiting steps in the neutral radical chemistry. The relative rates for these electron impact dissociation and surface recombination reactions are also suggested. The resulting mechanism is able to explain the measurements of CF2 and CF densities available in the literature and also their hollow spatial density profiles.
Full Monte-Carlo description of the Moscow State University Extensive Air Shower experiment
NASA Astrophysics Data System (ADS)
Fomin, Yu. A.; Kalmykov, N. N.; Karpikov, I. S.; Kulikov, G. V.; Kuznetsov, M. Yu.; Rubtsov, G. I.; Sulakov, V. P.; Troitsky, S. V.
2016-08-01
The Moscow State University Extensive Air Shower (EAS-MSU) array studied high-energy cosmic rays with primary energies ~(1-500) PeV in the Northern hemisphere. The EAS-MSU data are being revisited following recently found indications of an excess of muonless showers, which may be interpreted as the first observation of cosmic gamma rays at ~100 PeV. In this paper, we present a complete Monte-Carlo model of the surface detector which results in a good agreement between data and simulations. The model allows us to study the performance of the detector and will be used to obtain physical results in further studies.
Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
NASA Astrophysics Data System (ADS)
Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.
2008-02-01
Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head & neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
Monte Carlo Transport for Electron Thermal Transport
NASA Astrophysics Data System (ADS)
Chenhall, Jeffrey; Cao, Duc; Moses, Gregory
2015-11-01
The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.
Advanced Computational Methods for Monte Carlo Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.
NASA Astrophysics Data System (ADS)
Zoller, Christian; Hohmann, Ansgar; Ertl, Thomas; Kienle, Alwin
2017-07-01
The Monte Carlo method is often referred to as the gold standard for calculating light propagation in turbid media [1]. Especially for complex-shaped geometries where no analytical solutions are available, the Monte Carlo method becomes very important [1, 2]. In this work a Monte Carlo software package is presented to simulate light propagation in complex-shaped geometries. To improve the simulation time, the code is based on OpenCL, so that graphics cards as well as other computing devices can be used. Within the software an illumination concept is presented that makes it easy to realize all kinds of light sources, such as spatial frequency domain (SFD) illumination, optical fibers or Gaussian beam profiles. Moreover, different objects that are not connected to each other can be considered simultaneously, without any additional preprocessing. This Monte Carlo software can be used for many applications. In this work the transmission spectrum of a tooth and the color reconstruction of a virtual object are shown, using results from the Monte Carlo software.
Patti, Alessandro; Cuetos, Alejandro
2012-07-01
We report on the diffusion of purely repulsive and freely rotating colloidal rods in the isotropic, nematic, and smectic liquid crystal phases to probe the agreement between Brownian and Monte Carlo dynamics under the most general conditions. By properly rescaling the Monte Carlo time step, being related to any elementary move via the corresponding self-diffusion coefficient, with the acceptance rate of simultaneous trial displacements and rotations, we demonstrate the existence of a unique Monte Carlo time scale that allows for a direct comparison between Monte Carlo and Brownian dynamics simulations. To estimate the validity of our theoretical approach, we compare the mean square displacement of rods, their orientational autocorrelation function, and the self-intermediate scattering function, as obtained from Brownian dynamics and Monte Carlo simulations. The agreement between the results of these two approaches, even under the condition of heterogeneous dynamics generally observed in liquid crystalline phases, is excellent.
NASA Astrophysics Data System (ADS)
Selb, Juliette; Ogden, Tyler M.; Dubb, Jay; Fang, Qianqian; Boas, David A.
2013-03-01
Time-domain near-infrared spectroscopy (TD-NIRS) offers the ability to measure the absolute baseline optical properties of a tissue. Specifically, for brain imaging, the robust assessment of cerebral blood volume and oxygenation based on measurement of cerebral hemoglobin concentrations is essential for reliable cross-sectional and longitudinal studies. In adult heads, these baseline measurements are complicated by the presence of thick extra-cerebral tissue (scalp, skull, CSF). A simple semi-infinite homogeneous model of the head has proven to have limited use because of the large errors it introduces in the recovered brain absorption. Analytical solutions for layered media have shown improved performance on Monte Carlo-simulated data and layered phantom experiments, but their validity on real adult head data has never been demonstrated. With the advance of fast Monte Carlo approaches based on GPU computation, numerical methods to solve the radiative transfer equation become viable alternatives to analytical solutions of the diffusion equation. Monte Carlo approaches provide the additional advantage of being adaptable to any geometry, in particular more realistic head models. The goals of the present study were twofold: (1) to implement a fast and flexible Monte Carlo-based fitting routine to retrieve the brain optical properties; (2) to characterize the performance of this fitting method on realistic adult head data. We generated time-resolved data at various locations over the head, and fitted them with different models of light propagation: the homogeneous analytical model, and Monte Carlo simulations for three head models: a two-layer slab, the true subject's anatomy, and that of a generic atlas head. We found that the homogeneous model introduced a median 20 to 25% error on the recovered brain absorption, with large variations over the range of true optical properties. The two-layer slab model improved the results only moderately over the homogeneous one.
On the other hand, using a generic atlas head registered to the subject's head surface decreased the error by a factor of 2. When the information is available, using the true subject anatomy offers the best performance.
NASA Technical Reports Server (NTRS)
Gastellu-Etchegorry, Jean-Philippe; Yin, Tiangang; Lauret, Nicolas; Grau, Eloi; Rubio, Jeremy; Cook, Bruce D.; Morton, Douglas C.; Sun, Guoqing
2016-01-01
Light Detection And Ranging (LiDAR) provides unique data on the 3-D structure of atmosphere constituents and the Earth's surface. Simulating LiDAR returns for different laser technologies and Earth scenes is fundamental for evaluating and interpreting signal and noise in LiDAR data. Different types of models are capable of simulating LiDAR waveforms of Earth surfaces. Semi-empirical and geometric models can be imprecise because they rely on simplified simulations of Earth surfaces and light interaction mechanisms. On the other hand, Monte Carlo ray tracing (MCRT) models are potentially accurate but require long computational time. Here, we present a new LiDAR waveform simulation tool that is based on the introduction of a quasi-Monte Carlo ray tracing approach in the Discrete Anisotropic Radiative Transfer (DART) model. Two new approaches, the so-called "box method" and "Ray Carlo method", are implemented to provide robust and accurate simulations of LiDAR waveforms for any landscape, atmosphere and LiDAR sensor configuration (view direction, footprint size, pulse characteristics, etc.). The box method accelerates the selection of the scattering direction of a photon in the presence of scatterers with non-invertible phase function. The Ray Carlo method brings traditional ray-tracking into MCRT simulation, which makes computational time independent of LiDAR field of view (FOV) and reception solid angle. Both methods are fast enough for simulating multi-pulse acquisition. Sensitivity studies with various landscapes and atmosphere constituents are presented, and the simulated LiDAR signals compare favorably with their associated reflectance images and Laser Vegetation Imaging Sensor (LVIS) waveforms. The LiDAR module is fully integrated into DART, enabling more detailed simulations of LiDAR sensitivity to specific scene elements (e.g., atmospheric aerosols, leaf area, branches, or topography) and sensor configuration for airborne or satellite LiDAR sensors.
Real-time, ray casting-based scatter dose estimation for c-arm x-ray system.
Alnewaini, Zaid; Langer, Eric; Schaber, Philipp; David, Matthias; Kretz, Dominik; Steil, Volker; Hesser, Jürgen
2017-03-01
Dosimetric control of staff exposure during interventional procedures under fluoroscopy is of high relevance. In this paper, a novel ray casting approximation of radiation transport is presented, and its potential and limitations versus full Monte Carlo transport and dose measurements are discussed. The x-ray source of a Siemens Axiom Artix C-arm is modeled by a virtual source model using a single Gaussian-shaped source. A Geant4-based Monte Carlo simulation determines the radiation transport from the source to compute scatter from the patient, the table, the ceiling and the floor. A phase space around these scatterers stores all photon information. Only those photons are traced that hit the surface of a phantom representing medical staff in the treatment room; no indirect scattering is considered, and a complete dose deposition on the surface is calculated. To evaluate the accuracy of the approximation, both experimental measurements using thermoluminescent dosimeters (TLDs) and a Geant4-based Monte Carlo simulation of dose deposition were realized for different tube angulations of the C-arm, from cranial-caudal angle 0° and from LAO (Left Anterior Oblique) 0°-90°. Since the measurements were performed on both sides of the table, using the symmetry of the setup, RAO (Right Anterior Oblique) measurements were not necessary. The Geant4 Monte Carlo simulation agreed within 3% with the measured data, which is within the accuracy of measurement and simulation. The ray casting approximation was compared to the TLD measurements; the percentage difference was -7% for tube angulations 45°-90° and -29% for tube angulations 0°-45° on the side of the x-ray source, whereas on the opposite side of the x-ray source, the differences were -83.8% and -75%, respectively.
The ray casting approximation for LAO 90° was also compared to a Monte Carlo simulation: the percentage differences were between 0.5-3% on the side of the x-ray source, where the highest doses (mainly from primary scattered photons) are usually detected, and between 2.8-20% on the side opposite to the x-ray source, where the lowest doses were detected. The dose calculation time of our approach was 0.85 seconds. The proposed approach yields a fast scatter dose estimate: the Monte Carlo simulation is run only once per x-ray tube angulation to generate the phase space files (PSF), which the ray casting approach then uses to calculate the dose from only those photons that hit a movable, elliptical-cylinder-shaped phantom; an output file records the hit positions for visualizing the scatter dose propagation on the phantom surface. With dose calculation times of less than one second, considerable time is saved compared to using a Monte Carlo simulation. Larger deviations occur only in regions with very low doses, whereas the approach provides high precision in high-dose regions. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Summarizing Monte Carlo Results in Methodological Research.
ERIC Educational Resources Information Center
Harwell, Michael R.
Monte Carlo studies of statistical tests are prominently featured in the methodological research literature. Unfortunately, the information from these studies does not appear to have significantly influenced methodological practice in educational and psychological research. One reason is that Monte Carlo studies lack an overarching theory to guide…
New Approaches and Applications for Monte Carlo Perturbation Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan
2017-02-01
This paper presents some of the recent advancements in the extension of Monte Carlo Perturbation Theory methodologies and applications. In particular, the discussed problems involve burnup calculations, perturbation calculations based on continuous energy functions, and Monte Carlo Perturbation Theory in loosely coupled systems.
NASA Astrophysics Data System (ADS)
Moglestue, C.; Buot, F. A.; Anderson, W. T.
1995-08-01
The lattice heating rate has been calculated for GaAs field-effect transistors of different source-drain channel design by means of the ensemble Monte Carlo particle model. Transport of carriers in the substrate and the presence of free surface charges are also included in our simulation. The actual heat generation was obtained by accounting for the energy exchanged with the lattice of the semiconductor during phonon scattering. It was found that the maximum heating rate takes place below the surface near the drain end of the gate. The results correlate well with a previous hydrodynamic energy transport estimate of the electronic energy density, but are shifted slightly more towards the drain. These results further emphasize the adverse effects of hot electrons on the Ohmic contacts.
El-Jaby, Samy
2016-06-01
A recent paper published in Life Sciences in Space Research (El-Jaby and Richardson, 2015) presented estimates of the secondary neutron ambient and effective dose equivalent rates, in air, from surface altitudes up to suborbital altitudes and low Earth orbit. These estimates were based on MCNPX (Monte Carlo N-Particle eXtended; LANL, 2011) radiation transport simulations of galactic cosmic radiation passing through Earth's atmosphere. During a recent review of the input decks used for these simulations, a systematic error was discovered that is addressed here. After reassessment, the estimated neutron ambient and effective dose equivalent rates differ by 10 to 15%, though the essence of the conclusions drawn remains unchanged. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Dynamic Event Tree advancements and control logic improvements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego
The RAVEN code has been under development at the Idaho National Laboratory since 2012. Its main goal is to create a multi-purpose platform for deploying all the capabilities needed for probabilistic risk assessment, uncertainty quantification, data mining analysis and optimization studies. RAVEN is currently equipped with three different sampling categories: Forward samplers (Monte Carlo, Latin Hypercube, Stratified, Grid, Factorial, etc.), Adaptive samplers (Limit Surface search, Adaptive Polynomial Chaos, etc.) and Dynamic Event Tree (DET) samplers (Deterministic and Adaptive Dynamic Event Trees). The main subject of this document is to report the activities that have been done in order to: start the migration of the RAVEN/RELAP-7 control logic system into MOOSE, and develop advanced dynamic sampling capabilities based on the Dynamic Event Tree approach. In order to provide a control logic capability to all MOOSE-based applications, an initial migration activity was started this fiscal year, moving the control logic system designed for RELAP-7 by the RAVEN team into the MOOSE framework. A brief explanation of what has been done is reported in this document. The second and most important subject of this report is the development of a Dynamic Event Tree (DET) sampler named "Hybrid Dynamic Event Tree" (HDET) and its adaptive variant, the "Adaptive Hybrid Dynamic Event Tree" (AHDET). As other authors have already reported, it is possible to discern two principal types of uncertainty: aleatory and epistemic. The classical Dynamic Event Tree treats the first class (aleatory) of uncertainties; the dependence of probabilistic risk assessment and analysis on the epistemic uncertainties is treated by an initial Monte Carlo sampling (MCDET). From each Monte Carlo sample, a DET analysis is run (in total, N trees).
The Monte Carlo employs a pre-sampling of the input space characterized by epistemic uncertainties. The consequent Dynamic Event Tree performs the exploration of the aleatory space. In the RAVEN code, a more general approach has been developed, not limiting the exploration of the epistemic space to a Monte Carlo method but using all the forward sampling strategies RAVEN currently employs. The user can combine Latin Hypercube, Grid, Stratified and Monte Carlo sampling in order to explore the epistemic space, without any limitation. From this pre-sampling, the Dynamic Event Tree sampler starts its aleatory space exploration. As reported by the authors, the Dynamic Event Tree is a good fit for developing a goal-oriented sampling strategy. The DET is used to drive a Limit Surface search. The methodology developed by the authors last year performs a Limit Surface search in the aleatory space only. This report documents how this approach has been extended in order to consider the epistemic space interacting with the Hybrid Dynamic Event Tree methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, J; Grigor, G
This study investigated the dosimetric impact of bone backscatter in orthovoltage radiotherapy. Monte Carlo simulations with the EGSnrc-based code were used to calculate depth doses and photon fluence spectra. An inhomogeneous bone phantom containing a thin water layer (1-3 mm) on top of a bone layer (1 cm), mimicking the treatment sites of the forehead, chest wall and kneecap, was irradiated by the 220 kVp photon beam produced by the Gulmay D3225 x-ray machine. Percentage depth doses and photon energy spectra were determined using Monte Carlo simulations. The percentage depth dose results showed that the maximum bone dose was about 210-230% larger than the surface dose in the phantoms with different water thicknesses. The surface dose was found to increase from 2.3 to 3.5% when the distance between the phantom surface and the bone was increased from 1 to 3 mm. This increase of surface dose on top of the bone was due to the increase of photon fluence intensity resulting from bone backscatter in the energy range of 30-120 keV as the water thickness was increased. This was also supported by the increase in intensity of the photon energy spectral curves at the phantom and bone surfaces as the water thickness was increased. It is concluded that if the bone inhomogeneity is not considered during dose prescription in the sites of the forehead, chest wall and kneecap with soft tissue thickness of 1-3 mm, there would be an uncertainty in the dose delivery.
Nedea, S V; van Steenhoven, A A; Markvoort, A J; Spijker, P; Giordano, D
2014-05-01
The influence of gas-surface interactions of a dilute gas confined between two parallel walls on the heat flux predictions is investigated using a combined Monte Carlo (MC) and molecular dynamics (MD) approach. The accommodation coefficients are computed from the temperatures of incident and reflected molecules in molecular dynamics and used as effective coefficients in Maxwell-like boundary conditions in Monte Carlo simulations. Hydrophobic and hydrophilic wall interactions are studied, and the effect of the gas-surface interaction potential on the heat flux and other characteristic parameters like density and temperature is shown. The heat flux dependence on the accommodation coefficient is shown for different fluid-wall mass ratios. We find that the accommodation coefficient increases considerably as the mass ratio decreases. An effective map of the heat flux as a function of the accommodation coefficient is given, and we show that MC heat flux predictions using Maxwell boundary conditions based on the accommodation coefficient give good results when compared to pure molecular dynamics heat flux predictions. The accommodation coefficients computed for a dilute gas for different gas-wall interaction parameters and mass ratios are transferred to compute the heat flux predictions for a dense gas. Comparisons of the heat fluxes derived using explicit MD, MC with Maxwell-like boundary conditions based on the accommodation coefficients, and pure Maxwell boundary conditions are discussed. A map of the heat flux dependence on the accommodation coefficients for a dense gas and the effective accommodation coefficients for different gas-wall interactions are given. Finally, this approach is applied to study the gas-surface interactions of argon and xenon molecules on a platinum surface. The derived accommodation coefficients are compared with experimental values.
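The effective accommodation coefficients described above are conventionally derived from the mean temperatures of incident and reflected molecules relative to the wall temperature. A minimal sketch of that bookkeeping follows (the standard thermal-accommodation formula with an invented function name, not code from this study):

```python
def accommodation_coefficient(t_incident, t_reflected, t_wall):
    """Thermal accommodation coefficient from the mean temperatures of
    incident and reflected molecules and the wall temperature:
        alpha = (T_i - T_r) / (T_i - T_w)
    alpha = 1 means the gas fully thermalizes with the wall (diffuse
    reflection); alpha = 0 means purely specular reflection."""
    return (t_incident - t_reflected) / (t_incident - t_wall)

# Gas arriving at 400 K, leaving at 330 K from a 300 K wall: 70% accommodated.
print(accommodation_coefficient(400.0, 330.0, 300.0))  # 0.7
```

In the combined approach, MD supplies the incident/reflected temperatures and the resulting alpha is fed into the Maxwell-like boundary condition of the MC simulation.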
A Monte Carlo Simulation of Brownian Motion in the Freshman Laboratory
ERIC Educational Resources Information Center
Anger, C. D.; Prescott, J. R.
1970-01-01
Describes a "dry-lab" experiment for the college freshman laboratory in which the essential features of Brownian motion are simulated from first principles using the Monte Carlo technique. Calculations are carried out by a computation scheme based on a computer language. Bibliography. (LC)
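A "dry-lab" Brownian motion exercise of this kind typically reduces to a two-dimensional random walk whose mean squared displacement grows linearly with the number of steps. A hedged sketch in Python (the original used a different computation scheme; the names and parameters here are illustrative):

```python
import math
import random

def random_walk_msd(n_walkers, n_steps, seed=0):
    """Monte Carlo model of two-dimensional Brownian motion: each particle
    takes unit-length steps in uniformly random directions, and the mean
    squared displacement (MSD) is averaged over all particles. For this
    walk the expected MSD equals the number of steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = y = 0.0
        for _ in range(n_steps):
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total += x * x + y * y   # squared displacement of this walker
    return total / n_walkers

# Diffusive scaling: the MSD after 100 steps should be close to 100.
msd = random_walk_msd(n_walkers=2000, n_steps=100)
print(round(msd))
```

Plotting MSD against step count and fitting the slope is the usual way the "experiment" recovers the diffusion law in class.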
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piao, J; PLA 302 Hospital, Beijing; Xu, S
2016-06-15
Purpose: This study used Monte Carlo methods to simulate the CyberKnife system, with the intent of developing a third-party tool to evaluate the dose verification of specific patient plans in the TPS. Methods: By simulating the treatment head using the BEAMnrc and DOSXYZnrc software, calculated and measured data were compared to determine the beam parameters. The dose distributions calculated by the Ray-tracing and Monte Carlo algorithms of the TPS (MultiPlan Ver4.0.2) and by the in-house Monte Carlo simulation method were analyzed for 30 patient plans, comprising 10 head, 10 lung and 10 liver cases. A γ analysis with the combined 3 mm/3% criteria was introduced to quantitatively evaluate the differences in accuracy between the three algorithms. Results: More than 90% of the global error points were less than 2% in the comparison of the PDD and OAR curves after determining the mean energy and FWHM; a relatively ideal Monte Carlo beam model was thereby established. Based on the quantitative evaluation of dose accuracy for the three algorithms, the γ analysis shows that the passing rates of the PTV in the 30 plans between the Monte Carlo simulation and the TPS Monte Carlo algorithm were good (84.88±9.67% for head, 98.83±1.05% for liver, 98.26±1.87% for lung). The passing rates of the PTV in head and liver plans between the Monte Carlo simulation and the TPS Ray-tracing algorithm were also good (95.93±3.12% and 99.84±0.33%, respectively). However, the difference in DVHs in lung plans between the Monte Carlo simulation and the Ray-tracing algorithm was obvious, and the passing rate of the γ criteria (51.263±38.964%) was not good. It is feasible to use Monte Carlo simulation for verifying the dose distributions of patient plans. Conclusion: The Monte Carlo simulation algorithm developed for the CyberKnife system in this study can be used as a third-party reference tool, which plays an important role in dose verification of patient plans.
This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11275105). Thanks for the support from Accuray Corp.
A study on the suitability of the PTW microDiamond detector for kilovoltage x-ray beam dosimetry.
Damodar, Joshita; Odgers, David; Pope, Dane; Hill, Robin
2018-05-01
Kilovoltage x-ray beams are widely used in treating skin cancers and in biological irradiators. In this work, we evaluated four dosimeters (ionization chambers and solid state detectors) for their suitability for relative dosimetry of kilovoltage x-ray beams in the energy range of 50-280 kVp. The solid state detectors, which had not been investigated with low energy x-rays, were the PTW 60019 microDiamond synthetic diamond detector and the PTW 60012 diode. The two ionization chambers were the PTW Advanced Markus parallel plate chamber and the PTW PinPoint small volume chamber. For each dosimeter, percentage depth doses were measured in water over the full range of x-ray beams and for field sizes ranging from 2 cm diameter to 12 × 12 cm. In addition, depth doses were measured for a narrow aperture (7 mm diameter) using the PTW microDiamond detector. The measured data were compared with doses calculated using the EGSnrc Monte Carlo package. The depth dose results indicate that the Advanced Markus parallel plate and PinPoint ionization chambers were suitable for depth dose measurements in this beam quality range with an uncertainty of less than 3%, including in the regions closer to the surface of the water, as compared with the Monte Carlo depth dose data for all six beam energies. The response of the PTW Diode E detector was accurate to within 4% for all field sizes in the energy range of 50-125 kVp but showed larger variations, of up to 12%, at higher energies with the 12 × 12 cm field size. In comparison, the microDiamond detector showed good agreement over all energies for both smaller and larger field sizes, generally within 1% of the Advanced Markus chamber measurements and the Monte Carlo calculations. The only exceptions were in measuring the dose at the surface of the water phantom, where larger differences were found.
For the 7 mm diameter field, the agreement between the microDiamond detector and Monte Carlo calculations was good, better than 1% except at the surface. Based on these results, the PTW microDiamond detector has been shown to be a suitable detector for relative dosimetry of low energy x-ray beams over a wide range of beam energies. Copyright © 2018 Elsevier Ltd. All rights reserved.
Surface excitations in electron spectroscopy. Part I: dielectric formalism and Monte Carlo algorithm
Salvat-Pujol, F; Werner, W S M
2013-01-01
The theory describing energy losses of charged non-relativistic projectiles crossing a planar interface is derived on the basis of the Maxwell equations, outlining the physical assumptions of the model in great detail. The employed approach is very general in that various common models for surface excitations (such as the specular reflection model) can be obtained by an appropriate choice of parameter values. The dynamics of charged projectiles near surfaces is examined by calculations of the induced surface charge and the depth- and direction-dependent differential inelastic inverse mean free path (DIIMFP) and stopping power. The effect of several simplifications frequently encountered in the literature is investigated: differences of up to 100% are found in heights, widths, and positions of peaks in the DIIMFP. The presented model is implemented in a Monte Carlo algorithm for the simulation of the electron transport relevant for surface electron spectroscopy. Simulated reflection electron energy loss spectra are in good agreement with experiment on an absolute scale. Copyright © 2012 John Wiley & Sons, Ltd. PMID:23794766
Spin-Ice Thin Films: Large-N Theory and Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Lantagne-Hurtubise, Étienne; Rau, Jeffrey G.; Gingras, Michel J. P.
2018-04-01
We explore the physics of highly frustrated magnets in confined geometries, focusing on the Coulomb phase of pyrochlore spin ices. As a specific example, we investigate thin films of nearest-neighbor spin ice, using a combination of analytic large-N techniques and Monte Carlo simulations. In the simplest film geometry, with surfaces perpendicular to the [001] crystallographic direction, we observe pinch points in the spin-spin correlations characteristic of a two-dimensional Coulomb phase. We then consider the consequences of crystal symmetry breaking on the surfaces of the film through the inclusion of orphan bonds. We find that when these bonds are ferromagnetic, the Coulomb phase is destroyed by the presence of fluctuating surface magnetic charges, leading to a classical Z2 spin liquid. Building on this understanding, we discuss other film geometries with surfaces perpendicular to the [110] or the [111] direction. We generically predict the appearance of surface magnetic charges and discuss their implications for the physics of such films, including the possibility of an unusual Z3 classical spin liquid. Finally, we comment on open questions and promising avenues for future research.
Surface Segregation in Cu-Ni Alloys
NASA Technical Reports Server (NTRS)
Good, Brian; Bozzolo, Guillermo; Ferrante, John
1993-01-01
Monte Carlo simulation is used to calculate the composition profiles of surface segregation of Cu-Ni alloys. The method of Bozzolo, Ferrante, and Smith is used to compute the energetics of these systems as a function of temperature, crystal face, and bulk concentration. The predictions are compared with other theoretical and experimental results.
How Monte Carlo heuristics aid to identify the physical processes of drug release kinetics.
Lecca, Paola
2018-01-01
We implement a Monte Carlo heuristic algorithm to model drug release from a solid dosage form. We show that with Monte Carlo simulations it is possible to identify and explain the causes of the unsatisfactory predictive power of current drug release models. It is well known that the power-law and exponential models, as well as those derived from or inspired by them, accurately reproduce only the first 60% of the release curve of a drug from a dosage form. In this study, using Monte Carlo simulation approaches, we show that these models fit quite accurately almost the entire release profile when the release kinetics is not governed by the coexistence of different physico-chemical mechanisms. We show that the accuracy of the traditional models is comparable with that of Monte Carlo heuristics when these heuristics approximate and oversimplify the phenomenology of drug release. This observation suggests developing and using novel Monte Carlo simulation heuristics able to describe the complexity of the release kinetics, and consequently to generate data more similar to those observed in real experiments. Implementing Monte Carlo simulation heuristics of the drug release phenomenology may be much more straightforward and efficient than hypothesizing and implementing complex mathematical models of the physical processes involved in drug release from scratch. Identifying and understanding through simulation heuristics which processes of this phenomenology reproduce the observed data, and then formalizing them mathematically, may avoid time-consuming, trial-and-error regression procedures. Three bullet points highlight the customization of the procedure:
•An efficient heuristic based on Monte Carlo methods for simulating drug release from a solid dosage form is presented. It specifies the model of the physical process in a simple but accurate way in the formula of the Monte Carlo Micro Step (MCS) time interval.
•Given the experimentally observed curve of drug release, we point out how Monte Carlo heuristics can be integrated in an evolutionary algorithmic approach to infer the model of MCS best fitting the observed data, and thus the observed release kinetics.
•The software implementing the method is written in the R language, a free language widely used in the bioinformatics community.
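As a rough illustration of what such a Monte Carlo release heuristic can look like, the toy sketch below random-walks drug particles out of a one-dimensional matrix and records the cumulative release curve. This is a generic simplification for intuition only, not the authors' R implementation; all names and parameters are invented:

```python
import random

def simulate_release(n_particles=2000, n_sites=20, n_steps=400, seed=1):
    """Toy Monte Carlo heuristic for drug release: particles start at random
    depths in a one-dimensional matrix of n_sites layers and random-walk one
    layer per Monte Carlo step. A particle crossing the outer surface
    (index < 0) counts as released; the far wall is reflecting. Returns the
    cumulative released fraction after each step."""
    rng = random.Random(seed)
    inside = [rng.randrange(n_sites) for _ in range(n_particles)]
    released = 0
    curve = []
    for _ in range(n_steps):
        remaining = []
        for p in inside:
            p += rng.choice((-1, 1))
            if p < 0:
                released += 1                          # escaped through the surface
            else:
                remaining.append(min(p, n_sites - 1))  # reflect at the back wall
        inside = remaining
        curve.append(released / n_particles)
    return curve

curve = simulate_release()
# Release is cumulative, so the curve never decreases and stays in [0, 1].
print(curve[0] <= curve[-1] <= 1.0)
```

Varying the step rule (e.g. depth-dependent hop probabilities) is how such a heuristic encodes different physical release mechanisms, which is the point the abstract makes about the MCS time-interval formula.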
Path integral Monte Carlo and the electron gas
NASA Astrophysics Data System (ADS)
Brown, Ethan W.
Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems: the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas across a large swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation.
As a first step, we utilize the variational principle inherent in the path integral Monte Carlo method to optimize the nodal surface. By using an ansatz resembling a free particle density matrix, we make a unique connection between a nodal effective mass and the traditional effective mass of many-body quantum theory. We then propose and test several alternative nodal ansatzes and apply them to single atomic systems. Finally, we propose a method to tackle the sign problem head on, by leveraging the relatively simple structure of permutation space. Using this method, we find we can perform exact simulations of the electron gas and 3He that were previously impossible.
NASA Astrophysics Data System (ADS)
Kazantsev, D. M.; Akhundov, I. O.; Shwartz, N. L.; Alperovich, V. L.; Latyshev, A. V.
2015-12-01
Ostwald ripening and step-terraced morphology formation on the GaAs(0 0 1) surface during annealing in equilibrium conditions are investigated experimentally and by Monte Carlo simulation. Fourier and autocorrelation analyses are used to reveal surface relief anisotropy and to provide information about the shape and size distribution of islands and pits. Two origins of surface anisotropy are revealed. At the initial stage of surface smoothing, crystallographic anisotropy is observed, which is caused presumably by the anisotropy of surface diffusion on GaAs(0 0 1). A difference of diffusion activation energies along the [1 1 0] and [1 -1 0] axes of the (0 0 1) face is estimated as ΔEd ≈ 0.1 eV from the comparison of experimental results and simulation. At later stages of surface smoothing, the anisotropy of the surface relief is determined by the vicinal step direction. At the initial stage of step-terraced morphology formation, the growth kinetics of monatomic islands and pits agrees with the Ostwald ripening theory. At the final stage, the size of islands and pits decreases due to their incorporation into the forming vicinal steps.
Linear and Non-Linear Dielectric Response of Periodic Systems from Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Umari, Paolo
2006-03-01
We present a novel approach that makes it possible to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization of the electric enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation of the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wavefunction, allowing for accurate diffusion quantum Monte Carlo calculations where the polarization's fixed point is estimated from the average over an iterative sequence. The polarization is sampled through forward-walking. This approach has been validated for the case of the polarizability of an isolated hydrogen atom, and then applied to a periodic system. We then calculate the linear susceptibility and second-order hyper-susceptibility of molecular-hydrogen chains with different bond-length alternations, and assess the quality of nodal surfaces derived from density-functional theory or from Hartree-Fock. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations. P. Umari, A.J. Williamson, G. Galli, and N. Marzari, Phys. Rev. Lett. 95, 207602 (2005).
Common radiation analysis model for 75,000 pound thrust NERVA engine (1137400E)
NASA Technical Reports Server (NTRS)
Warman, E. A.; Lindsey, B. A.
1972-01-01
The mathematical model and sources of radiation used for the radiation analysis and shielding activities in support of the design of the 1137400E version of the 75,000 lbs thrust NERVA engine are presented. The nuclear subsystem (NSS) and non-nuclear components are discussed. The geometrical model for the NSS is two dimensional as required for the DOT discrete ordinates computer code or for an azimuthally symmetrical three dimensional Point Kernel or Monte Carlo code. The geometrical model for the non-nuclear components is three dimensional in the FASTER geometry format. This geometry routine is inherent in the ANSC versions of the QAD and GGG Point Kernel programs and the COHORT Monte Carlo program. Data are included pertaining to a pressure vessel surface radiation source data tape which has been used as the basis for starting ANSC analyses with the DASH code to bridge into the COHORT Monte Carlo code using the WANL supplied DOT angular flux leakage data. In addition to the model descriptions and sources of radiation, the methods of analyses are briefly described.
Selb, Juliette; Ogden, Tyler M.; Dubb, Jay; Fang, Qianqian; Boas, David A.
2014-01-01
Near-infrared spectroscopy (NIRS) estimations of the adult brain baseline optical properties based on a homogeneous model of the head are known to introduce significant contamination from extracerebral layers. More complex models have been proposed and occasionally applied to in vivo data, but their performances have never been characterized on realistic head structures. Here we implement a flexible fitting routine of time-domain NIRS data using graphics processing unit based Monte Carlo simulations. We compare the results for two different geometries: a two-layer slab with variable thickness of the first layer and a template atlas head registered to the subject’s head surface. We characterize the performance of the Monte Carlo approaches for fitting the optical properties from simulated time-resolved data of the adult head. We show that both geometries provide better results than the commonly used homogeneous model, and we quantify the improvement in terms of accuracy, linearity, and cross-talk from extracerebral layers. PMID:24407503
A Primer in Monte Carlo Integration Using Mathcad
ERIC Educational Resources Information Center
Hoyer, Chad E.; Kegerreis, Jeb S.
2013-01-01
The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…
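The essential idea of plain Monte Carlo integration, averaging the integrand at uniformly random sample points and scaling by the interval width, can be shown in a few lines. A minimal sketch (illustrative, not taken from the Mathcad document described above):

```python
import random

def mc_integrate(f, a, b, n_samples, seed=0):
    """Plain Monte Carlo integration of f over [a, b]:
    (b - a) times the average of f at uniform random points.
    The statistical error shrinks like 1/sqrt(n_samples)."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n_samples))
    return (b - a) * total / n_samples

# Estimate the integral of x^2 on [0, 1]; the exact value is 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
print(round(estimate, 2))
```

Repeating the run with different sample counts and watching the error fall off as 1/sqrt(N) is the standard pedagogical exercise such a primer builds on.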
An unbiased Hessian representation for Monte Carlo PDFs.
Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, José Ignacio; Rojo, Juan
We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, when applied to a Hessian PDF set (MMHT14) which has been transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available, together with (through LHAPDF6) a Hessian representation of the NNPDF3.0 set and the MC-H PDF set.
Numerical integration of detector response functions via Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
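The speedup described above comes from precomputing the response once and then applying it as a simple linear fold: measured[i] = Σ_j R[i][j]·true[j]. A toy sketch of that folding step (a generic illustration with an invented 2-bin response matrix, not the Chi-Nu analysis code):

```python
def fold_response(response, true_spectrum):
    """Apply a precomputed detector response matrix to a 'true' spectrum:
    measured[i] = sum_j response[i][j] * true[j]. Once the matrix is built
    from Monte Carlo, producing each new output spectrum is just this
    multiplication, which is why it is orders of magnitude faster than
    re-running the simulation."""
    return [sum(response[i][j] * true_spectrum[j]
                for j in range(len(true_spectrum)))
            for i in range(len(response))]

# 2-bin toy: each incident bin deposits 80% in its own bin, 20% one bin lower.
resp = [[0.8, 0.2],
        [0.0, 0.8]]
print(fold_response(resp, [100.0, 50.0]))  # [90.0, 40.0]
```

In a real analysis the columns of the matrix would each come from a dedicated Monte Carlo run, and the fold would be repeated thousands of times during spectrum unfolding or fitting.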
NOTE: Monte Carlo evaluation of kerma in an HDR brachytherapy bunker
NASA Astrophysics Data System (ADS)
Pérez-Calatayud, J.; Granero, D.; Ballester, F.; Casal, E.; Crispin, V.; Puchades, V.; León, A.; Verdú, G.
2004-12-01
In recent years, the use of high dose rate (HDR) after-loader machines has greatly increased due to the shift from traditional Cs-137/Ir-192 low dose rate (LDR) to HDR brachytherapy. The method used to calculate the required concrete and, where appropriate, lead shielding in the door is based on analytical methods provided by documents published by the ICRP, the IAEA and the NCRP. The purpose of this study is to perform a more realistic kerma evaluation at the entrance maze door of an HDR bunker using the Monte Carlo code GEANT4. The Monte Carlo results were validated experimentally. The spectrum at the maze entrance door, obtained with Monte Carlo, has an average energy of about 110 keV, maintaining a similar value along the length of the maze. Comparison of the analytical results with the Monte Carlo ones shows that the results obtained using the albedo coefficient from the ICRP document most closely match those given by the Monte Carlo method, although the maximum value given by the MC calculations is 30% greater.
Numerical integration of detector response functions via Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.
2017-09-01
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼ 1000 × faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
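The ~1000× speedup described above comes from the fact that once the response function is tabulated, folding a trial source spectrum through it is just a matrix-vector product rather than a new transport simulation. A minimal sketch, with illustrative names (bin conventions are an assumption, not taken from the paper):

```python
def fold_spectrum(response, source):
    """Fold a trial source spectrum through a precomputed detector
    response matrix: out[j] = sum_i R[j][i] * s[i].

    `response[j][i]` is the (Monte Carlo pre-tabulated) probability
    that a particle emitted in source bin i is recorded in detector
    bin j; names and binning are illustrative.
    """
    n_out = len(response)
    return [sum(response[j][i] * source[i] for i in range(len(source)))
            for j in range(n_out)]
```

Each candidate source spectrum in a fit can then be evaluated with this fold instead of a fresh simulation, which is where the orders-of-magnitude savings arise; the uncertainty treatment discussed in the paper accounts for the statistical noise frozen into the tabulated matrix.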
Numerical integration of detector response functions via Monte Carlo simulations
Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.; ...
2017-06-13
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
Monte Carlo Calculations of Polarized Microwave Radiation Emerging from Cloud Structures
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Roberti, Laura
1998-01-01
The last decade has seen tremendous growth in cloud dynamical and microphysical models that are able to simulate storms and storm systems with very high spatial resolution, typically of the order of a few kilometers. The fairly realistic distributions of cloud and hydrometeor properties that these models generate has in turn led to a renewed interest in the three-dimensional microwave radiative transfer modeling needed to understand the effect of cloud and rainfall inhomogeneities upon microwave observations. Monte Carlo methods, and particularly backwards Monte Carlo methods have shown themselves to be very desirable due to the quick convergence of the solutions. Unfortunately, backwards Monte Carlo methods are not well suited to treat polarized radiation. This study reviews the existing Monte Carlo methods and presents a new polarized Monte Carlo radiative transfer code. The code is based on a forward scheme but uses aliasing techniques to keep the computational requirements equivalent to the backwards solution. Radiative transfer computations have been performed using a microphysical-dynamical cloud model and the results are presented together with the algorithm description.
Monte Carlo simulations in X-ray imaging
NASA Astrophysics Data System (ADS)
Giersch, Jürgen; Durst, Jürgen
2008-06-01
Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the fields of nuclear medicine to define virtual setups studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with some examples done by the Monte Carlo simulation ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel computing Monte Carlo simulation for X-ray imaging.
Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept
NASA Technical Reports Server (NTRS)
Thipphavong, David
2010-01-01
Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
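The hybrid idea sketched above can be illustrated as follows: the fault tree contributes exact probabilities for each rare failure combination, and Monte Carlo is spent only on the trajectory-dependent conditional outcome. This is an illustrative sketch of the decomposition, not the NASA implementation; all names are hypothetical:

```python
import random

def risk_estimate(cut_set_probs, conditional_collision,
                  n_samples=10000, seed=1):
    """Fault-tree/Monte Carlo hybrid (illustrative): total risk is
    sum over failure combinations of P(combination) times a Monte
    Carlo estimate of the collision probability given that
    combination. Sampling effort goes only to the conditional part.
    """
    rng = random.Random(seed)
    total = 0.0
    for combo, p_combo in cut_set_probs.items():
        # conditional_collision(combo, rng) simulates one encounter
        # under failure combination `combo`; returns 1 or 0.
        hits = sum(conditional_collision(combo, rng)
                   for _ in range(n_samples))
        total += p_combo * hits / n_samples
    return total
```

Because the rare failure probabilities are handled analytically, far fewer samples are needed than if the standard Monte Carlo had to wait for those failures to occur by chance, which is consistent with the order-of-magnitude speedup reported above.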
Dosimetry of gamma chamber blood irradiator using PAGAT gel dosimeter and Monte Carlo simulations
Mohammadyari, Parvin; Zehtabian, Mehdi; Sina, Sedigheh; Tavasoli, Ali Reza
2014-01-01
Currently, the use of blood irradiation for inactivating pathogenic microbes in infected blood products and preventing graft‐versus‐host disease (GVHD) in immune suppressed patients is greater than ever before. In these systems, dose distribution and uniformity are two important concepts that should be checked. In this study, dosimetry of the gamma chamber blood irradiator model Gammacell 3000 Elan was performed by several dosimeter methods including thermoluminescence dosimeters (TLD), PAGAT gel dosimetry, and Monte Carlo simulations using MCNP4C code. The gel dosimeter was put inside a glass phantom and the TL dosimeters were placed on its surface, and the phantom was then irradiated for 5 min and 27 sec. The dose values at each point inside the vials were obtained from the magnetic resonance imaging of the phantom. For Monte Carlo simulations, all components of the irradiator were simulated and the dose values in a fine cubical lattice were calculated using tally F6. This study shows that PAGAT gel dosimetry results are in close agreement with the results of TL dosimetry, Monte Carlo simulations, and the results given by the vendor, and the percentage difference between the different methods is less than 4% at different points inside the phantom. According to the results obtained in this study, PAGAT gel dosimetry is a reliable method for dosimetry of the blood irradiator. The major advantage of this kind of dosimetry is that it is capable of 3D dose calculation. PACS number: 87.53.Bn PMID:24423829
Initial Assessment of a Rapid Method of Calculating CEV Environmental Heating
NASA Technical Reports Server (NTRS)
Pickney, John T.; Milliken, Andrew H.
2010-01-01
An innovative method for rapidly calculating spacecraft environmental absorbed heats in planetary orbit is described. The method reads a database of pre-calculated orbital absorbed heats and adjusts those heats for the desired orbit parameters. The approach differs from traditional Monte Carlo methods, which are orbit based with a planet-centered coordinate system; the database is instead built on a spacecraft-centered coordinate system in which the range of all possible sun and planet look angles is evaluated. In an example case, 37,044 orbit configurations were analyzed for average orbital heats on selected spacecraft surfaces. Calculation time was under 2 minutes, while a comparable Monte Carlo evaluation would have taken an estimated 26 hours.
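The "adjust the pre-calculated heats" step presumably amounts to interpolating the look-angle database at the desired orbit geometry. A minimal sketch of such a lookup using bilinear interpolation over two look angles (the table layout and angle names are assumptions for illustration, not taken from the paper):

```python
import bisect

def interp_heat(table, betas, alphas, beta, alpha):
    """Bilinear interpolation into a pre-computed absorbed-heat
    database indexed by two look angles (illustrative stand-in for
    the spacecraft-centered database described above).

    `table[i][j]` holds the heat for betas[i], alphas[j]; queries
    outside the grid are clamped to the nearest cell.
    """
    i = min(max(bisect.bisect_right(betas, beta) - 1, 0), len(betas) - 2)
    j = min(max(bisect.bisect_right(alphas, alpha) - 1, 0), len(alphas) - 2)
    tb = (beta - betas[i]) / (betas[i + 1] - betas[i])
    ta = (alpha - alphas[j]) / (alphas[j + 1] - alphas[j])
    return ((1 - tb) * (1 - ta) * table[i][j]
            + tb * (1 - ta) * table[i + 1][j]
            + (1 - tb) * ta * table[i][j + 1]
            + tb * ta * table[i + 1][j + 1])
```

A table lookup like this costs microseconds per query, which is why sweeping tens of thousands of orbit configurations finishes in minutes rather than the hours a fresh Monte Carlo run per configuration would need.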
Atomic Oxygen Energy in Low Frequency Hyperthermal Plasma Ashers
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Miller, Sharon K R.; Kneubel, Christian A.
2014-01-01
Experimental and analytical study of the atomic oxygen erosion of pyrolytic graphite, together with Monte Carlo computational modeling of the erosion of Kapton H (DuPont, Wilmington, DE) polyimide, was performed to determine the hyperthermal energy of low frequency (30 to 35 kHz) plasma ashers operating on air. It was concluded that hyperthermal energies in the range of 0.3 to 0.9 eV are produced in the low frequency air plasmas, resulting in texturing similar to that in low Earth orbit (LEO). Monte Carlo computational modeling also indicated that such low energy directed ions are fully capable of producing the experimentally observed textured surfaces in low frequency plasmas.
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.
2016-03-01
We present a new Monte Carlo based approach for modelling the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. Variations in both skin tissue structure and the major chromophores are taken into account for different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. Simulated human skin reflectance spectra, the corresponding skin colours and examples of 3D face rendering are presented and compared with the results of phantom studies.
Monte Carlo simulation of Hamaker nanospheres coated with dipolar particles
NASA Astrophysics Data System (ADS)
Meyra, Ariel G.; Zarragoicoechea, Guillermo J.; Kuz, Victor A.
2012-01-01
Parallel tempering Monte Carlo simulation is carried out in systems of N attractive Hamaker spheres dressed with n dipolar particles, able to move on the surface of the spheres. Different cluster configurations emerge for given values of the control parameters. Energy per sphere, pair distribution functions of spheres and dipoles as function of temperature, density, external electric field, and/or the angular orientation of dipoles are used to analyse the state of aggregation of the system. As a consequence of the non-central interaction, the model predicts complex structures like self-assembly of spheres by a double crown of dipoles. This interesting result could be of help in understanding some recent experiments in colloidal science and biology.
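The parallel tempering used above relies on periodically attempting to exchange configurations between replicas at different temperatures, with the standard acceptance rule min(1, exp[(β_i − β_j)(E_i − E_j)]). A minimal sketch of that swap move (illustrative, not the authors' code; here only the energies are exchanged to stand in for the full configuration swap):

```python
import math
import random

def attempt_swap(energies, betas, i, j, rng):
    """Replica-exchange (parallel tempering) swap between replicas
    i and j: accept with probability min(1, exp(dBeta * dE)).
    On acceptance the replicas trade states (represented here by
    their energies only, for brevity)."""
    d = (betas[i] - betas[j]) * (energies[i] - energies[j])
    if d >= 0 or rng.random() < math.exp(d):
        energies[i], energies[j] = energies[j], energies[i]
        return True
    return False
```

Swaps like this let cluster configurations trapped at low temperature escape via the high-temperature replicas, which is why the method can equilibrate the strongly aggregating dipolar-sphere system.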
Discrete Diffusion Monte Carlo for Electron Thermal Transport
NASA Astrophysics Data System (ADS)
Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory
2014-10-01
The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid Implicit Monte Carlo-DDMC (IMC-DDMC) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratories, Albuquerque.
Cell-veto Monte Carlo algorithm for long-range systems.
Kapfer, Sebastian C; Krauth, Werner
2016-09-01
We present a rigorous efficient event-chain Monte Carlo algorithm for long-range interacting particle systems. Using a cell-veto scheme within the factorized Metropolis algorithm, we compute each single-particle move with a fixed number of operations. For slowly decaying potentials such as Coulomb interactions, screening line charges allow us to take into account periodic boundary conditions. We discuss the performance of the cell-veto Monte Carlo algorithm for general inverse-power-law potentials, and illustrate how it provides a new outlook on one of the prominent bottlenecks in large-scale atomistic Monte Carlo simulations.
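The factorized Metropolis filter on which the cell-veto scheme builds accepts a move only if every pairwise factor accepts independently, each with probability min(1, exp(−β ΔE_pair)); a single "veto" rejects the whole move. A minimal sketch of that filter (illustrative; the cell bookkeeping that bounds the veto rates per cell is omitted):

```python
import math
import random

def factorized_accept(pair_deltas, beta, rng):
    """Factorized Metropolis filter: the move is accepted only if
    every pair factor independently accepts, each with probability
    min(1, exp(-beta * dE_pair)). One rejecting factor (a "veto")
    rejects the whole move."""
    for dE in pair_deltas:
        if dE > 0 and rng.random() >= math.exp(-beta * dE):
            return False  # this pair vetoes the move
    return True
```

The cell-veto construction then replaces the loop over all pairs with a pre-tabulated bound on the veto rate from each distant cell, so that a single-particle move costs O(1) operations regardless of system size.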
Nuclide Depletion Capabilities in the Shift Monte Carlo Code
Davidson, Gregory G.; Pandya, Tara M.; Johnson, Seth R.; ...
2017-12-21
A new depletion capability has been developed in the Exnihilo radiation transport code suite. This capability enables massively parallel domain-decomposed coupling between the Shift continuous-energy Monte Carlo solver and the nuclide depletion solvers in ORIGEN to perform high-performance Monte Carlo depletion calculations. This paper describes this new depletion capability and discusses its various features, including a multi-level parallel decomposition, high-order transport-depletion coupling, and energy-integrated power renormalization. Several test problems are presented to validate the new capability against other Monte Carlo depletion codes, and the parallel performance of the new capability is analyzed.
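Transport-depletion coupling of the kind described above alternates a transport solve (which yields fluxes for the current composition) with a depletion solve over the time step. A deliberately minimal sketch of that loop for a single nuclide with an analytic depletion solution (the solver callback and all names are illustrative stand-ins, not the Shift/ORIGEN API):

```python
import math

def deplete(n0, rate, dt):
    """One-nuclide depletion at constant transmutation rate:
    dN/dt = -rate*N, so N(dt) = N0*exp(-rate*dt)."""
    return n0 * math.exp(-rate * dt)

def depletion_loop(n0, flux_solver, sigma, dt, steps):
    """Sketch of transport-depletion coupling: each step, a (mock)
    transport solve gives the flux for the current composition,
    then the nuclide density is depleted over dt with that flux
    held fixed. `flux_solver` stands in for the Monte Carlo solve.
    """
    n = n0
    history = [n]
    for _ in range(steps):
        phi = flux_solver(n)               # transport solve (mocked)
        n = deplete(n, sigma * phi, dt)    # depletion solve
        history.append(n)
    return history
```

The production capability generalizes this to full nuclide chains (matrix exponentials in ORIGEN), higher-order coupling than the first-order splitting shown here, and a domain-decomposed parallel layout.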
Ground state of excitonic molecules by the Green's-function Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, M.A.; Vashishta, P.; Kalia, R.K.
1983-12-26
The ground-state energy of excitonic molecules is evaluated as a function of the ratio of electron and hole masses, sigma, with use of the Green's-function Monte Carlo method. For all sigma, the Green's-function Monte Carlo energies are significantly lower than the variational estimates and in favorable agreement with experiments. In excitonic rydbergs, the binding energy of the positronium molecule (sigma = 1) is predicted to be -0.06, and for sigma << 1 the Green's-function Monte Carlo energies agree with the "exact" limiting behavior, E = -2.346 + 0.764 sigma.
NASA Astrophysics Data System (ADS)
Alexander, Andrew William
Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP), provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality called energy and intensity modulated electron radiotherapy (MERT) is a promising modality, which has the fundamental capabilities to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. 
The structure of the MERT planning toolkit software and optimization algorithms are demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma cancer site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron or mixed beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.
NASA Astrophysics Data System (ADS)
Fubiani, G.; Boeuf, J. P.
2013-11-01
Results from a 3D self-consistent Particle-In-Cell Monte Carlo Collisions (PIC MCC) model of a high power fusion-type negative ion source are presented for the first time. The model is used to calculate the plasma characteristics of the ITER prototype BATMAN ion source developed in Garching. Special emphasis is put on the production of negative ions on the plasma grid surface. The question of the relative roles of the impact of neutral hydrogen atoms and positive ions on the cesiated grid surface has attracted much attention recently and the 3D PIC MCC model is used to address this question. The results show that the production of negative ions by positive ion impact on the plasma grid is small with respect to the production by atomic hydrogen or deuterium bombardment (less than 10%).
Monte Carlo Study of the Diffusion of CO Molecules inside Anthraquinone Hexagons on Cu(111)
NASA Astrophysics Data System (ADS)
Kim, Kwangmoo; Einstein, T. L.; Wyrick, Jon; Bartels, Ludwig
2010-03-01
Using Monte Carlo calculations of the two-dimensional (2D) lattice gas model, we study the diffusion of CO molecules inside anthraquinone (AQ) hexagons on a Cu(111) plane. We use experimentally derived CO-CO interactions [K.L. Wong, …, L. Bartels, J. Chem. Phys. 123, 201102 (2005)] and the analytic expression for the long-range surface-state-mediated interactions [K. Berland, T.L. Einstein, and P. Hyldgaard, Phys. Rev. B 80, 155431 (2009)] to describe the CO-AQ interactions. We assume that the CO-CO interactions are not affected by the presence of AQs and that the CO-AQ interactions can be controlled by varying the intra-surface-state (ISS) reflectance r and the ISS phase shift δ of the indirect-electronic adsorbate-pair interactions. Comparing our results with experimental observations, we find that not only pair but also surface-state-mediated trio interactions [P. Hyldgaard and T.L. Einstein, EPL 59, 265 (2002)] are needed to understand the data.
On the Connection between Kinetic Monte Carlo and the Burton-Cabrera-Frank Theory
NASA Astrophysics Data System (ADS)
Patrone, Paul; Margetis, Dionisios; Einstein, T. L.
2013-03-01
In the many years since it was first proposed, the Burton-Cabrera-Frank (BCF) model of step-flow has been experimentally established as one of the cornerstones of surface physics. However, many questions remain regarding the underlying physical processes and theoretical assumptions that give rise to the BCF theory. In this work, we formally derive the BCF theory from an atomistic, kinetic Monte Carlo model of the surface in 1+1 dimensions with one step. Our analysis (i) shows how the BCF theory describes a surface with a low density of adsorbed atoms, and (ii) establishes a set of near-equilibrium conditions ensuring that the theory remains valid for all times. Support for PP was provided by the NIST-ARRA Fellowship Award No. 70NANB10H026 through UMD. Support for TLE and PP was also provided by the CMTC at UMD, with ancillary support from the UMD MRSEC. Support for DM was provided by NSF DMS0847587 at UMD.
Unraveling the oxygen vacancy structures at the reduced CeO2(111) surface
NASA Astrophysics Data System (ADS)
Han, Zhong-Kang; Yang, Yi-Zhou; Zhu, Beien; Ganduglia-Pirovano, M. Verónica; Gao, Yi
2018-03-01
Oxygen vacancies at ceria (CeO2) surfaces play an essential role in catalytic applications. However, during the past decade, the near-surface vacancy structures at CeO2(111) have been questioned due to contradictory results from experiments and theoretical simulations. Whether surface vacancies agglomerate, and which vacancy structure is the most stable at varying vacancy concentration and temperature, are hotly debated. By combining density functional theory calculations and Monte Carlo simulations, we propose a unified model that explains all conflicting experimental observations and theoretical results. We find a novel trimeric vacancy structure, more stable than any previously reported, which reproduces the characteristics of the double linear surface oxygen vacancy clusters observed by STM. Monte Carlo simulations show that at low temperature and low vacancy concentration, vacancies prefer subsurface sites with a local (2 × 2) ordering, whereas mostly linear surface vacancy clusters form with increasing temperature and degree of reduction. These results resolve the disputes about the stable vacancy structure and surface vacancy clustering at CeO2(111), and provide a foundation for understanding the redox and catalytic chemistry of metal oxides.
Stochastic modeling of macrodispersion in unsaturated heterogeneous porous media. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, T.C.J.
1995-02-01
Spatial heterogeneity of geologic media leads to uncertainty in predicting both flow and transport in the vadose zone. In this work an efficient and flexible, combined analytical-numerical Monte Carlo approach is developed for the analysis of steady-state flow and transient transport processes in highly heterogeneous, variably saturated porous media. The approach is also used for the investigation of the validity of linear, first order analytical stochastic models. With the Monte Carlo analysis accurate estimates of the ensemble conductivity, head, velocity, and concentration mean and covariance are obtained; the statistical moments describing displacement of solute plumes, solute breakthrough at a compliance surface, and time of first exceedance of a given solute flux level are analyzed; and the cumulative probability density functions for solute flux across a compliance surface are investigated. The results of the Monte Carlo analysis show that for very heterogeneous flow fields, and particularly in anisotropic soils, the linearized, analytical predictions of soil water tension and soil moisture flux become erroneous. Analytical, linearized Lagrangian transport models also overestimate both the longitudinal and the transverse spreading of the mean solute plume in very heterogeneous soils and in dry soils. A combined analytical-numerical conditional simulation algorithm is also developed to estimate the impact of in-situ soil hydraulic measurements on reducing the uncertainty of concentration and solute flux predictions.
pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis
NASA Astrophysics Data System (ADS)
White, J.; Brakefield, L. K.
2015-12-01
The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease of use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
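The core idea of null-space Monte Carlo is to draw random parameter vectors and project them onto the null space of the model Jacobian, so each realization preserves the calibrated fit while exploring parameter uncertainty. A deliberately tiny sketch for a one-observation, two-parameter linear model (illustrative only; pyNSMC operates on full PEST Jacobians, not this toy):

```python
import random

def nsmc_realization(p_cal, j_row, rng, scale=1.0):
    """Null-space Monte Carlo draw for a single-observation linear
    model y = j . p (illustrative): a random perturbation is
    projected onto the null space of the Jacobian row, so the
    realization reproduces the calibrated observation exactly.
    """
    r = [rng.gauss(0.0, scale) for _ in p_cal]
    jj = sum(v * v for v in j_row)
    coef = sum(jv * rv for jv, rv in zip(j_row, r)) / jj
    # subtract the component of r along j_row -> null-space part
    null_part = [rv - coef * jv for jv, rv in zip(j_row, r)]
    return [p + d for p, d in zip(p_cal, null_part)]
```

In the real workflow the null space is obtained from an SVD of the Jacobian, realizations are re-run through the model, and any that drift out of calibration are polished with a short re-calibration; pyNSMC's ensemble class automates exactly this bookkeeping.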
On the modeling of the 2010 Gulf of Mexico Oil Spill
NASA Astrophysics Data System (ADS)
Mariano, A. J.; Kourafalou, V. H.; Srinivasan, A.; Kang, H.; Halliwell, G. R.; Ryan, E. H.; Roffer, M.
2011-09-01
Two oil particle trajectory forecasting systems were developed and applied to the 2010 Deepwater Horizon Oil Spill in the Gulf of Mexico. Both systems use ocean current fields from high-resolution numerical ocean circulation model simulations, Lagrangian stochastic models to represent unresolved sub-grid scale variability to advect oil particles, and Monte Carlo-based schemes for representing uncertain biochemical and physical processes. The first system assumes two-dimensional particle motion at the ocean surface, the oil is in one state, and the particle removal is modeled as a Monte Carlo process parameterized by a one number removal rate. Oil particles are seeded using both initial conditions based on observations and particles released at the location of the Maconda well. The initial conditions (ICs) of oil particle location for the two-dimensional surface oil trajectory forecasts are based on a fusing of all available information including satellite-based analyses. The resulting oil map is digitized into a shape file within which a polygon filling software generates longitude and latitude with variable particle density depending on the amount of oil present in the observations for the IC. The more complex system assumes three (light, medium, heavy) states for the oil, each state has a different removal rate in the Monte Carlo process, three-dimensional particle motion, and a particle size-dependent oil mixing model. Simulations from the two-dimensional forecast system produced results that qualitatively agreed with the uncertain "truth" fields. These simulations validated the use of our Monte Carlo scheme for representing oil removal by evaporation and other weathering processes. Eulerian velocity fields for predicting particle motion from data-assimilative models produced better particle trajectory distributions than a free running model with no data assimilation. 
Ensembles for Monte Carlo simulations of the three-dimensional oil particle trajectories were generated by perturbing the size of the oil particles and the fraction in a given size range that is released at depth, the two largest unknowns in this problem. Thirty-six realizations of the model were run with only subsurface oil releases. Averaging these results yields that after three months, about 25% of the oil remains in the water column and that most of it is below 800 m.
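The two-dimensional system's "one-number removal rate" amounts to each particle surviving a time step with probability exp(−r Δt) while being advected stochastically. A minimal sketch of that particle loop (a random-walk surrogate stands in for the circulation-model velocities; all names are illustrative):

```python
import math
import random

def advect_particles(n_particles, removal_rate, dt, steps, seed=0):
    """2D surface-oil surrogate: each particle takes a stochastic
    step per dt (standing in for advection by model currents plus
    the Lagrangian stochastic sub-grid term) and survives the step
    with probability exp(-removal_rate * dt), the one-number Monte
    Carlo removal process for weathering. Returns survivors."""
    rng = random.Random(seed)
    particles = [(0.0, 0.0)] * n_particles
    p_survive = math.exp(-removal_rate * dt)
    for _ in range(steps):
        kept = []
        for (x, y) in particles:
            if rng.random() < p_survive:
                kept.append((x + rng.gauss(0.0, 1.0),
                             y + rng.gauss(0.0, 1.0)))
        particles = kept
    return particles
```

The three-state system extends this by giving light, medium and heavy oil their own removal rates and adding vertical motion with a particle-size-dependent mixing model.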
Molecular dynamics and dynamic Monte-Carlo simulation of irradiation damage with focused ion beams
NASA Astrophysics Data System (ADS)
Ohya, Kaoru
2017-03-01
The focused ion beam (FIB) has become an important tool for micro- and nanostructuring of samples such as milling, deposition and imaging. However, this leads to damage of the surface on the nanometer scale from implanted projectile ions and recoiled material atoms. It is therefore important to investigate each kind of damage quantitatively. We present a dynamic Monte-Carlo (MC) simulation code to simulate the morphological and compositional changes of a multilayered sample under ion irradiation and a molecular dynamics (MD) simulation code to simulate dose-dependent changes in the backscattering-ion (BSI)/secondary-electron (SE) yields of a crystalline sample. Recent progress in the codes for research to simulate the surface morphology and Mo/Si layers intermixing in an EUV lithography mask irradiated with FIBs, and the crystalline orientation effect on BSI and SE yields relating to the channeling contrast in scanning ion microscopes, is also presented.
On predicting contamination levels of HALOE optics aboard UARS using direct simulation Monte Carlo
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Rault, Didier F. G.
1993-01-01
A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flowfield and surface conditions and geometric orientations in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment. Problems resolving species outgassing and vent flux rates that varied over many orders of magnitude were handled using species weighting factors. Results relating to contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface are presented, along with data related to code performance. Using procedures developed in standard contamination analyses, the cumulative level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated to be about 2700 Å.
NASA Astrophysics Data System (ADS)
Tobochnik, Jan; Chapin, Phillip M.
1988-05-01
Monte Carlo simulations were performed for hard disks on the surface of an ordinary sphere and hard spheres on the surface of a four-dimensional hypersphere. Starting from the low density fluid the density was increased to obtain metastable amorphous states at densities higher than previously achieved. Above the freezing density the inverse pressure decreases linearly with density, reaching zero at packing fractions equal to 68% for hard spheres and 84% for hard disks. Using these new estimates for random closest packing and coefficients from the virial series we obtain an equation of state which fits all the data up to random closest packing. Usually, the radial distribution function showed the typical split second peak characteristic of amorphous solids and glasses. High density systems which lacked this split second peak and showed other sharp peaks were interpreted as signaling the onset of crystal nucleation.
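Hard-particle Monte Carlo of the kind used above needs only a Metropolis move with hard-core rejection: perturb one particle, renormalize it back onto the sphere surface, and reject if any overlap is created. A minimal sketch for hard disks on an ordinary sphere (illustrative, not the authors' code; an O(N) overlap scan is used for brevity):

```python
import math
import random

def overlaps(p, q, min_angle):
    """Two hard disks on the unit sphere overlap when the angle
    between their center vectors is below the angular diameter."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    return math.acos(dot) < min_angle

def metropolis_step(disks, min_angle, step, rng):
    """One hard-disk Metropolis move on the sphere surface:
    perturb a random disk, renormalize onto the sphere, and
    reject if any hard-core overlap is created."""
    i = rng.randrange(len(disks))
    trial = [c + rng.uniform(-step, step) for c in disks[i]]
    norm = math.sqrt(sum(c * c for c in trial))
    trial = [c / norm for c in trial]
    for j, other in enumerate(disks):
        if j != i and overlaps(trial, other, min_angle):
            return False  # rejected: hard-core overlap
    disks[i] = trial
    return True
```

Compressing such a system (shrinking the sphere, or equivalently growing `min_angle`) while running these moves is how metastable amorphous packings like those described above are reached without the periodic-boundary bias of a flat box.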
Monte Carlo study of the hetero-polytypical growth of cubic on hexagonal silicon carbide polytypes
NASA Astrophysics Data System (ADS)
Camarda, Massimo
2012-08-01
In this article we use three dimensional kinetic Monte Carlo simulations on super-lattices to study the hetero-polytypical growth of cubic silicon carbide polytype (3C-SiC) on misoriented hexagonal (4H and 6H) substrates. We analyze the quality of the 3C-SiC film varying the polytype, the miscut angle and the initial surface morphology of the substrate. We find that the use of 6H misoriented (4°-10° off) substrates, with step bunched surfaces, can strongly improve the quality of the cubic epitaxial film whereas the 3C/4H growth is affected by the generation of dislocations, due to the incommensurable periodicity of the 3C (3) and the 4H (4) polytypes. For these reasons, a proper pre-growth treatment of 6H misoriented substrates can be the key for the growth of high quality, twin free, 3C-SiC films.
NASA Astrophysics Data System (ADS)
Huang, Yanping; Dong, Xiuqin; Yu, Yingzhe; Zhang, Minhua
2017-11-01
On the basis of activation barriers and reaction energies from DFT calculations, kinetic Monte Carlo (kMC) simulations of vinyl acetate (VA) synthesis from ethylene acetoxylation on Pd(100) and Pd/Au(100) were carried out. Through kMC simulation, it was found that VA synthesis from ethylene acetoxylation proceeds via the Moiseev mechanism on both Pd(100) and Pd/Au(100). The addition of Au to Pd suppresses ethylene dehydrogenation while promoting acetic acid dehydrogenation, which eventually facilitates VA synthesis as a whole. The addition of Au to Pd can further improve the conversion and selectivity of VA synthesis from ethylene acetoxylation. When the reaction network is analyzed, besides the energetics of each elementary reaction, the surface coverage of each species and the occupancy of the surface sites on the catalyst should also be taken into consideration.
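The kinetic Monte Carlo scheme underlying such simulations can be sketched with the standard rejection-free (residence-time) algorithm: choose an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. A minimal Python illustration with purely hypothetical rate constants (none of the event names or numbers come from the study):

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free kinetic Monte Carlo (residence-time) step.

    rates: rate constants of the currently possible events.
    Returns (chosen_event_index, time_increment).
    """
    total = sum(rates)
    # Pick an event with probability proportional to its rate.
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            event = i
            break
    else:  # guard against floating-point edge cases
        event = len(rates) - 1
    # Advance the clock by an exponentially distributed waiting time.
    dt = -math.log(1.0 - rng.random()) / total
    return event, dt

rng = random.Random(0)
# Hypothetical rates for three competing surface events
# (e.g. adsorption, dehydrogenation, desorption).
rates = [5.0, 1.0, 0.5]
counts = [0, 0, 0]
t = 0.0
for _ in range(10000):
    event, dt = kmc_step(rates, rng)
    counts[event] += 1
    t += dt
# The fastest event dominates the trajectory; event counts
# approach the ratio of the rates.
```

In a real lattice simulation the rate list would be rebuilt after every step from the current surface configuration, since each event changes which events are possible.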
Monte Carlo simulation of a near-continuum shock-shock interaction problem
NASA Technical Reports Server (NTRS)
Carlson, Ann B.; Wilmoth, Richard G.
1992-01-01
A complex shock interaction is calculated with direct simulation Monte Carlo (DSMC). The calculation is performed for the near-continuum flow produced when an incident shock impinges on the bow shock of a 0.1 in. radius cowl lip for freestream conditions of approximately Mach 15 and 35 km altitude. Solutions are presented both for a full finite-rate chemistry calculation and for a case with chemical reactions suppressed. In each case, both the undisturbed flow about the cowl lip and the full shock interaction flowfields are calculated. Good agreement has been obtained between the no-chemistry simulation of the undisturbed flow and a perfect gas solution obtained with the viscous shock-layer method. Large differences in calculated surface properties when different chemical models are used demonstrate the necessity of adequately representing the chemistry when making surface property predictions. Preliminary grid refinement studies make it possible to estimate the accuracy of the solutions.
Preliminary Dynamic Feasibility and Analysis of a Spherical, Wind-Driven (Tumbleweed), Martian Rover
NASA Technical Reports Server (NTRS)
Flick, John J.; Toniolo, Matthew D.
2005-01-01
The process and findings are presented from a preliminary feasibility study examining the dynamic characteristics of a spherical wind-driven (or Tumbleweed) rover, which is intended for exploration of the Martian surface. The results of an initial feasibility study involving several worst-case mobility situations that a Tumbleweed rover might encounter on the surface of Mars are discussed. Additional topics include the evaluation of several commercially available analysis software packages that were examined as possible platforms for the development of a Monte Carlo Tumbleweed mission simulation tool. This evaluation led to the development of the Mars Tumbleweed Monte Carlo Simulator (or Tumbleweed Simulator) using the Vortex physics software package from CM-Labs, Inc. Discussions regarding the development and evaluation of the Tumbleweed Simulator, as well as the results of a preliminary analysis using the tool, are also presented. Finally, a brief conclusions section is presented.
ERIC Educational Resources Information Center
Mao, Xiuzhen; Xin, Tao
2013-01-01
The Monte Carlo approach, which has previously been implemented in traditional computerized adaptive testing (CAT), is applied here to cognitive diagnostic CAT to test the ability of this approach to address multiple content constraints. The performance of the Monte Carlo approach is compared with the performance of the modified maximum global…
Modifying the Monte Carlo Quiz to Increase Student Motivation, Participation, and Content Retention
ERIC Educational Resources Information Center
Simonson, Shawn R.
2017-01-01
Fernald developed the Monte Carlo Quiz format to enhance retention, encourage students to prepare for class, read with intention, and organize information in psychology classes. This author modified the Monte Carlo Quiz, combined it with the Minute Paper, and applied it to various courses. Students write quiz questions as part of the Minute Paper…
The Monte Carlo Method. Popular Lectures in Mathematics.
ERIC Educational Resources Information Center
Sobol', I. M.
The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, L; Yang, F
2015-06-15
Purpose: The application of optically stimulated luminescence dosimeters (OSLDs) may be extended to clinical investigations verifying irradiated doses in small animal models. In proton beams, the accurate positioning of the Bragg peak is essential for tumor targeting. The purpose of this study was to estimate the displacement of a pristine Bragg peak when an Al2O3:C nanodot (Landauer, Inc.) is placed on the surface of a water phantom and to evaluate corresponding changes in dose. Methods: Clinical proton pencil beam simulations were carried out using TOPAS, a Monte Carlo platform layered on top of GEANT4. Point-shaped beams with no energy spread were modeled for energies of 100 MeV, 150 MeV, 200 MeV, and 250 MeV. Dose scoring for 100,000 particle histories was conducted within a water phantom (20 cm × 20 cm irradiated area, 40 cm depth) with its surface placed 214.5 cm away from the source. The modeled nanodot had a 4 mm radius and 0.2 mm thickness. Results: A comparative analysis of Monte Carlo depth dose profiles modeled for these proton pencil beams did not demonstrate an energy dependence in the Bragg peak shift. The shifts in Bragg peak depth for water phantoms modeled with a nanodot on the phantom surface ranged from 2.7 to 3.2 mm. In all cases, the Bragg peaks were shifted closer to the irradiation source. The peak dose in phantoms with an OSLD remained unchanged, with percent dose differences of less than 0.55% when compared to phantom doses without the nanodot. Conclusion: Monte Carlo calculations show that the presence of OSLD nanodots in proton beam therapy will not change the position of a pristine Bragg peak by more than 3 mm. Although a 3.0 mm shift will not have a detrimental effect in patients receiving proton therapy, this effect may not be negligible in dose verification measurements for mouse models at lower proton beam energies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grimes, Joshua, E-mail: grimes.joshua@mayo.edu; Celler, Anna
2014-09-15
Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc-labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation, and voxel-level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%.
Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90) agreeing within ±3%, on average. Conclusions: Several aspects of OLINDA/EXM dose calculation were compared with patient-specific dose estimates obtained using Monte Carlo. Differences in patient anatomy led to large differences in cross-organ doses. However, total organ doses were still in good agreement since most of the deposited dose is due to self-irradiation. Comparison of voxelized doses calculated by Monte Carlo and the voxel S value technique showed that the 3D dose distributions produced by the respective methods are nearly identical.
Bayesian statistics and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Koch, K. R.
2018-03-01
The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable number of derivatives that would otherwise have to be computed, and errors of linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
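The Monte Carlo error propagation described above can be sketched in a few lines: draw samples of the random vector, push them through the nonlinear transform, and read off the empirical expectation and covariance, with no derivatives and no linearization. A minimal Python sketch under assumed independent Gaussian inputs (the transform and moments are illustrative, not from the paper):

```python
import random

def mc_propagate(f, means, sigmas, n=100000, seed=1):
    """Monte Carlo error propagation: sample a Gaussian random vector,
    apply the nonlinear map f, and estimate the mean and covariance
    of the transformed vector from the samples."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        x = [rng.gauss(m, s) for m, s in zip(means, sigmas)]
        out.append(f(x))
    dim = len(out[0])
    mean = [sum(v[i] for v in out) / n for i in range(dim)]
    cov = [[sum((v[i] - mean[i]) * (v[j] - mean[j]) for v in out) / (n - 1)
            for j in range(dim)] for i in range(dim)]
    return mean, cov

# Illustrative nonlinear transform of a 2-vector (not from the paper).
f = lambda v: (v[0] * v[1], v[0] ** 2)
mean, cov = mc_propagate(f, means=[1.0, 2.0], sigmas=[0.1, 0.2])
# For independent x, y: E[xy] = 2.0 exactly, E[x^2] = 1.0^2 + 0.1^2 = 1.01;
# a linearized propagation would miss the sigma^2 term in E[x^2].
```

The last comment illustrates the point made in the abstract: the Monte Carlo estimate captures the curvature contribution that a first-order (derivative-based) propagation discards.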
NASA Astrophysics Data System (ADS)
Prabhu Verleker, Akshay; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M.
2015-03-01
The purpose of this study is to develop an alternate empirical approach to estimate near-infrared (NIR) photon propagation and quantify optically induced drug release in brain metastasis, without relying on computationally expensive Monte Carlo techniques (the gold standard). Targeted drug delivery with optically induced drug release is a noninvasive means to treat cancers and metastasis. This study is part of a larger project to treat brain metastasis by delivering lapatinib drug nanocomplexes and activating NIR-induced drug release. The empirical model was developed using a weighted approach to estimate photon scattering in tissues and calibrated using a GPU-based 3D Monte Carlo. The empirical model was developed and tested against Monte Carlo in optical brain phantoms for pencil beams (width 1 mm) and broad beams (width 10 mm). The empirical algorithm was tested against the Monte Carlo for different albedos, along with the diffusion equation, and in simulated brain phantoms resembling white matter (μs' = 8.25 mm⁻¹, μa = 0.005 mm⁻¹) and gray matter (μs' = 2.45 mm⁻¹, μa = 0.035 mm⁻¹) at a wavelength of 800 nm. The goodness of fit between the two models was determined using the coefficient of determination (R-squared analysis). Preliminary results show the empirical algorithm matches Monte Carlo simulated fluence over a wide range of albedos (0.7 to 0.99), while the diffusion equation fails for lower albedos. The photon fluence generated by the empirical code matched the Monte Carlo in homogeneous phantoms (R² = 0.99). While the GPU-based Monte Carlo achieved 300× acceleration compared to earlier CPU-based models, the empirical code is 700× faster than the Monte Carlo for a typical super-Gaussian laser beam.
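The coefficient of determination used above to score the empirical model against the Monte Carlo reference is straightforward to compute: one minus the ratio of residual to total sum of squares. A short Python sketch with toy fluence values (the numbers are illustrative, not from the study):

```python
def r_squared(reference, model):
    """Coefficient of determination of `model` values against a
    `reference` series (here standing in for Monte Carlo fluence
    compared to an empirical estimate)."""
    n = len(reference)
    mean_ref = sum(reference) / n
    ss_res = sum((r - m) ** 2 for r, m in zip(reference, model))
    ss_tot = sum((r - mean_ref) ** 2 for r in reference)
    return 1.0 - ss_res / ss_tot

# Toy depth-fluence profiles (hypothetical numbers).
mc_fluence = [1.00, 0.61, 0.37, 0.22, 0.14]
empirical  = [0.99, 0.62, 0.36, 0.23, 0.13]
r2 = r_squared(mc_fluence, empirical)  # close to 1 for a good match
```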
Shirey, Robert J; Wu, Hsinshun Terry
2018-01-01
This study quantifies the dosimetric accuracy of a commercial treatment planning system as functions of treatment depth, air gap, and range shifter thickness for superficial pencil beam scanning proton therapy treatments. The RayStation 6 pencil beam and Monte Carlo dose engines were each used to calculate the dose distributions for a single treatment plan with varying range shifter air gaps. Central axis dose values extracted from each of the calculated plans were compared to dose values measured with a calibrated PTW Markus chamber at various depths in RW3 solid water. Dose was measured at 12 depths, ranging from the surface to 5 cm, for each of the 18 different air gaps, which ranged from 0.5 to 28 cm. TPS dosimetric accuracy, defined as the ratio of calculated dose relative to the measured dose, was plotted as functions of depth and air gap for the pencil beam and Monte Carlo dose algorithms. The accuracy of the TPS pencil beam dose algorithm was found to be clinically unacceptable at depths shallower than 3 cm with air gaps wider than 10 cm, and increased range shifter thickness only added to the dosimetric inaccuracy of the pencil beam algorithm. Each configuration calculated with Monte Carlo was determined to be clinically acceptable. Further comparisons of the Monte Carlo dose algorithm to the measured spread-out Bragg Peaks of multiple fields used during machine commissioning verified the dosimetric accuracy of Monte Carlo in a variety of beam energies and field sizes. Discrepancies between measured and TPS calculated dose values can mainly be attributed to the ability (or lack thereof) of the TPS pencil beam dose algorithm to properly model secondary proton scatter generated in the range shifter. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
An Overview of the NCC Spray/Monte-Carlo-PDF Computations
NASA Technical Reports Server (NTRS)
Raju, M. S.; Liu, Nan-Suey (Technical Monitor)
2000-01-01
This paper advances the state-of-the-art in spray computations with some of our recent contributions involving scalar Monte Carlo PDF (probability density function) methods, unstructured grids, and parallel computing. It provides a complete overview of the scalar Monte Carlo PDF and Lagrangian spray computer codes developed for application with unstructured grids and parallel computing. Detailed comparisons for the case of a reacting non-swirling spray clearly highlight the important role that chemistry/turbulence interactions play in the modeling of reacting sprays. The results from the PDF and non-PDF methods were found to be markedly different, and the PDF solution is closer to the reported experimental data. The PDF computations predict that some of the combustion occurs in a predominantly premixed-flame environment and the rest in a predominantly diffusion-flame environment. However, the non-PDF solution wrongly predicts that the combustion occurs in a vaporization-controlled regime. Near the premixed flame, the Monte Carlo particle temperature distribution shows two distinct peaks: one centered around the flame temperature and the other around the surrounding-gas temperature. Near the diffusion flame, the Monte Carlo particle temperature distribution shows a single peak. In both cases, the computed PDF's shape and strength are found to vary substantially depending upon the proximity to the flame surface. The results bring to the fore some of the deficiencies associated with the use of assumed-shape PDF methods in spray computations. Finally, we end the paper by demonstrating the computational viability of the present solution procedure for use in 3D combustor calculations by summarizing the results of a 3D test case with periodic boundary conditions. For the 3D case, the parallel performance of all three solvers (CFD, PDF, and spray) was found to be good when the computations were performed on a 24-processor SGI Origin workstation.
Use of Fluka to Create Dose Calculations
NASA Technical Reports Server (NTRS)
Lee, Kerry T.; Barzilla, Janet; Townsend, Lawrence; Brittingham, John
2012-01-01
Monte Carlo codes provide an effective means of modeling three-dimensional radiation transport; however, their use is both time- and resource-intensive. The creation of a lookup table or parameterization from Monte Carlo simulation allows users to perform calculations with Monte Carlo results without replicating lengthy calculations. The FLUKA Monte Carlo transport code was used to develop lookup tables and parameterizations for data resulting from the penetration of layers of aluminum, polyethylene, and water with areal densities ranging from 0 to 100 g/cm^2. Heavy charged ion radiation, including ions from Z=1 to Z=26 with energies from 0.1 to 10 GeV/nucleon, was simulated. Dose, dose equivalent, and fluence as a function of particle identity, energy, and scattering angle were examined at various depths. Calculations were compared against well-known results and against the results of other deterministic and Monte Carlo codes. Results will be presented.
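The lookup-table idea is simple: run the expensive transport code once over a grid of inputs, then answer later queries by interpolation. A minimal sketch in Python with hypothetical dose values (the table entries are invented for illustration; real ones would come from FLUKA runs):

```python
from bisect import bisect_left

def interpolate(table, x):
    """Linear interpolation in a sorted (x, y) lookup table, standing in
    for dose-vs-areal-density values precomputed by a transport code.
    Queries outside the table are clamped to the end values."""
    xs = [p[0] for p in table]
    ys = [p[1] for p in table]
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, x)
    frac = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + frac * (ys[i] - ys[i - 1])

# Hypothetical dose (arbitrary units) vs. areal density (g/cm^2).
table = [(0, 1.00), (25, 0.80), (50, 0.55), (75, 0.35), (100, 0.20)]
dose = interpolate(table, 60.0)  # falls between the 50 and 75 entries
```

A production parameterization would add dimensions for particle species, energy, and angle, but the query pattern is the same: precompute once, interpolate many times.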
Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model
NASA Astrophysics Data System (ADS)
Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.
2018-04-01
While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
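The Wolff cluster flipping algorithm named above grows a cluster of aligned spins by adding each aligned-neighbor bond with probability 1 − exp(−2β) (for coupling J = 1) and then flips the whole cluster at once, which suppresses critical slowing down relative to single-spin updates. A minimal two-dimensional illustration in Python (the study itself treats the three-dimensional, simple cubic model at vastly larger scale):

```python
import math
import random

def wolff_step(spins, L, beta, rng):
    """One Wolff cluster update for the 2D Ising model on an L x L
    periodic lattice. spins maps (x, y) -> +1/-1."""
    p_add = 1.0 - math.exp(-2.0 * beta)   # bond-activation probability
    seed = (rng.randrange(L), rng.randrange(L))
    s0 = spins[seed]
    cluster = {seed}
    stack = [seed]
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nb = (nx % L, ny % L)          # periodic boundaries
            if nb not in cluster and spins[nb] == s0 and rng.random() < p_add:
                cluster.add(nb)
                stack.append(nb)
    for site in cluster:                   # flip the whole cluster
        spins[site] = -s0
    return len(cluster)

L = 16
rng = random.Random(42)
spins = {(x, y): 1 for x in range(L) for y in range(L)}
beta = 0.5   # ordered phase in 2D (beta_c ~ 0.4407), so clusters are large
for _ in range(50):
    wolff_step(spins, L, beta, rng)
m = abs(sum(spins.values())) / L**2       # magnetization per spin
```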
Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy
NASA Astrophysics Data System (ADS)
Sharma, Sanjib
2017-08-01
Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/ ) that implements some of the algorithms and examples discussed here.
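The simplest member of the MCMC family surveyed in such reviews is the random-walk Metropolis sampler: propose a Gaussian step and accept it with probability min(1, posterior ratio). A minimal Python sketch against a toy standard-normal target (illustrative only; the accompanying bmcmc package provides its own, more capable samplers):

```python
import math
import random

def metropolis(log_post, x0, n_samples, step, seed=0):
    """Random-walk Metropolis sampling of a 1D log-posterior."""
    rng = random.Random(seed)
    x = x0
    lp = log_post(x)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp)).
        if math.log(rng.random() + 1e-300) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy target: standard normal posterior (log density up to a constant).
log_post = lambda x: -0.5 * x * x
chain = metropolis(log_post, x0=3.0, n_samples=20000, step=1.0)
burned = chain[2000:]                      # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
# mean ~ 0 and var ~ 1 for the standard normal target
```

Real analyses replace the toy target with the log-posterior of the model parameters given the data; the sampler itself is unchanged.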
A modified Monte Carlo model for the ionospheric heating rates
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.
1972-01-01
A Monte Carlo method is adopted as a basis for the derivation of the photoelectron heat input into the ionospheric plasma. This approach is modified in an attempt to minimize the computation time. The heat input distributions are computed for arbitrarily small source elements that are spaced at distances corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure, their individual heating rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs, the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by more than a factor of three, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.
NASA Astrophysics Data System (ADS)
Kim, Jeongnim; Baczewski, Andrew D.; Beaudet, Todd D.; Benali, Anouar; Chandler Bennett, M.; Berrill, Mark A.; Blunt, Nick S.; Josué Landinez Borda, Edgar; Casula, Michele; Ceperley, David M.; Chiesa, Simone; Clark, Bryan K.; Clay, Raymond C., III; Delaney, Kris T.; Dewing, Mark; Esler, Kenneth P.; Hao, Hongxia; Heinonen, Olle; Kent, Paul R. C.; Krogel, Jaron T.; Kylänpää, Ilkka; Li, Ying Wai; Lopez, M. Graham; Luo, Ye; Malone, Fionn D.; Martin, Richard M.; Mathuriya, Amrita; McMinis, Jeremy; Melton, Cody A.; Mitas, Lubos; Morales, Miguel A.; Neuscamman, Eric; Parker, William D.; Pineda Flores, Sergio D.; Romero, Nichols A.; Rubenstein, Brenda M.; Shea, Jacqueline A. R.; Shin, Hyeondeok; Shulenburger, Luke; Tillack, Andreas F.; Townsend, Joshua P.; Tubman, Norm M.; Van Der Goetz, Brett; Vincent, Jordan E.; ChangMo Yang, D.; Yang, Yubo; Zhang, Shuai; Zhao, Luning
2018-05-01
QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater–Jastrow type trial wavefunctions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary-field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit and graphical processing unit systems. We detail the program’s capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://qmcpack.org.
Influence of GaAs substrate properties on the congruent evaporation temperature
NASA Astrophysics Data System (ADS)
Spirina, A. A.; Nastovjak, A. G.; Shwartz, N. L.
2018-03-01
High-temperature annealing of GaAs(111)A and GaAs(111)B substrates under Langmuir evaporation conditions was studied using Monte Carlo simulation. The maximal value of the congruent evaporation temperature was estimated. The congruent evaporation temperature was demonstrated to be dependent on the surface orientation and concentration of surface defects.
Viel, Alexandra; Coutinho-Neto, Maurício D; Manthe, Uwe
2007-01-14
Quantum dynamics calculations of the ground state tunneling splitting and of the zero point energy of malonaldehyde on the full-dimensional potential energy surface proposed by Yagi et al. [J. Chem. Phys. 115, 10647 (2001)] are reported. The exact diffusion Monte Carlo and the projection operator imaginary time spectral evolution methods are used to compute accurate benchmark results for this 21-dimensional ab initio potential energy surface. A tunneling splitting of 25.7 ± 0.3 cm⁻¹ is obtained, and the vibrational ground state energy is found to be 15122 ± 4 cm⁻¹. Isotopic substitution of the tunneling hydrogen reduces the tunneling splitting to 3.21 ± 0.09 cm⁻¹ and the vibrational ground state energy to 14385 ± 2 cm⁻¹. The computed tunneling splittings are slightly higher than the experimental values, as expected from the potential energy surface, which slightly underestimates the barrier height, and they are slightly lower than the results from instanton theory obtained using the same potential energy surface.
NASA Astrophysics Data System (ADS)
Provata, Astero; Prassas, Vassilis D.; Theodorou, Doros N.
1997-10-01
A thin liquid film of lattice fluid in equilibrium with its vapor is studied in two and three dimensions with canonical Monte Carlo (MC) simulation and self-consistent field theory (SCF) in the temperature range 0.45Tc to Tc, where Tc is the liquid-gas critical temperature. Extending the approach of Oates et al. [Philos. Mag. B 61, 337 (1990)] to anisotropic systems, we develop a method for the MC computation of the transverse and normal pressure profiles, and hence of the surface tension, based on virtual removals of individual sites or blocks of sites from the system. Results from the implementation of this new method, obtained at very modest computational cost, are in reasonable agreement with exact values and other MC estimates of the surface tension of the 2-d and 3-d model systems, respectively. SCF estimates of the interfacial density profiles, the surface tension, the vapor pressure curve, and the binodal curve compare well with MC results away from Tc, but show the expected deviations at high temperatures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saito, M.; Suzuki, S.; Kimura, M.
Quantitative X-ray structural analysis coupled with anomalous X-ray scattering has been used for characterizing the atomic-scale structure of rust formed on steel surfaces. Samples were prepared from rust layers formed on the surfaces of two commercial steels. X-ray scattered intensity profiles of the two samples showed that the rusts consisted mainly of two types of ferric oxyhydroxide, α-FeOOH and γ-FeOOH. The amounts of these rust components and the realistic atomic arrangements in the components were estimated by fitting both the ordinary and the environmental interference functions with a model structure calculated using the reverse Monte Carlo simulation technique. The two rust components were found to form network structures of FeO₆ octahedral units, with the networks themselves deviating from the ideal case. The present results also suggest that the structural analysis method using anomalous X-ray scattering and the reverse Monte Carlo technique is very successful in determining the atomic-scale structure of rusts formed on steel surfaces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koren, S; Bragilovski, D; Tafo, A Guemnie
Purpose: To evaluate the clinical feasibility of the IntraBeam intraoperative kV irradiation device for ocular conjunctiva treatments. The IntraBeam system offers a 4.4 mm diameter needle applicator, which is not suitable for treatment of a large surface with limited access. We propose an adaptor to address this clinical need and provide initial dosimetry. Methods: The dose distribution of the needle applicator is non-uniform and hence not suitable for treatment of relatively large surfaces. We designed an adapter to the needle applicator that filters the X-rays and produces a conformal dose distribution over the treatment area while shielding surfaces to be spared. Dose distributions were simulated using FLUKA, a fully integrated particle physics Monte Carlo simulation package. Results: We designed a wedge applicator made of a Polythermide window with stainless steel for collimation. We compare the resulting dose distribution to those of the known needle and surface applicators. Conclusion: Initial dosimetry shows the feasibility of this approach. While further refinements to the design may be warranted, the results support construction of a prototype and confirmation of the Monte Carlo dosimetry with measured data.
NASA Astrophysics Data System (ADS)
Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten
2007-06-01
Current land surface models use increasingly complex descriptions of the processes that they represent. Increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. Sensitivity of the infiltration parameter (b) and the drainage parameter (exp) were strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although parameter sensitivities are strictly valid for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.
Probability of misclassifying biological elements in surface waters.
Loga, Małgorzata; Wierzchołowska-Dziedzic, Anna
2017-11-24
Measurement uncertainties are inherent to assessment of biological indices of water bodies. The effect of these uncertainties on the probability of misclassification of ecological status is the subject of this paper. Four Monte-Carlo (M-C) models were applied to simulate the occurrence of random errors in the measurements of metrics corresponding to four biological elements of surface waters: macrophytes, phytoplankton, phytobenthos, and benthic macroinvertebrates. Long series of error-prone measurement values of these metrics, generated by M-C models, were used to identify cases in which values of any of the four biological indices lay outside of the "true" water body class, i.e., outside the class assigned from the actual physical measurements. Fraction of such cases in the M-C generated series was used to estimate the probability of misclassification. The method is particularly useful for estimating the probability of misclassification of the ecological status of surface water bodies in the case of short sequences of measurements of biological indices. The results of the Monte-Carlo simulations show a relatively high sensitivity of this probability to measurement errors of the river macrophyte index (MIR) and high robustness to measurement errors of the benthic macroinvertebrate index (MMI). The proposed method of using Monte-Carlo models to estimate the probability of misclassification has significant potential for assessing the uncertainty of water body status reported to the EC by the EU member countries according to WFD. The method can be readily applied also in risk assessment of water management decisions before adopting the status dependent corrective actions.
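The core of such a misclassification estimate can be sketched in a few lines of Python; the index value, error standard deviation, and class boundaries below are illustrative placeholders rather than values of any WFD metric:

```python
import random

def misclassification_probability(true_value, error_sd, boundaries,
                                  n_draws=100_000, seed=1):
    """Estimate the probability that Gaussian measurement error moves an
    index value into the wrong status class. `boundaries` are ascending
    class-boundary values."""
    rng = random.Random(seed)

    def classify(v):
        return sum(b <= v for b in boundaries)  # class index 0..len(boundaries)

    true_class = classify(true_value)
    wrong = sum(classify(rng.gauss(true_value, error_sd)) != true_class
                for _ in range(n_draws))
    return wrong / n_draws

# An index value near a class boundary is misclassified far more often
# than one in the middle of its class.
p_near = misclassification_probability(0.58, 0.05, [0.2, 0.4, 0.6, 0.8])
p_far = misclassification_probability(0.50, 0.05, [0.2, 0.4, 0.6, 0.8])
```

The same loop, run over short sequences of real metric measurements, yields the misclassification probabilities discussed in the abstract.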
Ustinov, E A; Do, D D
2012-08-21
We present for the first time in the literature a new scheme of the kinetic Monte Carlo method applied to a grand canonical ensemble, which we call hereafter GC-kMC. It was shown recently that the kinetic Monte Carlo (kMC) scheme is a very effective tool for the analysis of equilibrium systems. It had been applied in a canonical ensemble to describe vapor-liquid equilibrium of argon over a wide range of temperatures, and gas adsorption on a graphite open surface and in graphitic slit pores. However, in spite of the equivalence of canonical and grand canonical ensembles, the latter is more relevant to the correct description of open systems; for example, the hysteresis loop observed in adsorption of gases in pores under sub-critical conditions can only be described with a grand canonical ensemble. Therefore, the present paper is aimed at an extension of the kMC to open systems. The developed GC-kMC was proved to be consistent with the results obtained with the canonical kMC (C-kMC) for argon adsorption on a graphite surface at 77 K and in graphitic slit pores at 87.3 K. We showed that in slit micropores the hexagonal packing in the layers adjacent to the pore walls is observed at high loadings even at temperatures above the triple point of the bulk phase. The potential and applicability of the GC-kMC are further shown with the correct description of the heat of adsorption and the pressure tensor of the adsorbed phase.
Monte Carlo Model Insights into the Lunar Sodium Exosphere
NASA Technical Reports Server (NTRS)
Hurley, Dana M.; Killen, R. M.; Sarantos, M.
2012-01-01
Sodium in the lunar exosphere is released from the lunar regolith by several mechanisms. These mechanisms include photon stimulated desorption (PSD), impact vaporization, electron stimulated desorption, and ion sputtering. Usually, PSD dominates; however, transient events such as meteor showers and coronal mass ejections can temporarily enhance other release mechanisms so that they become dominant. The interaction between sodium and the regolith is important in determining the density and spatial distribution of sodium in the lunar exosphere. The temperature at which sodium sticks to the surface is one factor. In addition, the amount of thermal accommodation during the encounter between the sodium atom and the surface affects the exospheric distribution. Finally, the fraction of particles that stick when the surface is cold and are re-released when the surface warms up also affects the exospheric density. In [1], we showed the "ambient" sodium exosphere from Monte Carlo modeling with a fixed source rate and fixed surface interaction parameters. We compared the enhancement when a CME passes the Moon to the ambient conditions. Here, we compare model results to data in order to determine the source rates and surface interaction parameters that provide the best fit of the model to the data.
Sechopoulos, Ioannis; Rogers, D W O; Bazalova-Carter, Magdalena; Bolch, Wesley E; Heath, Emily C; McNitt-Gray, Michael F; Sempau, Josep; Williamson, Jeffrey F
2018-01-01
Studies involving Monte Carlo simulations are common in both diagnostic and therapy medical physics research, as well as other fields of basic and applied science. As with all experimental studies, the conditions and parameters used for Monte Carlo simulations impact their scope, validity, limitations, and generalizability. Unfortunately, many published peer-reviewed articles involving Monte Carlo simulations do not provide the level of detail needed for the reader to be able to properly assess the quality of the simulations. The American Association of Physicists in Medicine Task Group #268 developed guidelines to improve reporting of Monte Carlo studies in medical physics research. By following these guidelines, manuscripts submitted for peer-review will include a level of relevant detail that will increase the transparency, the ability to reproduce results, and the overall scientific value of these studies. The guidelines include a checklist of the items that should be included in the Methods, Results, and Discussion sections of manuscripts submitted for peer-review. These guidelines do not attempt to replace the journal reviewer, but rather to be a tool during the writing and review process. Given the varied nature of Monte Carlo studies, it is up to the authors and the reviewers to use this checklist appropriately, being conscious of how the different items apply to each particular scenario. It is envisioned that this list will be useful both for authors and for reviewers, to help ensure the adequate description of Monte Carlo studies in the medical physics literature. © 2017 American Association of Physicists in Medicine.
Analysis of Naval Ammunition Stock Positioning
2015-12-01
model takes once the Monte-Carlo simulation determines the assigned probabilities for site-to-site locations. Column two shows how the simulation...stockpiles and positioning them at coastal Navy facilities. A Monte-Carlo simulation model was developed to simulate expected cost and delivery...TERMS supply chain management, Monte-Carlo simulation, risk, delivery performance, stock positioning
ERIC Educational Resources Information Center
Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael
2017-01-01
The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…
Thomas B. Lynch; Rodney E. Will; Rider Reynolds
2013-01-01
Preliminary results are given for development of an eastern redcedar (Juniperus virginiana) cubic-volume equation based on measurements of redcedar sample tree stem volume using dendrometry with Monte Carlo integration. Monte Carlo integration techniques can be used to provide unbiased estimates of stem cubic-foot volume based on upper stem diameter...
[Accuracy Check of Monte Carlo Simulation in Particle Therapy Using Gel Dosimeters].
Furuta, Takuya
2017-01-01
Gel dosimeters are a three-dimensional imaging tool for dose distributions induced by radiation. They can be used to check the accuracy of Monte Carlo simulations in particle therapy. An application is reviewed in this article. An inhomogeneous biological sample with a gel dosimeter placed behind it was irradiated by a carbon beam. The recorded dose distribution in the gel dosimeter reflected the inhomogeneity of the biological sample. A Monte Carlo simulation was conducted by reconstructing the biological sample from its CT image. The accuracy of the particle transport in the Monte Carlo simulation was checked by comparing the simulated and experimental dose distributions in the gel dosimeter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Deterministic theory of Monte Carlo variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueki, T.; Larsen, E.W.
1996-12-31
The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.
Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias
2015-01-01
A recent numerical result (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the asymptotic advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as the escape rate in quantum Monte Carlo for a class of problems. Thus, the Google result might be explained in our framework. We also find that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.
Recommender engine for continuous-time quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
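The coefficient-uncertainty idea can be sketched with a toy two-factor DOE; the model, factor levels, and noise magnitudes below are illustrative assumptions, not the nasal-spray models from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-factor DOE with true response y = 2*x1 - 1*x2 + 0.5.
X = np.array([[x1, x2] for x1 in (-1.0, 0.0, 1.0) for x2 in (-1.0, 0.0, 1.0)])
y_true = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5

def fit(X, y):
    """Least-squares fit of a linear model with intercept."""
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Monte Carlo: perturb both the input settings and the measured responses,
# refit the model each time, and inspect the spread of the coefficients.
sd_x, sd_y = 0.02, 0.05
coefs = np.array([
    fit(X + rng.normal(0.0, sd_x, X.shape),
        y_true + rng.normal(0.0, sd_y, y_true.shape))
    for _ in range(2000)
])
coef_sd = coefs.std(axis=0)   # Monte Carlo standard deviation of each coefficient
```

Comparing `coef_sd` against the standard errors reported by a single regression fit is the kind of check the abstract describes.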
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry.
Bostani, Maryam; Mueller, Jonathon W; McMillan, Kyle; Cody, Dianna D; Cagnon, Chris H; DeMarco, John J; McNitt-Gray, Michael F
2015-02-01
The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. The calculated mean percent difference between TLD measurements and Monte Carlo simulations was -4.9% with standard deviation of 8.7% and a range of -22.7% to 5.7%. The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and on GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
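The rank-1 update that this rank-k scheme generalizes can be sketched with NumPy: by the matrix determinant lemma, the determinant ratio for replacing one row of the Slater matrix is a single dot product against the stored inverse. The matrix size and values below are arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 6
A = rng.normal(size=(n, n))        # stand-in for a Slater matrix
A_inv = np.linalg.inv(A)

# A single-particle move changes one row of the matrix.
k = 2
new_row = rng.normal(size=n)

# Sherman-Morrison / matrix determinant lemma: the determinant ratio for
# replacing row k is the dot product of the new row with column k of the
# current inverse -- O(n) instead of an O(n^3) determinant.
ratio_sm = new_row @ A_inv[:, k]

# Brute force for comparison: rebuild the matrix and take the ratio.
A_new = A.copy()
A_new[k] = new_row
ratio_bf = np.linalg.det(A_new) / np.linalg.det(A)
```

The delayed rank-k scheme accumulates several such accepted rows and applies them to `A_inv` en bloc, trading many rank-1 updates for one matrix-matrix operation.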
Proposal of a method for evaluating tsunami risk using response-surface methodology
NASA Astrophysics Data System (ADS)
Fukutani, Y.
2017-12-01
Information on probabilistic tsunami inundation hazards is needed to define and evaluate tsunami risk. Several methods for calculating these hazards have been proposed (e.g., Løvholt et al. (2012), Thio (2012), Fukutani et al. (2014), Goda et al. (2015)). However, these methods are computationally costly, since they require multiple tsunami numerical simulations, and they therefore lack versatility. In this study, we propose a simpler method for tsunami risk evaluation using response-surface methodology. Kotani et al. (2016) proposed an evaluation method for the probabilistic distribution of tsunami wave height using response-surface methodology. We expanded their study and developed a probabilistic distribution of tsunami inundation depth. We set the depth (x1) and the slip (x2) of an earthquake fault as explanatory variables and the tsunami inundation depth (y) as the object variable. Tsunami risk could then be evaluated by conducting a Monte Carlo simulation, assuming that the generation of an earthquake follows a Poisson distribution, the probability distribution of tsunami inundation depth follows the distribution derived from the response surface, and the damage probability of a target follows a log-normal distribution. We applied the proposed method to a wood building located on the coast of Tokyo Bay. We implemented a regression analysis based on the results of 25 tsunami numerical calculations and developed a response surface, defined as y = a·x1 + b·x2 + c (a = 0.2615, b = 3.1763, c = −1.1802). We assumed appropriate probability distributions for earthquake generation, inundation height, and vulnerability. Based on these distributions, we conducted Monte Carlo simulations of 1,000,000 years. We clarified that the expected damage probability of the studied wood building is 22.5%, assuming that an earthquake occurs.
The proposed method is therefore a useful and simple way to evaluate tsunami risk using a response-surface and Monte Carlo simulation without conducting multiple tsunami numerical simulations.
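The Monte Carlo evaluation step can be sketched directly from the published response surface; the input ranges and lognormal fragility parameters below are illustrative placeholders, not values from the study:

```python
import random, math

rng = random.Random(0)

# Response-surface coefficients from the abstract: inundation depth y from
# fault depth x1 and slip x2.
a, b, c = 0.2615, 3.1763, -1.1802

def inundation_depth(x1, x2):
    return a * x1 + b * x2 + c

def damage_probability(y, median=2.0, beta=0.5):
    """Lognormal fragility curve P(damage | depth y); median and beta are
    illustrative, not the paper's vulnerability parameters."""
    if y <= 0.0:
        return 0.0
    z = (math.log(y) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, damages = 100_000, 0.0
for _ in range(n):
    x1 = rng.uniform(5.0, 15.0)    # fault depth, assumed range
    x2 = rng.uniform(0.5, 2.0)     # fault slip, assumed range
    damages += damage_probability(inundation_depth(x1, x2))

expected_damage_prob = damages / n   # conditional on an earthquake occurring
```

Layering a Poisson occurrence model on top of this conditional probability gives the annualized risk, with no further tsunami simulations required.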
NASA Astrophysics Data System (ADS)
Sengupta, D.; Gao, L.; Wilcox, E. M.; Beres, N. D.; Moosmüller, H.; Khlystov, A.
2017-12-01
Radiative forcing and climate change depend greatly on the earth's surface albedo and its temporal and spatial variation. The surface albedo varies greatly with surface characteristics, ranging from 5-10% for calm ocean waters to 80% for some snow-covered areas. Clean and fresh snow surfaces have the highest albedo and are most sensitive to contamination with light-absorbing impurities that can greatly reduce surface albedo and change overall radiative forcing estimates. Accurate estimation of snow albedo, as well as understanding of feedbacks on climate from changes in snow-covered areas, is important for radiative forcing, snow energy balance, and predicting seasonal snowmelt and runoff rates. Such information is essential to inform timely decision making by stakeholders and policy makers. Light-absorbing particles deposited onto the snow surface can greatly alter snow albedo and have been identified as a major contributor to regional climate forcing where seasonal snow cover is involved. However, the uncertainty associated with quantification of albedo reduction by these light-absorbing particles is high. Here, we use Mie theory (under the assumption of spherical snow grains) to reconstruct the single scattering parameters of snow (i.e., single scattering albedo ῶ and asymmetry parameter g) from observation-based size distribution information and retrieved refractive index values. The single scattering parameters of impurities are extracted with the same approach from datasets obtained during laboratory combustion of biomass samples. Instead of using plane-parallel approximation methods to account for multiple scattering, we have used a simple Monte Carlo ray/photon tracing approach to calculate the snow albedo. This simple approach treats multiple scattering as a collection of single scattering events.
Using this approach, we vary the effective snow grain size and impurity concentrations to explore the evolution of snow albedo over a wide wavelength range (300 nm - 2000 nm). Results will be compared with the SNICAR model to better understand the differences in snow albedo computation between plane-parallel methods and the statistical Monte Carlo methods.
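A minimal version of such a photon-tracing calculation for a semi-infinite layer can be sketched as follows; the Henyey-Greenstein phase function and the particular ῶ and g values are illustrative assumptions, not the reconstructed values from the datasets described above:

```python
import random, math

def hg_cos(g, rng):
    """Sample a scattering-angle cosine from the Henyey-Greenstein phase function."""
    if abs(g) < 1e-6:
        return rng.uniform(-1.0, 1.0)
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def snow_albedo(omega, g, n_photons=5000, seed=3):
    """Monte Carlo albedo of a semi-infinite scattering layer with unit mean
    free path: omega = single-scattering albedo, g = asymmetry parameter."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        mu, depth = 1.0, 0.0                  # photon entering straight down
        while True:
            depth += mu * -math.log(1.0 - rng.random())   # free flight
            if depth < 0.0:                   # back out through the surface
                escaped += 1
                break
            if rng.random() > omega:          # absorbed at this collision
                break
            mu_s = hg_cos(g, rng)             # rotate the direction cosine
            phi = rng.uniform(0.0, 2.0 * math.pi)
            mu = (mu * mu_s
                  + math.sqrt(max(0.0, 1.0 - mu * mu))
                  * math.sqrt(max(0.0, 1.0 - mu_s * mu_s)) * math.cos(phi))
    return escaped / n_photons

# Dirtier snow (lower single-scattering albedo) gives a darker surface.
a_clean = snow_albedo(0.9999, 0.85)
a_dirty = snow_albedo(0.98, 0.85)
```

Sweeping `omega` and `g` per wavelength reproduces, in miniature, the grain-size and impurity sensitivity study the abstract describes.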
Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny
2011-01-01
Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
Diagnosing Undersampling Biases in Monte Carlo Eigenvalue and Flux Tally Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M.; Rearden, Bradley T.; Marshall, William J.
2017-02-08
Here, this study focuses on understanding the phenomenon in Monte Carlo simulations known as undersampling, in which Monte Carlo tally estimates may not encounter a sufficient number of particles during each generation to obtain unbiased tally estimates. Steady-state Monte Carlo simulations were performed using the KENO Monte Carlo tools within the SCALE code system for models of several burnup credit applications with varying degrees of spatial and isotopic complexity, and the incidence and impact of undersampling on eigenvalue and flux estimates were examined. Using an inadequate number of particle histories in each generation was found to produce a maximum bias of ~100 pcm in eigenvalue estimates and biases that exceeded 10% in fuel pin flux tally estimates. Having quantified the potential magnitude of undersampling biases in eigenvalue and flux tally estimates in these systems, this study then investigated whether Markov chain Monte Carlo convergence metrics could be integrated into Monte Carlo simulations to predict the onset and magnitude of undersampling biases. Five potential metrics for identifying undersampling biases were implemented in the SCALE code system and evaluated for their ability to predict undersampling biases by comparing the test metric scores with the observed undersampling biases. Finally, of the five convergence metrics that were investigated, three (the Heidelberger-Welch relative half-width, the Gelman-Rubin R̂_c diagnostic, and tally entropy) showed the potential to accurately predict the behavior of undersampling biases in the responses examined.
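Of the metrics listed, the Gelman-Rubin diagnostic is simple to sketch; the following illustrative implementation (not the SCALE one) compares well-mixed chains with chains stuck in different regions:

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor R-hat for an array of
    chains with shape (n_chains, n_samples). Values near 1 indicate that
    the chains are sampling the same distribution."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)          # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # within-chain variance
    var_plus = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(7)
mixed = rng.normal(0.0, 1.0, size=(4, 1000))            # all chains on one target
split = mixed + np.array([[0.0], [0.0], [3.0], [3.0]])  # two chains stuck elsewhere
```

Applied to per-generation tally estimates, a score well above 1 is the kind of warning signal the study evaluates as an undersampling predictor.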
Teaching Ionic Solvation Structure with a Monte Carlo Liquid Simulation Program
ERIC Educational Resources Information Center
Serrano, Agostinho; Santos, Flavia M. T.; Greca, Ileana M.
2004-01-01
The use of molecular dynamics and Monte Carlo methods has provided efficient means to simulate the behavior of molecular liquids and solutions. A Monte Carlo simulation program is used to compute the structure of liquid water and of water as a solvent to Na+, Cl-, and Ar on a personal computer to show that it is easily feasible to…
Considerations of MCNP Monte Carlo code to be used as a radiotherapy treatment planning tool.
Juste, B; Miro, R; Gallardo, S; Verdu, G; Santos, A
2005-01-01
The present work simulated the photon and electron transport in a Theratron 780® (MDS Nordion) 60Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle). This work mainly explains the different methodologies carried out to speed up calculations in order to apply this code efficiently in radiotherapy treatment planning.
Hybrid Monte Carlo/deterministic methods for radiation shielding problems
NASA Astrophysics Data System (ADS)
Becker, Troy L.
For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations.
Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods can be used to achieve user-specified Monte Carlo distributions. Overall, the Transform approach performed more efficiently than the weight window methods, but it performed much more efficiently for source-detector problems than for global problems.
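The weight-window mechanics described above can be sketched independently of any transport code; this illustrative fragment shows the split/roulette rule and checks that rouletting preserves the expected weight (the thresholds and weights are arbitrary):

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Split or roulette a Monte Carlo particle so that surviving weights
    land inside the window [w_low, w_high]. Returns a list of particle
    weights (empty if the particle is killed); the expected total weight
    is preserved in both branches."""
    if weight > w_high:                       # split heavy particles
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:                        # roulette light particles
        survival = weight / w_low
        return [w_low] if rng.random() < survival else []
    return [weight]

# Unbiasedness check: the average surviving weight under roulette equals
# the input weight.
rng = random.Random(5)
n_trials = 100_000
total = sum(sum(apply_weight_window(0.01, 0.1, 1.0, rng))
            for _ in range(n_trials))
mean_weight = total / n_trials   # should be close to 0.01
```

Hybrid methods set `w_low`/`w_high` per region from a deterministic adjoint solution instead of asking the user to tune them.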
Monte-Carlo simulations of a coarse-grained model for α-oligothiophenes
NASA Astrophysics Data System (ADS)
Almutairi, Amani; Luettmer-Strathmann, Jutta
The interfacial layer of an organic semiconductor in contact with a metal electrode has important effects on the performance of thin-film devices. However, the structure of this layer is not easy to model. Oligothiophenes are small, π-conjugated molecules with applications in organic electronics that also serve as small-molecule models for polythiophenes. α-hexithiophene (6T) is a six-ring molecule, whose adsorption on noble metal surfaces has been studied extensively (see, e.g., Ref.). In this work, we develop a coarse-grained model for α-oligothiophenes. We describe the molecules as linear chains of bonded, discotic particles with Gay-Berne potential interactions between non-bonded ellipsoids. We perform Monte Carlo simulations to study the structure of isolated and adsorbed molecules.
Morse Monte Carlo Radiation Transport Code System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emmett, M.B.
1975-02-01
The report contains sections describing the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations, and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine if the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)
Brandão, Eric; Flesch, Rodolfo C C; Lenzi, Arcanjo; Flesch, Carlos A
2011-07-01
The pressure-particle velocity (PU) impedance measurement technique is an experimental method used to measure the surface impedance and the absorption coefficient of acoustic samples in situ or under free-field conditions. In this paper, the measurement uncertainty of the absorption coefficient determined using the PU technique is explored by applying the Monte Carlo method. It is shown that, because of the uncertainty, it is particularly difficult to measure samples with low absorption, and that difficulties associated with the localization of the acoustic centers of the sound source and the PU sensor affect the quality of the measurement roughly to the same extent as errors in the transfer function between pressure and particle velocity do. © 2011 Acoustical Society of America
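The Monte Carlo propagation itself is straightforward to sketch; the normalized impedance values and the 5% perturbation below are illustrative assumptions, not the paper's measured data, but they reproduce the qualitative point that low-absorption samples carry a much larger relative uncertainty:

```python
import random

def absorption(z_norm):
    """Normal-incidence absorption coefficient from the normalized surface impedance."""
    r = (z_norm - 1.0) / (z_norm + 1.0)   # pressure reflection coefficient
    return 1.0 - abs(r) ** 2

def mc_alpha(z_mean, rel_sd, n=20_000, seed=9):
    """Propagate a relative uncertainty on the measured impedance to the
    absorption coefficient by Monte Carlo; returns (mean, std deviation)."""
    rng = random.Random(seed)
    scale = rel_sd * abs(z_mean)
    vals = [absorption(complex(rng.gauss(z_mean.real, scale),
                               rng.gauss(z_mean.imag, scale)))
            for _ in range(n)]
    mean = sum(vals) / n
    sd = (sum((v - mean) ** 2 for v in vals) / (n - 1)) ** 0.5
    return mean, sd

# A hard, low-absorption sample (large |Z|) vs. a well-matched absorber (Z near 1):
alpha_low, sd_low = mc_alpha(complex(10.0, -10.0), 0.05)
alpha_high, sd_high = mc_alpha(complex(1.2, -0.2), 0.05)
```

The same impedance-level noise yields a far larger relative spread in α for the low-absorption sample, which is the measurement difficulty the abstract reports.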
Yeo, Sang Chul; Lo, Yu Chieh; Li, Ju; Lee, Hyuck Mo
2014-10-07
Ammonia (NH3) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers (Eb) of the relevant elementary processes. The full mechanism of the exact reaction path was divided into five steps (adsorption, dissociation, surface migration, penetration, and diffusion) on an Fe (100) surface pre-covered with nitrogen. The energy barrier (Eb) depended on the N surface coverage. The DFT results were subsequently employed as a database for the kMC simulations. We then evaluated the NH3 nitridation rate on the N pre-covered Fe surface. To determine the conditions necessary for a rapid NH3 nitridation rate, the eight reaction events were considered in the kMC simulations: adsorption, desorption, dissociation, reverse dissociation, surface migration, penetration, reverse penetration, and diffusion. This study provides a real-time-scale simulation of NH3 nitridation influenced by nitrogen surface coverage that allowed us to theoretically determine a nitrogen coverage (0.56 ML) suitable for rapid NH3 nitridation. In this way, we were able to reveal the coverage dependence of the nitridation reaction using the combined DFT and kMC simulations.
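The event-selection core of such a rejection-free kMC simulation can be sketched in a few lines; the rate constants below are arbitrary placeholders, not the DFT-derived rates from the study:

```python
import random, math

# Illustrative rate constants (s^-1) for some of the events named above;
# real values would come from the DFT barriers via an Arrhenius expression.
rates = {"adsorption": 50.0, "dissociation": 20.0,
         "migration": 500.0, "penetration": 1.0}

def kmc_step(rates, rng):
    """One rejection-free kMC step: choose an event with probability
    proportional to its rate, and advance the clock by an exponentially
    distributed waiting time governed by the total rate."""
    total = sum(rates.values())
    pick, acc = rng.random() * total, 0.0
    for event, rate in rates.items():
        acc += rate
        if pick < acc:
            break
    dt = -math.log(1.0 - rng.random()) / total
    return event, dt

rng = random.Random(11)
counts = {e: 0 for e in rates}
t = 0.0
for _ in range(100_000):
    event, dt = kmc_step(rates, rng)
    counts[event] += 1
    t += dt
```

In the full simulation the rate table is rebuilt after each event, since coverage-dependent barriers change which processes are available and how fast they are.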
Croteau, T; Bertram, A K; Patey, G N
2008-10-30
Grand canonical Monte Carlo calculations are used to determine water adsorption and structure on defect-free kaolinite surfaces as a function of relative humidity at 235 K. This information is then used to gain insight into ice nucleation on kaolinite surfaces. Results for both the SPC/E and TIP5P-E water models are compared and demonstrate that the Al-surface [(001) plane] and both protonated and unprotonated edges [(100) plane] strongly adsorb at atmospherically relevant relative humidities. Adsorption on the Al-surface exhibits properties of a first-order process with evidence of collective behavior, whereas adsorption on the edges is essentially continuous and appears dominated by strong water-lattice interactions. For the protonated and unprotonated edges, no structure that matches hexagonal ice is observed. For the Al-surface, some of the water molecules form hexagonal rings. However, the a₀ lattice parameter for these rings is significantly different from the corresponding constant for hexagonal ice (Ih). A misfit strain of 14.0% is calculated between the hexagonal pattern of water adsorbed on the Al-surface and the basal plane of ice Ih. Hence, the ring structures that form on the Al-surface are not expected to be good building blocks for ice nucleation due to the large misfit strain.
Neural network simulation of the atmospheric point spread function for the adjacency effect research
NASA Astrophysics Data System (ADS)
Ma, Xiaoshan; Wang, Haidong; Li, Ligang; Yang, Zhen; Meng, Xin
2016-10-01
The adjacency effect can be regarded as the convolution of the atmospheric point spread function (PSF) with the surface-leaving radiance. Monte Carlo simulation is a common method for computing the atmospheric PSF, but it yields no analytic expression, and meaningful results can be obtained only by statistical analysis of millions of photon histories. A backward Monte Carlo algorithm was employed to simulate photons emitted and propagating in the atmosphere under different conditions. The PSF was determined by recording the number of photons received in fixed bins at different positions. A multilayer feed-forward neural network with a single hidden layer was designed to learn the relationship between the PSFs and the input condition parameters. The neural network was trained with the back-propagation learning rule. Its input parameters comprised the atmospheric condition, spectral range, and observing geometry; the outputs of the network were the photon counts in the corresponding bins. Because a single network would have required too many output units, the large network was divided into a collection of smaller ones. These small networks can be run simultaneously on many workstations and/or PCs to speed up training. It is worth noting that the PSFs simulated by the Monte Carlo technique for non-nadir viewing angles are more complicated than those for nadir conditions, which complicates the design of the neural network. The results obtained show that the neural network approach can be very useful for computing the atmospheric PSF from the simulated data generated by the Monte Carlo method.
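The "learn binned PSF values from input parameters" idea can be sketched with a single-hidden-layer network trained by plain back-propagation. The synthetic PSF below (an exponential decay over radial bins) is a stand-in for backward Monte Carlo output; the network size, learning rate, and data are all assumptions for illustration:

```python
import math
import random

rng = random.Random(42)

# Synthetic stand-in for Monte Carlo PSF data: for an "atmosphere parameter"
# x in [0, 1], the photon count in radial bin k decays as exp(-k * (1 + x)).
N_BINS = 4
def synthetic_psf(x):
    return [math.exp(-k * (1.0 + x)) for k in range(N_BINS)]

# Single-hidden-layer feed-forward network with tanh units
H = 8
w1 = [rng.uniform(-1, 1) for _ in range(H)]                    # input -> hidden
b1 = [0.0] * H
w2 = [[rng.uniform(-1, 1) for _ in range(H)] for _ in range(N_BINS)]
b2 = [0.0] * N_BINS

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    y = [sum(w2[k][j] * h[j] for j in range(H)) + b2[k] for k in range(N_BINS)]
    return h, y

def train(epochs=5000, lr=0.05):
    data = [(i / 19.0, synthetic_psf(i / 19.0)) for i in range(20)]
    for _ in range(epochs):
        x, target = data[rng.randrange(len(data))]
        h, y = forward(x)
        err = [y[k] - target[k] for k in range(N_BINS)]
        for k in range(N_BINS):                 # output-layer gradient step
            for j in range(H):
                w2[k][j] -= lr * err[k] * h[j]
            b2[k] -= lr * err[k]
        for j in range(H):                      # back-propagate to hidden layer
            g = sum(err[k] * w2[k][j] for k in range(N_BINS)) * (1 - h[j] ** 2)
            w1[j] -= lr * g * x
            b1[j] -= lr * g

train()
_, pred = forward(0.5)
true = synthetic_psf(0.5)
mse = sum((p - t) ** 2 for p, t in zip(pred, true)) / N_BINS
```

Splitting the output bins across several such small networks, as the paper does, keeps each network's output layer manageable and lets the networks train in parallel.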
Compressible or incompressible blend of interacting monodisperse linear polymers near a surface.
Batman, Richard; Gujrati, P D
2007-08-28
We consider a lattice model of a mixture of repulsive, attractive, or neutral monodisperse linear polymers of two species, A and B, with a third monomeric species C, which may be taken to represent free volume. The mixture is confined between two hard, parallel plates of variable separation whose interactions with A and C may be attractive, repulsive, or neutral, and may be different from each other. The interactions with A and C are all that are required to completely specify the effect of each surface on all three components. We numerically study various density profiles as we move away from the surface, using the recursive method of Gujrati and Chhajer [J. Chem. Phys. 106, 5599 (1997)], which has previously been applied to study polydisperse solutions and blends next to surfaces. The resulting density profiles show the oscillations that are seen in Monte Carlo simulations and the enrichment of the smaller species at a neutral surface. The method is computationally ultrafast and can be carried out on a personal computer (PC), even in the incompressible case, when Monte Carlo simulations are not feasible. The calculations of density profiles usually take less than 20 min on a PC.
NASA Astrophysics Data System (ADS)
Xiong, Chuan; Shi, Jiancheng
2014-01-01
To date, light scattering models of snow take little account of real snow microstructure. The idealized spherical or other single-shape particle assumptions in previous snow light scattering models introduce errors in the light scattering modeling of snow and, in turn, in remote sensing inversion algorithms. This paper builds a polarized snow reflectance model based on a bicontinuous medium, which accounts for the real snow microstructure. The specific surface area of a bicontinuous medium can be derived analytically. The polarized Monte Carlo ray tracing technique is applied to the computer-generated bicontinuous medium. With proper algorithms, the snow surface albedo, bidirectional reflectance distribution function (BRDF), and polarized BRDF can be simulated. Validation of the model-predicted spectral albedo and bidirectional reflectance factor (BRF) against experimental data shows good agreement. The relationship between snow surface albedo and snow specific surface area (SSA) was predicted; this relationship can be used for future improvement of SSA inversion algorithms. The model-predicted polarized reflectance is also validated as accurate and can be further applied in polarized remote sensing.
Monte Carlo method for calculating the radiation skyshine produced by electron accelerators
NASA Astrophysics Data System (ADS)
Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin
2005-06-01
Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV, and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split-and-roulette variance reduction technique. Results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation, and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from the results given by the empirical formulas. The effect of different accelerator-head structures on the skyshine dose is also discussed.
Analytic continuation of quantum Monte Carlo data by stochastic analytical inference.
Fuchs, Sebastian; Pruschke, Thomas; Jarrell, Mark
2010-05-01
We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we are able to obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as a by-product also the distribution function as a function of the regularization parameter. Our algorithm thus avoids the usual ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation.
Self-learning Monte Carlo method
Liu, Junwei; Qi, Yang; Meng, Zi Yang; ...
2017-01-04
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. Lastly, we demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10–20 times speedup.
Random Numbers and Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by preferentially sampling the important configurations. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
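The two central ideas of the chapter, plain Monte Carlo integration and Metropolis sampling of a Boltzmann weight, fit in a few lines each. This is a generic textbook sketch (a 1D integral and a harmonic "energy"), not code from the book:

```python
import math
import random

def mc_integral(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def metropolis(energy, steps, beta, rng, step_size=0.5):
    """Metropolis sampling of the Boltzmann weight exp(-beta * E(x))."""
    x, samples = 0.0, []
    for _ in range(steps):
        trial = x + rng.uniform(-step_size, step_size)
        # accept with probability min(1, exp(-beta * dE))
        if rng.random() < math.exp(-beta * (energy(trial) - energy(x))):
            x = trial
        samples.append(x)
    return samples

rng = random.Random(7)

# Integral of x^2 on [0, 1]; exact value is 1/3
est = mc_integral(lambda x: x * x, 100000, rng)

# Harmonic "energy" E(x) = x^2 / 2 at beta = 1: samples should follow N(0, 1)
samples = metropolis(lambda x: 0.5 * x * x, 50000, 1.0, rng)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The integral estimate converges as 1/√n (the central limit theorem mentioned above), and the Metropolis samples reproduce the mean and variance of the target Gaussian without ever normalizing the distribution.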
Wei, Wei; Larrey-Lassalle, Pyrène; Faure, Thierry; Dumoulin, Nicolas; Roux, Philippe; Mathias, Jean-Denis
2016-03-01
A comparative decision-making process is widely used to identify which option (system, product, service, etc.) has the smaller environmental footprint and to provide recommendations that help stakeholders take future decisions. However, uncertainty complicates the comparison and the decision making. Probability-based decision support in LCA is a way to help stakeholders in their decision-making process. It calculates the decision confidence probability, which expresses the probability that one option has a smaller environmental impact than another. Here we apply reliability theory to approximate the decision confidence probability. We compare the traditional Monte Carlo method with a reliability method called the FORM method. The Monte Carlo method needs a long computational time to calculate the decision confidence probability. The FORM method enables us to approximate the decision confidence probability with fewer simulations than the Monte Carlo method by approximating the response surface. Moreover, the FORM method calculates the associated importance factors, which correspond to a sensitivity analysis with respect to the probability. The importance factors allow stakeholders to determine which factors influence their decision. Our results clearly show that the reliability method provides additional useful information to stakeholders while also reducing the computational time.
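The decision confidence probability that both methods target is simply P(impact_A < impact_B) under the uncertainty distributions of the two options. A brute-force Monte Carlo sketch, assuming lognormal impact scores with made-up parameters (the distribution choice and numbers are illustrative, not from the study):

```python
import math
import random

def decision_confidence(mu_a, sd_a, mu_b, sd_b, n=50000, seed=3):
    """Monte Carlo estimate of P(impact_A < impact_B), with both impact
    scores lognormally distributed (a common choice in LCA uncertainty)."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(n)
               if math.exp(rng.gauss(mu_a, sd_a)) < math.exp(rng.gauss(mu_b, sd_b)))
    return wins / n

# Hypothetical options: B's median impact is exp(0.2), about 22% above A's
p = decision_confidence(mu_a=0.0, sd_a=0.3, mu_b=0.2, sd_b=0.3)
```

For this lognormal case the answer is analytic, Φ(0.2 / √(0.3² + 0.3²)) ≈ 0.68, which is exactly the kind of shortcut FORM generalizes: it approximates the confidence as Φ(β) from a reliability index rather than drawing tens of thousands of samples.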
The diffusion of a Ga atom on GaAs(001)β2(2 × 4): Local superbasin kinetic Monte Carlo
NASA Astrophysics Data System (ADS)
Lin, Yangzheng; Fichthorn, Kristen A.
2017-10-01
We use first-principles density-functional theory to characterize the binding sites and diffusion mechanisms for a Ga adatom on the GaAs(001)β2(2 × 4) surface. Diffusion in this system is a complex process involving eleven unique binding sites and sixteen different hops between neighboring binding sites. Among the binding sites, we can identify four different superbasins such that the motion between binding sites within a superbasin is much faster than hops exiting the superbasin. To describe diffusion, we use a recently developed local superbasin kinetic Monte Carlo (LSKMC) method, which accelerates a conventional kinetic Monte Carlo (KMC) simulation by describing the superbasins as absorbing Markov chains. We find that LSKMC is up to 4300 times faster than KMC for the conditions probed in this study. We characterize the distribution of exit times from the superbasins, find that these are sometimes, but not always, exponential, and characterize the conditions under which the superbasin exit-time distribution should be exponential. We demonstrate that LSKMC simulations assuming an exponential superbasin exit-time distribution yield the same diffusion coefficients as conventional KMC.
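Treating a superbasin as an absorbing Markov chain lets one compute mean exit times directly instead of simulating every fast intra-basin hop. A sketch with a made-up 3-state superbasin (the hop probabilities and residence times are illustrative, not the paper's GaAs DFT values):

```python
def mean_exit_times(P, tau, iters=2000):
    """Mean time to exit the superbasin starting from each in-basin state:
    m[i] = tau[i] + sum_j P[i][j] * m[j].  Fixed-point iteration converges
    because every row of P sums to less than 1 (some probability exits)."""
    n = len(P)
    m = [0.0] * n
    for _ in range(iters):
        m = [tau[i] + sum(P[i][j] * m[j] for j in range(n)) for i in range(n)]
    return m

# Made-up 3-state superbasin: fast intra-basin hops, rare exits.
# P[i][j] = probability that a hop from state i lands on in-basin state j.
P = [
    [0.0, 0.6, 0.3],    # state 0 exits with probability 0.1
    [0.5, 0.0, 0.45],   # state 1 exits with probability 0.05
    [0.4, 0.5, 0.0],    # state 2 exits with probability 0.1
]
tau = [1e-9, 2e-9, 1e-9]   # mean residence times per visit [s]
m = mean_exit_times(P, tau)
```

The exit times come out an order of magnitude longer than any single residence time, which is precisely the regime where conventional KMC wastes its steps on intra-basin hops and a superbasin treatment pays off.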
An off-lattice, self-learning kinetic Monte Carlo method using local environments.
Konwar, Dhrubajit; Bhute, Vijesh J; Chatterjee, Abhijit
2011-11-07
We present a method called the local environment kinetic Monte Carlo (LE-KMC) method for efficiently performing off-lattice, self-learning kinetic Monte Carlo (KMC) simulations of activated processes in material systems. Like other off-lattice KMC schemes, new atomic processes can be found on-the-fly in LE-KMC. However, a unique feature of LE-KMC is that, as long as the assumption that all processes and rates depend only on the local environment is satisfied, it provides a general algorithm for (i) unambiguously describing a process in terms of its local atomic environments, (ii) storing new processes and environments in a catalog for later use with standard KMC, and (iii) updating the system based on the local information once a process has been selected for a KMC move. The search, classification, storage, and retrieval steps needed while employing local environments and processes in LE-KMC are discussed, along with the advantages and computational cost of the method. We assess the performance of the LE-KMC algorithm by considering test systems involving diffusion in submonolayer Ag and Ag-Cu alloy films on the Ag(001) surface.
Monte Carlo based electron treatment planning and cutout output factor calculations
NASA Astrophysics Data System (ADS)
Mitrou, Ellis
Electron radiotherapy (RT) offers a number of advantages over photons. The high surface dose, combined with a rapid dose fall-off beyond the target volume, presents a net increase in tumor control probability and decreases the normal tissue complications for superficial tumors. Electron treatments are normally delivered clinically without previously calculated dose distributions, due to the complexity of the electron transport involved and the greater error in planning accuracy. This research uses Monte Carlo (MC) methods to model clinical electron beams in order to accurately calculate electron beam dose distributions in patients, as well as cutout output factors, reducing the need for clinical measurements. The present work is incorporated into a research MC calculation system: the McGill Monte Carlo Treatment Planning (MMCTP) system. Measurements of PDDs, profiles, and output factors, in addition to 2D GafChromic™ EBT2 film measurements in heterogeneous phantoms, were obtained to commission the electron beam model. The use of MC for electron TP will provide more accurate treatments and yield greater knowledge of the electron dose distribution within the patient. The calculation of output factors could yield a clinical time saving of up to 1 hour per patient.
NASA Astrophysics Data System (ADS)
Brunetti, Antonio; Depalmas, Anna; di Gennaro, Francesco; Serges, Alessandra; Schiavon, Nicola
2016-07-01
The chemical composition of a unique bronze artifact known as the "Cesta" ("Basket"), belonging to the ancient Nuragic civilization of the island of Sardinia, Italy, has been analyzed by combining X-ray Fluorescence Spectroscopy (XRF) with Monte Carlo simulations using the XRMC code. The "Cesta" was probably discovered in the XVIII century, with the first graphic representation reported around 1761. In a later draft (dated 1764), the basket is depicted as being carried upside-down on the shoulder of a large bronze warrior (Barthélemy, 1761; Pinza, 1901; Winckelmann, 1776). The two pictorial representations differ only by the presence of handles in the more recent one. XRF measurements revealed that the handles of the object are composed of brass while the other parts are composed of bronze, suggesting that the handles are a later addition to the original object. The artifact is covered at its surface by a fairly thick corrosion patina. In order to determine the bulk bronze composition without removing the outer patina, the artifact was modeled as a two-layer object in the Monte Carlo simulations.
Teaching the Growth, Ripening, and Agglomeration of Nanostructures in Computer Experiments
ERIC Educational Resources Information Center
Meyburg, Jan Philipp; Diesing, Detlef
2017-01-01
This article describes the implementation and application of a metal deposition and surface diffusion Monte Carlo simulation in a physical chemistry lab course. Here the self-diffusion of Ag atoms on a Ag(111) surface is modeled and compared to published experimental results. Both the thin-film homoepitaxial growth during adatom deposition onto a…
Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.
Yuan, J; Moses, G A; McKenty, P W
2005-10-01
A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight-line approximation is used to follow the propagation of "Monte Carlo particles", which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing-down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal severely distorted mesh cells, particle relocation on the moving mesh, and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show that the Monte Carlo transport method predicts somewhat earlier ignition than the diffusion method and generates a higher hot-spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.
Kim, Jeongnim; Baczewski, Andrew T.; Beaudet, Todd D.; ...
2018-04-19
QMCPACK is an open source quantum Monte Carlo package for ab-initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wave functions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit (CPU) and graphical processing unit (GPU) systems. We detail the program's capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://www.qmcpack.org.
Collision of Physics and Software in the Monte Carlo Application Toolkit (MCATK)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sweezy, Jeremy Ed
2016-01-21
The topic is presented in a series of slides organized as follows: MCATK overview, development strategy, available algorithms, problem modeling (sources, geometry, data, tallies), parallelism, miscellaneous tools/features, example MCATK application, recent areas of research, and summary and future work. MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library with continuous energy neutron and photon transport. Designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP, it reads ACE formatted nuclear data generated by NJOY. The motivation behind MCATK was to reduce costs. MCATK physics involves continuous energy neutron and gamma transport with multi-temperature treatment, static eigenvalue (keff and α) algorithms, a time-dependent algorithm, and fission chain algorithms. MCATK geometry includes mesh geometries and solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross section plotters.
Monte Carlo simulation for kinetic chemotaxis model: An application to the traveling population wave
NASA Astrophysics Data System (ADS)
Yasuda, Shugo
2017-02-01
A Monte Carlo simulation of chemotactic bacteria is developed on the basis of the kinetic model and is applied to a one-dimensional traveling population wave in a microchannel. In this simulation, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method that calculates the macroscopic transport of the chemical cues in the environment. The simulation method can successfully reproduce the traveling population wave of bacteria that was observed experimentally and reveal the microscopic dynamics of individual bacteria coupled with the macroscopic transport of the chemical cues and the bacteria population density. The results obtained by the Monte Carlo method are also compared with the asymptotic solution derived from the kinetic chemotaxis equation in the continuum limit, where the Knudsen number, defined as the ratio of the mean free path of a bacterium to the characteristic length of the system, vanishes. The validity of the Monte Carlo method in the asymptotic behavior for small Knudsen numbers is numerically verified.
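The run-and-tumble Monte Carlo ingredient can be sketched in 1D: each cell runs at constant speed and tumbles (reverses) at a rate that is lower when it moves up a fixed attractant gradient. The tumble-rate modulation and all parameters below are simplifying assumptions, not the paper's kinetic model with a coupled chemical field:

```python
import random

def run_and_tumble(n_bacteria=2000, n_steps=400, dt=0.05, speed=1.0,
                   base_tumble=1.0, bias=0.8, seed=5):
    """1D run-and-tumble Monte Carlo with a fixed linear attractant gradient
    along +x: cells running up-gradient tumble less often."""
    rng = random.Random(seed)
    xs = [0.0] * n_bacteria
    dirs = [rng.choice((-1.0, 1.0)) for _ in range(n_bacteria)]
    for _ in range(n_steps):
        for i in range(n_bacteria):
            # lower tumble rate when running up the gradient (+x direction)
            lam = base_tumble * (1.0 - bias if dirs[i] > 0 else 1.0 + bias)
            if rng.random() < lam * dt:   # tumble: reverse direction
                dirs[i] = -dirs[i]
            xs[i] += speed * dirs[i] * dt
    return xs

xs = run_and_tumble()
drift = sum(xs) / len(xs)
```

The asymmetric tumble rates produce a net chemotactic drift up the gradient; in the paper this microscopic rule is additionally coupled, through a finite volume solver, to chemical fields that the bacteria themselves modify, which is what generates the traveling wave.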
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Rourke, Patrick Francis
The purpose of this report is to provide the reader with an understanding of how a Monte Carlo neutron transport code was written, developed, and evolved to calculate the probability distribution functions (PDFs) and their moments for the neutron number at a final time as well as the cumulative fission number, along with introducing several basic Monte Carlo concepts.
ERIC Educational Resources Information Center
Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying
2011-01-01
Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
MR Imaging Based Treatment Planning for Radiotherapy of Prostate Cancer
2007-02-01
developed practical methods for heterogeneity correction for MRI-based dose calculations (Chen et al 2007). 6) We will use existing Monte Carlo ... Monte Carlo verification of IMRT dose distributions from a commercial treatment planning optimization system, Phys. Med. Biol., 45:2483-95 (2000) Ma ... accuracy and consistency for MR-based IMRT treatment planning for prostate cancer. A short paper entitled "Monte Carlo dose verification of MR image based
Perturbative two- and three-loop coefficients from large β Monte Carlo
NASA Astrophysics Data System (ADS)
Lepage, G. P.; Mackenzie, P. B.; Shakespeare, N. H.; Trottier, H. D.
Perturbative coefficients for Wilson loops and the static quark self-energy are extracted from Monte Carlo simulations at large β on finite volumes, where all the lattice momenta are large. The Monte Carlo results are in excellent agreement with perturbation theory through second order. New results for third order coefficients are reported. Twisted boundary conditions are used to eliminate zero modes and to suppress Z3 tunneling.
Perturbative two- and three-loop coefficients from large β Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
G.P. Lepage; P.B. Mackenzie; N.H. Shakespeare; H.D. Trottier
1999-10-18
Perturbative coefficients for Wilson loops and the static quark self-energy are extracted from Monte Carlo simulations at large β on finite volumes, where all the lattice momenta are large. The Monte Carlo results are in excellent agreement with perturbation theory through second order. New results for third order coefficients are reported. Twisted boundary conditions are used to eliminate zero modes and to suppress Z₃ tunneling.
Fission matrix-based Monte Carlo criticality analysis of fuel storage pools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farlotti, M.; Ecole Polytechnique, Palaiseau, F 91128; Larsen, E. W.
2013-07-01
Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
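Once the fission matrix has been tallied, the criticality problem reduces to finding its dominant eigenpair, which power iteration handles in a few lines. The 3-region matrix below is a made-up stand-in for Monte Carlo tallies, not data from the study:

```python
# Power iteration on a fission matrix F: F[i][j] is the expected number of
# fission neutrons born in region i per fission neutron born in region j.
F = [
    [0.60, 0.20, 0.05],
    [0.20, 0.60, 0.20],
    [0.05, 0.20, 0.60],
]

def power_iteration(m, iters=200):
    n = len(m)
    v = [1.0 / n] * n          # initial flat fission source
    k = 0.0
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        k = sum(w)             # eigenvalue estimate (k-effective)
        v = [x / k for x in w] # renormalized fission source shape
    return k, v

k_eff, source = power_iteration(F)
```

Unlike the neutron-by-neutron source iteration in standard Monte Carlo, this deterministic iteration on the small matrix converges rapidly even when regions are weakly coupled, which is the failure mode the abstract describes for storage pools.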
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.
Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: The MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with a standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel
2012-09-25
Most Bayesian models for the analysis of complex traits are not analytically tractable, and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, and some variants are discussed as well. Features and strategies of parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that the use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
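The multiple-chains strategy can be sketched with independent Metropolis chains started from dispersed points; agreement between chains is then checked with the Gelman-Rubin diagnostic. Here the chains target a standard-normal toy posterior and run sequentially (in practice each chain would go to its own process or node); none of this is the paper's animal-breeding model:

```python
import math
import random

def chain(n, seed, start):
    """One Metropolis chain targeting a standard normal posterior."""
    rng = random.Random(seed)
    x, out = start, []
    for _ in range(n):
        trial = x + rng.uniform(-1.0, 1.0)
        # accept with probability min(1, exp(-(trial^2 - x^2)/2))
        if rng.random() < math.exp(0.5 * (x * x - trial * trial)):
            x = trial
        out.append(x)
    return out

# Independent chains from over-dispersed starting values
chains = [chain(20000, seed=s, start=st)
          for s, st in [(1, -3.0), (2, 0.0), (3, 3.0), (4, 1.0)]]
chains = [c[5000:] for c in chains]        # discard burn-in

# Gelman-Rubin diagnostic: R-hat near 1 means the chains have mixed
m, n = len(chains), len(chains[0])
means = [sum(c) / n for c in chains]
grand = sum(means) / m
b = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)     # between-chain
w = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
        for c, mu in zip(chains, means)) / m                  # within-chain
r_hat = math.sqrt(((n - 1) / n * w + b / n) / w)
```

Because the chains share no state, this embarrassingly parallel layout scales to as many workers as there are chains, which is the simpler of the two parallelization approaches the abstract contrasts.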
PMID:23009363
NASA Astrophysics Data System (ADS)
Wirtz, Ludger; Reinhold, Carlos O.; Lemell, Christoph; Burgdörfer, Joachim
2003-01-01
We present a simulation of the neutralization of highly charged ions in front of a lithium fluoride surface including the close-collision regime above the surface. The present approach employs a Monte Carlo solution of the Liouville master equation for the joint probability density of the ionic motion and the electronic population of the projectile and the target surface. It includes single as well as double particle-hole (de)excitation processes and incorporates electron correlation effects through the conditional dynamics of population strings. The input in terms of elementary one- and two-electron transfer rates is determined from classical trajectory Monte Carlo calculations as well as quantum-mechanical Auger calculations. For slow projectiles and normal incidence, the ionic motion depends sensitively on the interplay between image acceleration towards the surface and repulsion by an ensemble of positive hole charges in the surface (“trampoline effect”). For Ne10+ we find that image acceleration is dominant and no collective backscattering high above the surface takes place. For grazing incidence, our simulation delineates the pathways to complete neutralization. In accordance with recent experimental observations, most ions are reflected as neutral or even as singly charged negative particles, irrespective of the charge state of the incoming ions.
Verma, Chandrabhan; Quraishi, M. A.; Kluza, K.; Makowska-Janusik, M.; Olasunkanmi, Lukman O.; Ebenso, Eno E.
2017-01-01
D-glucose derivatives of dihydropyrido-[2,3-d:6,5-d′]-dipyrimidine-2,4,6,8(1H,3H,5H,7H)-tetraone (GPHs) have been synthesized and investigated as corrosion inhibitors for mild steel in 1 M HCl solution using gravimetric, electrochemical, surface, quantum chemical calculation and Monte Carlo simulation methods. The order of inhibition efficiencies is GPH-3 > GPH-2 > GPH-1. The results further showed that the inhibitor molecules with electron-releasing (-OH, -OCH3) substituents exhibit higher efficiency than the parent molecule without any substituents. Polarization study suggests that the studied compounds are mixed-type inhibitors with a predominantly cathodic inhibitive effect. The adsorption of these compounds on the mild steel surface obeyed the Langmuir adsorption isotherm. SEM, EDX and AFM analyses were used to confirm the inhibitive actions of the molecules on the mild steel surface. Quantum chemical (QC) calculations and Monte Carlo (MC) simulations were undertaken to further corroborate the experimental results. PMID:28317849
multiUQ: An intrusive uncertainty quantification tool for gas-liquid multiphase flows
NASA Astrophysics Data System (ADS)
Turnquist, Brian; Owkes, Mark
2017-11-01
Uncertainty quantification (UQ) can improve our understanding of the sensitivity of gas-liquid multiphase flows to variability about inflow conditions and fluid properties, creating a valuable tool for engineers. While non-intrusive UQ methods (e.g., Monte Carlo) are simple and robust, the computational cost associated with these techniques can render them impractical. In contrast, intrusive UQ techniques modify the governing equations by replacing deterministic variables with stochastic variables, adding complexity, but making UQ cost effective. Our numerical framework, called multiUQ, introduces an intrusive UQ approach for gas-liquid flows, leveraging a polynomial chaos expansion of the stochastic variables: density, momentum, pressure, viscosity, and surface tension. The gas-liquid interface is captured using a conservative level set approach, including a modified reinitialization equation which is robust and quadrature free. A least-squares method is leveraged to compute the stochastic interface normal and curvature needed in the continuum surface force method for surface tension. The solver is tested by applying uncertainty to one or two variables and verifying results against the Monte Carlo approach. NSF Grant #1511325.
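The cost contrast the abstract draws between intrusive polynomial chaos and non-intrusive Monte Carlo can be illustrated on a scalar toy problem. Everything here (the quantity u = a + b·ξ, the coefficients, the sample count) is an invented example, not the multiUQ formulation: a three-term Hermite chaos expansion of y = u² yields the mean and variance directly from the coefficients, which a brute-force Monte Carlo estimate then confirms at far higher cost.

```python
import random
import statistics

# Hermite (probabilists') chaos for y = u**2 with u = a + b*xi, xi ~ N(0, 1):
# y = (a*a + b*b)*H0 + (2*a*b)*H1(xi) + (b*b)*H2(xi), where H2(x) = x*x - 1.
a, b = 2.0, 0.5
coeffs = {0: a * a + b * b, 1: 2 * a * b, 2: b * b}

# Intrusive payoff: moments come straight from the coefficients,
# using Var(H1) = 1 and Var(H2) = 2 for a standard normal input.
pce_mean = coeffs[0]
pce_var = coeffs[1] ** 2 * 1 + coeffs[2] ** 2 * 2

# Non-intrusive comparison: brute-force Monte Carlo on the same quantity.
rng = random.Random(0)
samples = [(a + b * rng.gauss(0.0, 1.0)) ** 2 for _ in range(200_000)]
mc_mean = statistics.fmean(samples)
mc_var = statistics.pvariance(samples)
```

The chaos expansion delivers the exact moments (4.25 and 4.125 here) from three numbers, while the Monte Carlo estimate still carries sampling error after 200,000 evaluations.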
Effect of nonlinearity in hybrid kinetic Monte Carlo-continuum models.
Balter, Ariel; Lin, Guang; Tartakovsky, Alexandre M
2012-01-01
Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a kinetic Monte Carlo (KMC) model for a surface to a finite-difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition-dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition-dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that in this case the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.
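The linear-rate deposition-dissolution model used for validation can be sketched with a Gillespie-style KMC over site counts. The lattice size, rates, and time horizon below are illustrative assumptions, and only the stochastic surface half is shown (no continuum diffusion coupling):

```python
import math
import random

def kmc_deposition_dissolution(n_sites=500, k_dep=1.0, k_dis=1.0,
                               t_end=50.0, seed=7):
    """Gillespie-style KMC for deposition/dissolution with linear rates:
    every empty site deposits at rate k_dep, every occupied site
    dissolves at rate k_dis (no bulk-diffusion coupling in this sketch)."""
    rng = random.Random(seed)
    occupied, t = 0, 0.0
    while t < t_end:
        r_dep = k_dep * (n_sites - occupied)
        r_dis = k_dis * occupied
        total = r_dep + r_dis
        t += -math.log(1.0 - rng.random()) / total   # exponential waiting time
        if rng.random() * total < r_dep:             # rate-weighted event choice
            occupied += 1
        else:
            occupied -= 1
    return occupied / n_sites

coverage = kmc_deposition_dissolution()
theta_star = 1.0 / (1.0 + 1.0)   # deterministic steady state: k_dep/(k_dep + k_dis)
```

For this linear rate the KMC coverage fluctuates around the deterministic fixed point, which is the agreement the paper's first validation problem demonstrates; the nonlinear competitive-adsorption case is where the stochastic and deterministic answers part ways.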
Response Matrix Monte Carlo for electron transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballinger, C.T.; Nielsen, D.E. Jr.; Rathkopf, J.A.
1990-11-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. The combined effect of many collisions is modeled, like condensed history, except it is precalculated via an analog Monte Carlo simulation. This avoids the scattering kernel assumptions associated with condensed history methods. Results show good agreement between the RMMC method and analog Monte Carlo. 11 refs., 7 figs., 1 tab.
2014-03-27
Verification and Validation of Monte Carlo N-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box. Thesis presented to the Faculty, Department of Engineering (AFIT-ENP-14-M-05). Distribution Statement A: approved for public release; distribution unlimited.
Study of the Transition Flow Regime using Monte Carlo Methods
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1999-01-01
This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatzidakis, Stylianos; Greulich, Christopher
A cosmic ray Muon Flexible Framework for Spectral GENeration for Monte Carlo Applications (MUFFSgenMC) has been developed to support state-of-the-art cosmic ray muon tomographic applications. The flexible framework allows for easy and fast creation of source terms for popular Monte Carlo applications like GEANT4 and MCNP. This code framework simplifies the process of simulations used for cosmic ray muon tomography.
Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment
NASA Astrophysics Data System (ADS)
Ritsch, E.; Atlas Collaboration
2014-06-01
The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently takes the largest share of the computing resources in use by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for Run 2, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.
Rapid Monte Carlo Simulation of Gravitational Wave Galaxies
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2015-01-01
With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and fewer computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.
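A Monte Carlo galactic realization of this kind amounts to drawing binary parameters from assumed distributions and applying a selection cut. The log-uniform period distribution and the frequency threshold below are invented placeholders, not the paper's calibrated population model:

```python
import random

def realize_galaxy(n_binaries=100_000, seed=42):
    """Draw a toy population of compact binaries and keep the sources above
    an assumed gravitational-wave frequency threshold. The log-uniform
    period distribution and the cut are illustrative placeholders."""
    rng = random.Random(seed)
    catalog = []
    for _ in range(n_binaries):
        log_p_orb = rng.uniform(2.0, 6.0)    # log10 orbital period [s]
        f_gw = 2.0 / 10 ** log_p_orb         # leading GW harmonic [Hz]
        if f_gw > 1e-4:                      # assumed detectability cut
            catalog.append(f_gw)
    return catalog

catalog = realize_galaxy()
```

Because each realization is just repeated sampling, regenerating a galaxy under different assumed parameter distributions takes seconds, which is the speed advantage over full population synthesis that the abstract emphasizes.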
NASA Astrophysics Data System (ADS)
Cortés, Joaquin; Valencia, Eliana
1997-07-01
Monte Carlo experiments are used to investigate the adsorption of argon on a heterogeneous solid with a periodic distribution of surface energy. A study is made of the effect of the relation between the adsorbate molecule's diameter and the distance between the sites of maximum surface energy on the critical temperature, the observed phase changes, and the commensurability of the surface phase structure determined in the simulation.
COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. R. MARTIN; F. B. BROWN
2001-03-01
Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an 'exact' method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigues, Anna; Yin, Fang-Fang; Wu, Qiuwen, E-mail: Qiuwen.Wu@Duke.edu
2015-05-15
Purpose: To develop a framework for accurate electron Monte Carlo dose calculation. In this study, comprehensive validations of vendor provided electron beam phase space files for Varian TrueBeam Linacs against measurement data are presented. Methods: In this framework, the Monte Carlo generated phase space files were provided by the vendor and used as input to the downstream plan-specific simulations including jaws, electron applicators, and water phantom computed in the EGSnrc environment. The phase space files were generated based on open field commissioning data. A subset of electron energies of 6, 9, 12, 16, and 20 MeV and open and collimated field sizes 3 × 3, 4 × 4, 5 × 5, 6 × 6, 10 × 10, 15 × 15, 20 × 20, and 25 × 25 cm{sup 2} were evaluated. Measurements acquired with a CC13 cylindrical ionization chamber and electron diode detector and simulations from this framework were compared for a water phantom geometry. The evaluation metrics include percent depth dose, orthogonal and diagonal profiles at depths R{sub 100}, R{sub 50}, R{sub p}, and R{sub p+} for standard and extended source-to-surface distances (SSD), as well as cone and cut-out output factors. Results: Agreement for the percent depth dose and orthogonal profiles between measurement and Monte Carlo was generally within 2% or 1 mm. The largest discrepancies were observed within depths of 5 mm from phantom surface. Differences in field size, penumbra, and flatness for the orthogonal profiles at depths R{sub 100}, R{sub 50}, and R{sub p} were within 1 mm, 1 mm, and 2%, respectively. Orthogonal profiles at SSDs of 100 and 120 cm showed the same level of agreement. Cone and cut-out output factors agreed well with maximum differences within 2.5% for 6 MeV and 1% for all other energies. Cone output factors at extended SSDs of 105, 110, 115, and 120 cm exhibited similar levels of agreement.
Conclusions: We have presented a Monte Carlo simulation framework for electron beam dose calculations for Varian TrueBeam Linacs. Electron beam energies of 6 to 20 MeV for open and collimated field sizes from 3 × 3 to 25 × 25 cm{sup 2} were studied and results were compared to the measurement data with excellent agreement. This framework can thus be used as the platform for treatment planning of dynamic electron arc radiotherapy and other advanced dynamic techniques with electron beams.
Rodrigues, Anna; Sawkey, Daren; Yin, Fang-Fang; Wu, Qiuwen
2015-05-01
To develop a framework for accurate electron Monte Carlo dose calculation. In this study, comprehensive validations of vendor provided electron beam phase space files for Varian TrueBeam Linacs against measurement data are presented. In this framework, the Monte Carlo generated phase space files were provided by the vendor and used as input to the downstream plan-specific simulations including jaws, electron applicators, and water phantom computed in the EGSnrc environment. The phase space files were generated based on open field commissioning data. A subset of electron energies of 6, 9, 12, 16, and 20 MeV and open and collimated field sizes 3 × 3, 4 × 4, 5 × 5, 6 × 6, 10 × 10, 15 × 15, 20 × 20, and 25 × 25 cm(2) were evaluated. Measurements acquired with a CC13 cylindrical ionization chamber and electron diode detector and simulations from this framework were compared for a water phantom geometry. The evaluation metrics include percent depth dose, orthogonal and diagonal profiles at depths R100, R50, Rp, and Rp+ for standard and extended source-to-surface distances (SSD), as well as cone and cut-out output factors. Agreement for the percent depth dose and orthogonal profiles between measurement and Monte Carlo was generally within 2% or 1 mm. The largest discrepancies were observed within depths of 5 mm from phantom surface. Differences in field size, penumbra, and flatness for the orthogonal profiles at depths R100, R50, and Rp were within 1 mm, 1 mm, and 2%, respectively. Orthogonal profiles at SSDs of 100 and 120 cm showed the same level of agreement. Cone and cut-out output factors agreed well with maximum differences within 2.5% for 6 MeV and 1% for all other energies. Cone output factors at extended SSDs of 105, 110, 115, and 120 cm exhibited similar levels of agreement. We have presented a Monte Carlo simulation framework for electron beam dose calculations for Varian TrueBeam Linacs. 
Electron beam energies of 6 to 20 MeV for open and collimated field sizes from 3 × 3 to 25 × 25 cm(2) were studied and results were compared to the measurement data with excellent agreement. This framework can thus be used as the platform for treatment planning of dynamic electron arc radiotherapy and other advanced dynamic techniques with electron beams.
NASA Astrophysics Data System (ADS)
Luque-Caballero, Germán; Martín-Molina, Alberto; Quesada-Pérez, Manuel
2014-05-01
Both experiments and theory have evidenced that multivalent cations can mediate the interaction between negatively charged polyelectrolytes and like-charged objects, such as anionic lipoplexes (DNA-cation-anionic liposome complexes). In this paper, we use Monte Carlo simulations to study the electrostatic interaction responsible for the trivalent-counterion-mediated adsorption of polyelectrolytes onto a like-charged planar surface. The evaluation of the Helmholtz free energy allows us to characterize both the magnitude and the range of the interaction as a function of the polyelectrolyte charge, surface charge density, [3:1] electrolyte concentration, and cation size. Both polyelectrolyte and surface charge favor the adsorption. It should be stressed, however, that the adsorption will be negligible if the surface charge density does not exceed a threshold value. The effect of the [3:1] electrolyte concentration has also been analyzed. In a certain range of concentrations, the counterion-mediated attraction seems to be independent of this parameter, whereas very high concentrations of salt weaken the adsorption. If the trivalent cation diameter is doubled, the adsorption moderates due to excluded volume effects. The analysis of the integrated charge density and ionic distributions suggests that a delicate balance between charge inversion and screening effects governs the polyelectrolyte adsorption onto like-charged surfaces mediated by trivalent cations.
Stochastic Convection Parameterizations
NASA Technical Reports Server (NTRS)
Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios
2012-01-01
computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts
Structures of small Pd-Pt bimetallic clusters by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Cheng, Daojian; Huang, Shiping; Wang, Wenchuan
2006-11-01
Segregation phenomena of Pd-Pt bimetallic clusters with icosahedral and decahedral structures are investigated by using a Monte Carlo method based on the second-moment approximation of the tight-binding (TB-SMA) potentials. The simulation results indicate that the Pd atoms generally lie on the surface of the smaller clusters. Three-shell onion-like structures are observed in 55-atom Pd-Pt bimetallic clusters, in which a single Pd atom is located in the center, and the Pt atoms are in the middle shell, while the Pd atoms are enriched on the surface. With the increase of Pd mole fraction in 55-atom Pd-Pt bimetallic clusters, the Pd atoms occupy the vertices of clusters first, then edge and center sites, and finally the interior shell. It is noticed that some decahedral structures can be transformed into the icosahedron-like structure at 300 and 500 K. Comparisons are made with previous experiments and theoretical studies of Pd-Pt bimetallic clusters.
Quantum Monte Carlo studies of solvated systems
NASA Astrophysics Data System (ADS)
Schwarz, Kathleen; Letchworth Weaver, Kendra; Arias, T. A.; Hennig, Richard G.
2011-03-01
Solvation qualitatively alters the energetics of diverse processes from protein folding to reactions on catalytic surfaces. An explicit description of the solvent in quantum-mechanical calculations requires both a large number of electrons and exploration of a large number of configurations in the phase space of the solvent. These problems can be circumvented by including the effects of solvent through a rigorous classical density-functional description of the liquid environment, thereby yielding free energies and thermodynamic averages directly, while eliminating the need for explicit consideration of the solvent electrons. We have implemented and tested this approach within the CASINO Quantum Monte Carlo code. Our method is suitable for calculations in any basis within CASINO, including b-spline and plane wave trial wavefunctions, and is equally applicable to molecules, surfaces, and crystals. For our preliminary test calculations, we use a simplified description of the solvent in terms of an isodensity continuum dielectric solvation approach, though the method is fully compatible with more reliable descriptions of the solvent we shall employ in the future.
Direct simulation Monte Carlo prediction of on-orbit contaminant deposit levels for HALOE
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Rault, Didier F. G.
1994-01-01
A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flow field and surface conditions and geometric orientations for the satellite in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment. A detailed description of the adaptation of this solution method to the study of the satellite's environment is also presented. Results pertaining to the satellite's environment are presented regarding contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface, along with data related to code performance. Using procedures developed in standard contamination analyses, along with many worst-case assumptions, the cumulative upper-limit level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated at about 13,350 Å.
A global reaction route mapping-based kinetic Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Mitchell, Izaac; Irle, Stephan; Page, Alister J.
2016-07-01
We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.
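The KMC accept/reject selection and first-order time propagation described above can be sketched independently of the GRRM search itself. The attempt frequency and the barrier values below are assumptions for illustration; in the actual method the rates come from harmonic transition state theory applied to the transition states the GRRM search discovers:

```python
import math
import random

K_B = 8.617e-5       # Boltzmann constant [eV/K]
PREFACTOR = 1e13     # assumed harmonic-TST attempt frequency [1/s]

def kmc_step(barriers_ev, temperature, rng):
    """One KMC selection over the transition states found around the
    current minimum: harmonic-TST rates, a rate-weighted pathway choice,
    and a first-order-kinetics time increment."""
    rates = [PREFACTOR * math.exp(-e / (K_B * temperature)) for e in barriers_ev]
    total = sum(rates)
    pick, acc = rng.random() * total, 0.0
    chosen = len(rates) - 1                       # fallback against rounding
    for i, k in enumerate(rates):
        acc += k
        if pick <= acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total    # exponential waiting time
    return chosen, dt

# With barriers of 0.5 and 0.7 eV at 300 K, the lower-barrier pathway
# dominates by a factor of exp(0.2 eV / kT), roughly 2000.
rng = random.Random(3)
choices = [kmc_step([0.5, 0.7], 300.0, rng)[0] for _ in range(20_000)]
```

The exponential waiting time drawn from the total escape rate is what makes the trajectory's clock consistent with first-order kinetics.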
Magnetization switching process in a torus nanoring with easy-plane surface anisotropy
NASA Astrophysics Data System (ADS)
Alzate-Cardona, J. D.; Sabogal-Suárez, D.; Restrepo-Parra, E.
2017-11-01
We have studied the effects of surface shape anisotropy in the magnetization behavior of a torus nanoring by means of Monte Carlo simulations. Stable states (vortex and reverse vortex states) and metastable states (onion and asymmetric onion states) were found in the torus nanoring. The probability of occurrence of the metastable states (stable states) tends to decrease (increase) as the number of Monte Carlo steps per spin, temperature steps and negative values of the anisotropy constant increase. We evaluated under which conditions it is possible to switch the magnetic state of the torus nanoring from a vortex to a reverse vortex state by applying a circular magnetic field at a certain temperature interval. The switching probability (from a vortex to a reverse vortex state) depends on the value of the current intensity, which generates the circular magnetic field, and the temperature interval where the magnetic field is applied. There is a linear relationship between the current intensity and the minimum temperature interval above which the vortex state can be switched.
Non-Thermal Spectra from Pulsar Magnetospheres in the Full Electromagnetic Cascade Scenario
NASA Astrophysics Data System (ADS)
Peng, Qi-Yong; Zhang, Li
2008-08-01
We simulated non-thermal emission from a pulsar magnetosphere within the framework of a full polar-cap cascade scenario by taking the acceleration gap into account, using the Monte Carlo method. For a given electric field parallel to open field lines located at some height above the surface of a neutron star, primary electrons were accelerated by parallel electric fields and lost their energies by curvature radiation; these photons were converted to electron-positron pairs, which emitted photons through subsequent quantum synchrotron radiation and inverse Compton scattering, leading to a cascade. In our calculations, the acceleration gap was assumed to be high above the stellar surface (about several stellar radii); the primary and secondary particles and photons emitted during the journey of those particles in the magnetosphere were traced using the Monte Carlo method. In such a scenario, we calculated the non-thermal photon spectra for different pulsar parameters and compared the model results for two normal pulsars and one millisecond pulsar with the observed data.
NASA Astrophysics Data System (ADS)
Patrone, Paul; Einstein, T. L.; Margetis, Dionisios
2011-03-01
We study a 1+1D, stochastic, Burton-Cabrera-Frank (BCF) model of interacting steps fluctuating on a vicinal crystal. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. Our goal is to formulate and validate a self-consistent mean-field (MF) formalism to approximately solve the system of coupled, nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. We derive formulas for the time-dependent terrace width distribution (TWD) and its steady-state limit. By comparison with kinetic Monte-Carlo simulations, we show that our MF formalism improves upon models in which step interactions are linearized. We also indicate how fitting parameters of our steady state MF TWD may be used to determine the mass transport regime and step interaction energy of certain experimental systems. PP and TLE supported by NSF MRSEC under Grant DMR 05-20471 at U. of Maryland; DM supported by NSF under Grant DMS 08-47587.
A global reaction route mapping-based kinetic Monte Carlo algorithm.
Mitchell, Izaac; Irle, Stephan; Page, Alister J
2016-07-14
We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.
Breakdown of the Migdal-Eliashberg theory: A determinant quantum Monte Carlo study
Esterlis, I.; Nosarzewski, B.; Huang, E. W.; ...
2018-04-02
The superconducting (SC) and charge-density-wave (CDW) susceptibilities of the two-dimensional Holstein model are computed using determinant quantum Monte Carlo, and compared with results computed using the Migdal-Eliashberg (ME) approach. We access temperatures as low as 25 times less than the Fermi energy, EF, which are still above the SC transition. We find that the SC susceptibility at low T agrees quantitatively with the ME theory up to a dimensionless electron-phonon coupling λ0 ≈ 0.4 but deviates dramatically for larger λ0. We find that for large λ0 and small phonon frequency ω0 ≪ EF, CDW ordering is favored and the preferred CDW ordering vector is uncorrelated with any obvious feature of the Fermi surface.
Parameters estimation for reactive transport: A way to test the validity of a reactive model
NASA Astrophysics Data System (ADS)
Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme
The chemical parameters used in reactive transport models are not known accurately due to the complexity and the heterogeneous conditions of a real domain. We will present an efficient algorithm in order to estimate the chemical parameters using Monte-Carlo method. Monte-Carlo methods are very robust for the optimisation of the highly non-linear mathematical model describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case. Our algorithm will be used to estimate the chemical parameters of the sorption of TBT onto the natural quartz sand. By testing and comparing three models of surface complexation, we show that the proposed adsorption model cannot explain the experimental data.
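A minimal version of such a Monte Carlo parameter search can be sketched with a stand-in one-parameter decay model rather than the actual TBT surface-complexation chemistry: sample candidate parameters at random, score each against the data, and keep the best fit. The model, data, and bounds below are invented for illustration.

```python
import math
import random

def misfit(k, data):
    """Sum-of-squares mismatch between the model exp(-k*t) and observations."""
    return sum((obs - math.exp(-k * t)) ** 2 for t, obs in data)

# Synthetic "observations" from a known decay constant; a hypothetical
# stand-in for breakthrough-curve data, with true k = 0.8.
true_k = 0.8
data = [(0.5 * i, math.exp(-true_k * 0.5 * i)) for i in range(20)]

# Plain Monte Carlo search: sample the parameter space at random,
# score each candidate, keep the best.
rng = random.Random(1)
best_k, best_err = None, float("inf")
for _ in range(5000):
    k = rng.uniform(0.0, 5.0)
    err = misfit(k, data)
    if err < best_err:
        best_k, best_err = k, err
```

The robustness the abstract claims comes from this global, derivative-free sampling: a highly non-linear reactive transport misfit surface with many local minima does not trap the search the way gradient methods can be trapped.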
Breakdown of the Migdal-Eliashberg theory: A determinant quantum Monte Carlo study
NASA Astrophysics Data System (ADS)
Esterlis, I.; Nosarzewski, B.; Huang, E. W.; Moritz, B.; Devereaux, T. P.; Scalapino, D. J.; Kivelson, S. A.
2018-04-01
The superconducting (SC) and charge-density-wave (CDW) susceptibilities of the two-dimensional Holstein model are computed using determinant quantum Monte Carlo, and compared with results computed using the Migdal-Eliashberg (ME) approach. We access temperatures as low as 25 times less than the Fermi energy, EF, which are still above the SC transition. We find that the SC susceptibility at low T agrees quantitatively with the ME theory up to a dimensionless electron-phonon coupling λ0≈0.4 but deviates dramatically for larger λ0. We find that for large λ0 and small phonon frequency ω0≪EF CDW ordering is favored and the preferred CDW ordering vector is uncorrelated with any obvious feature of the Fermi surface.
Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code
NASA Technical Reports Server (NTRS)
Yamakov, Vesselin I.
2016-01-01
This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface adsorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.
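The grand canonical moves at the heart of such a code can be sketched for the simplest possible case, a non-interacting lattice gas, where the Metropolis insertion/deletion acceptance rule reproduces the exact occupancy. All parameters below are illustrative and unrelated to ParaGrandMC's actual FORTRAN implementation:

```python
import math
import random

def gcmc_lattice_gas(n_sites=2000, beta_mu=-1.0, n_sweeps=200, seed=5):
    """Grand canonical MC for a non-interacting lattice gas: particle
    insertions and deletions are accepted with Metropolis probabilities
    fixed by the chemical potential (beta_mu = chemical potential / kT)."""
    rng = random.Random(seed)
    occ = [False] * n_sites
    for _ in range(n_sweeps):
        for _ in range(n_sites):
            i = rng.randrange(n_sites)
            if occ[i]:    # trial deletion
                if rng.random() < min(1.0, math.exp(-beta_mu)):
                    occ[i] = False
            else:         # trial insertion
                if rng.random() < min(1.0, math.exp(beta_mu)):
                    occ[i] = True
    return sum(occ) / n_sites

theta = gcmc_lattice_gas()
exact = math.exp(-1.0) / (1.0 + math.exp(-1.0))   # e^{beta_mu}/(1 + e^{beta_mu})
```

In a real alloy code the energy change of each trial move (from the interatomic potential) enters the same acceptance exponent; with interactions switched off the rule reduces to the chemical-potential term alone, which is what makes this case exactly checkable.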
Zürcher, Lukas; Bookstrom, Arthur A.; Hammarstrom, Jane M.; Mars, John C.; Ludington, Stephen; Zientek, Michael L.; Dunlap, Pamela; Wallis, John C.; Drew, Lawrence J.; Sutphin, David M.; Berger, Byron R.; Herrington, Richard J.; Billa, Mario; Kuşcu, Ilkay; Moon, Charles J.; Richards, Jeremy P.; Zientek, Michael L.; Hammarstrom, Jane M.; Johnson, Kathleen M.
2015-11-18
The assessment estimates that the Tethys region contains 47 undiscovered deposits within 1 kilometer of the surface. Probabilistic estimates of numbers of undiscovered deposits were combined with grade and tonnage models in a Monte Carlo simulation to estimate probable amounts of contained metal. The 47 undiscovered deposits are estimated to contain a mean of 180 million metric tons (Mt) of copper distributed among the 18 tracts for which probabilistic estimates were made, in addition to the 62 Mt of copper already identified in the 42 known porphyry deposits in the study area. Results of Monte Carlo simulations show that 80 percent of the estimated undiscovered porphyry copper resources in the Tethys region are located in four tracts or sub-tracts.
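The combination of a probabilistic deposit-count estimate with a grade-and-tonnage model can be sketched as a simple Monte Carlo simulation; all distributions and numbers below are illustrative placeholders, not the assessment's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims = 100_000

# Illustrative inputs (not the assessment's figures): a discrete probability
# distribution for the number of undiscovered deposits in one tract, and a
# lognormal grade-and-tonnage model for contained copper per deposit (Mt).
deposit_counts = np.array([0, 1, 2, 3, 5])
count_probs = np.array([0.10, 0.30, 0.30, 0.20, 0.10])
log_mean, log_sd = np.log(3.0), 1.0

# For each trial, draw a deposit count, then draw that many deposit sizes
# and sum the contained copper.
counts = rng.choice(deposit_counts, size=n_sims, p=count_probs)
totals = np.array([rng.lognormal(log_mean, log_sd, size=n).sum() for n in counts])

# Summarise the simulated distribution of total contained copper.
print(totals.mean(), np.percentile(totals, [10, 50, 90]))
```

Reporting a mean together with percentiles, rather than a single value, is what lets statements like "80 percent of the undiscovered resource lies in four tracts" be made with an explicit probabilistic meaning.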
Mobit, P
2002-01-01
The energy responses of LiF-TLDs irradiated in megavoltage electron and photon beams have been determined experimentally by many investigators over the past 35 years, but the results vary considerably. General cavity theory has been used to model some of the experimental findings, but the predictions of these cavity theories differ from each other and from measurements by more than 13%. Recently, two groups of investigators using Monte Carlo simulations and careful experimental techniques showed that the energy response of 1 mm or 2 mm thick LiF-TLDs irradiated by megavoltage photon and electron beams is not more than 5% less than unity for low-Z phantom materials like water or Perspex. However, when the depth of irradiation is significantly different from dmax and the TLD size is more than 5 mm, the energy response is up to 12% less than unity for incident electron beams. Monte Carlo simulations of some of the experiments reported in the literature showed that some of the contradictory experimental results are reproducible with Monte Carlo simulations. Monte Carlo simulations show that the energy response of LiF-TLDs in electron beams depends on the size of the detector used, the depth of irradiation and the incident electron energy. Other differences can be attributed to absolute dose determination and the precision of the TL technique. Monte Carlo simulations have also been used to evaluate some of the published general cavity theories. The results show that some of the parameters used to evaluate Burlin's general cavity theory are wrong by a factor of 3. Despite this, the estimation of the energy response for most clinical situations using Burlin's cavity equation agrees with Monte Carlo simulations to within 1%.
Renner, Franziska
2016-09-01
Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide. Copyright © 2015. Published by Elsevier GmbH.
TASEP of interacting particles of arbitrary size
NASA Astrophysics Data System (ADS)
Narasimhan, S. L.; Baumgaertner, A.
2017-10-01
A mean-field description of the stationary-state behaviour of interacting k-mers performing totally asymmetric exclusion processes (TASEP) on an open lattice segment is presented, employing the discrete Takahashi formalism. It is shown how the maximal current and the phase diagram, including triple points, depend on the strength of repulsive and attractive interactions. We compare the mean-field results with Monte Carlo simulations of three types of interacting k-mers: monomers, dimers and trimers. (a) We find that the Takahashi estimates of the maximal current agree quantitatively with those of the Monte Carlo simulation in the absence of interaction as well as in both the attractive and the strongly repulsive regimes. However, theory and Monte Carlo results disagree in the range of weak repulsion, where the Takahashi estimates of the maximal current show a monotonic behaviour, whereas the Monte Carlo data show a peaking behaviour. It is argued that the peaking of the maximal current is due to a correlated motion of the particles. In the limit of very strong repulsion the theory predicts a universal behaviour: the maximal currents of k-mers correspond to those of non-interacting (k+1)-mers. (b) Monte Carlo estimates of the triple points for monomers, dimers and trimers show an interesting general behaviour: (i) the phase boundaries α* and β* for the entry and exit currents, respectively, as functions of interaction strength show maxima for α*, whereas β* exhibits minima at the same strength; (ii) in the attractive regime, however, the trend is reversed (β* > α*). The Takahashi estimates of the triple point for monomers show a similar trend to the Monte Carlo data except for the peaking of α*; for dimers and trimers, however, the Takahashi estimates show the opposite trend to the Monte Carlo data.
RNA folding kinetics using Monte Carlo and Gillespie algorithms.
Clote, Peter; Bayegan, Amir H
2018-04-01
RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny ([Formula: see text]20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to [Formula: see text] times that of the Gillespie algorithm, where [Formula: see text] denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to the MFPT computed by the Gillespie algorithm multiplied by [Formula: see text]; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics, as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence; see http://bioinformatics.bc.edu/clote/RNAexpNumNbors
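The contrast between the two simulation algorithms can be sketched on a toy state graph standing in for a folding landscape (the graph, energies and Metropolis-style rates below are illustrative assumptions, not the paper's model): the Gillespie algorithm draws an exponential waiting time from the total exit rate and picks a move proportionally to its rate, while the Monte Carlo algorithm proposes a uniformly random neighbour and advances time by a fixed increment.

```python
import math
import random

random.seed(0)

# Toy state graph standing in for a folding landscape:
# state -> (free energy in kcal/mol, neighbour states). Illustrative only.
graph = {
    "A": (0.0, ["B", "C"]),
    "B": (-1.0, ["A", "C"]),
    "C": (-2.0, ["A", "B"]),
}
RT = 0.6  # kcal/mol near room temperature

def rate(src, dst):
    """Metropolis rate: 1 if downhill in energy, Boltzmann factor if uphill."""
    dE = graph[dst][0] - graph[src][0]
    return min(1.0, math.exp(-dE / RT))

def gillespie_step(state):
    """Draw the next state and an exponential waiting time from the exit rates."""
    nbrs = graph[state][1]
    rates = [rate(state, n) for n in nbrs]
    total = sum(rates)
    dt = random.expovariate(total)
    r, acc = random.uniform(0.0, total), 0.0
    for nxt, k in zip(nbrs, rates):
        acc += k
        if r <= acc:
            return nxt, dt
    return nbrs[-1], dt

def monte_carlo_step(state):
    """Propose a uniform neighbour, accept with its Metropolis rate;
    time advances by one fixed unit whether or not the move is accepted."""
    nxt = random.choice(graph[state][1])
    if random.random() < rate(state, nxt):
        return nxt, 1.0
    return state, 1.0

def first_passage_time(step, start="A", stop="C"):
    """Simulate one trajectory and return its first passage time."""
    state, t = start, 0.0
    while state != stop:
        state, dt = step(state)
        t += dt
    return t

print(first_passage_time(gillespie_step), first_passage_time(monte_carlo_step))
```

Averaging `first_passage_time` over many trajectories gives an MFPT estimate for each algorithm; on non-regular networks the two averages generally differ by a degree-dependent factor, which is the point the abstract makes precise.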
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
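The bootstrap procedure for putting a confidence interval on an efficiency-gain estimate can be sketched as follows. The per-history scores are synthetic stand-ins, and the gain is taken as a simple variance ratio (assuming comparable per-history cost), an illustrative simplification of the paper's setup; the paper also searches for the shortest interval rather than the percentile interval used here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-history scores standing in for tallies from a conventional
# run (x_conv) and a correlated-sampling run (x_corr) of equal length.
x_conv = rng.normal(1.0, 0.5, size=2000)
x_corr = rng.normal(1.0, 0.1, size=2000)

def efficiency_gain(a, b):
    """Gain taken as a variance ratio, assuming comparable per-history cost."""
    return np.var(a, ddof=1) / np.var(b, ddof=1)

# Bootstrap: resample histories with replacement and re-estimate the gain,
# building up the sampling distribution of the estimator.
n = len(x_conv)
gains = np.empty(5000)
for i in range(5000):
    gains[i] = efficiency_gain(x_conv[rng.integers(0, n, n)],
                               x_corr[rng.integers(0, n, n)])

lo, hi = np.percentile(gains, [2.5, 97.5])
print(lo, hi)  # 95% bootstrap confidence interval for the gain
```

Because the bootstrap resamples the actual scores, it remains usable when a few large-weight histories make the error distribution heavy-tailed and non-normal, which is exactly the regime where the F distribution underestimates the interval.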
Simulation-Based Model Checking for Nondeterministic Systems and Rare Events
2016-03-24
…year, we have investigated AO* search and Monte Carlo Tree Search algorithms to complement and enhance CMU's SMCMDP. 1 Final Report, March 14… tree, so we can use it to find the probability of reachability for a property in PRISM's Probabilistic LTL. By finding the maximum probability of… savings, particularly when handling very large models. 2.3 Monte Carlo Tree Search: The Monte Carlo sampling process in SMCMDP can take a long time to…
Effect of the multiple scattering of electrons in Monte Carlo simulation of LINACS.
Vilches, Manuel; García-Pareja, Salvador; Guerrero, Rafael; Anguiano, Marta; Lallena, Antonio M
2008-01-01
Results obtained from Monte Carlo simulations of the transport of electrons in thin slabs of dense material media and in air slabs of different widths are analyzed. Various general-purpose Monte Carlo codes have been used: PENELOPE, GEANT3, GEANT4, EGSnrc and MCNPX. Non-negligible differences between the angular and radial distributions after the slabs have been found. The effects of these differences on the depth doses measured in water are also discussed.
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
Monte Carlos of the new generation: status and progress
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frixione, Stefano
2005-03-22
Standard parton shower Monte Carlos are designed to give reliable descriptions of low-pT physics. In the very high-energy regime of modern colliders, this may lead to largely incorrect predictions of the basic reaction processes. This motivated the recent theoretical efforts aimed at improving Monte Carlos through the inclusion of matrix elements computed beyond the leading order in QCD. I briefly review the progress made, and discuss bottom production at the Tevatron.
Monte Carlo simulation of aorta autofluorescence
NASA Astrophysics Data System (ADS)
Kuznetsova, A. A.; Pushkareva, A. E.
2016-08-01
Results of numerical Monte Carlo simulation of aorta autofluorescence are reported. Two states of the aorta, normal and with atherosclerotic lesions, are studied. A model of the studied tissue is developed on the basis of information about its optical, morphological, and physico-chemical properties. It is shown that the data obtained by numerical Monte Carlo simulation are in good agreement with experimental results, indicating the adequacy of the developed model of aorta autofluorescence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Ellis; Derek Gaston; Benoit Forget
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open-source Monte Carlo code OpenMC with the open-source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17×17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergmann, Ryan M.; Rowland, Kelly L.
2017-04-12
WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous-energy Monte Carlo neutron transport code developed at UC Berkeley to execute efficiently on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in both criticality and fixed-source modes, but fixed-source mode is currently not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous-energy Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly being built with GPU coprocessor cards in their nodes to increase their computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.
Combined experimental and Monte Carlo verification of brachytherapy plans for vaginal applicators
NASA Astrophysics Data System (ADS)
Sloboda, Ron S.; Wang, Ruqing
1998-12-01
Dose rates in a phantom around a shielded and an unshielded vaginal applicator containing Selectron low-dose-rate sources were determined by experiment and Monte Carlo simulation. Measurements were performed with thermoluminescent dosimeters in a white polystyrene phantom using an experimental protocol geared for precision. Calculations for the same set-up were done using a version of the EGS4 Monte Carlo code system modified for brachytherapy applications into which a new combinatorial geometry package developed by Bielajew was recently incorporated. Measured dose rates agree with Monte Carlo estimates to within 5% (1 SD) for the unshielded applicator, while highlighting some experimental uncertainties for the shielded applicator. Monte Carlo calculations were also done to determine a value for the effective transmission of the shield required for clinical treatment planning, and to estimate the dose rate in water at points in axial and sagittal planes transecting the shielded applicator. Comparison with dose rates generated by the planning system indicates that agreement is better than 5% (1 SD) at most positions. The precision thermoluminescent dosimetry protocol and modified Monte Carlo code are effective complementary tools for brachytherapy applicator dosimetry.
Monte Carlo modelling the dosimetric effects of electrode material on diamond detectors.
Baluti, Florentina; Deloar, Hossain M; Lansley, Stuart P; Meyer, Juergen
2015-03-01
Diamond detectors for radiation dosimetry were modelled using the EGSnrc Monte Carlo code to investigate the influence of electrode material and detector orientation on the absorbed dose. The small dimensions of the electrode/diamond/electrode detector structure required very thin voxels and the use of non-standard DOSXYZnrc Monte Carlo model parameters. The interface phenomena were investigated by simulating a 6 MV beam and detectors with different electrode materials, namely Al, Ag, Cu and Au, with thicknesses of 0.1 µm for the electrodes and 0.1 mm for the diamond, in both perpendicular and parallel detector orientations with respect to the incident beam. The smallest perturbations were observed for the parallel detector orientation and Al electrodes (Z = 13). In summary, the EGSnrc Monte Carlo code is well suited to modelling small detector geometries. The Monte Carlo model developed is a useful tool for investigating the dosimetric effects caused by different electrode materials. To minimise perturbations caused by the detector electrodes, it is recommended that the electrodes be made from a low-atomic-number material and placed parallel to the beam direction.
Physical Principle for Generation of Randomness
NASA Technical Reports Server (NTRS)
Zak, Michail
2009-01-01
A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
2016-10-21
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
Monte Carlo capabilities of the SCALE code system
Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...
2014-09-12
SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
The structure of liquid water by polarized neutron diffraction and reverse Monte Carlo modelling.
Temleitner, László; Pusztai, László; Schweika, Werner
2007-08-22
The coherent static structure factor of water has been investigated by polarized neutron diffraction. Polarization analysis allows us to separate the huge incoherent scattering background from hydrogen and to obtain high quality data of the coherent scattering from four different mixtures of liquid H2O and D2O. The information obtained by the variation of the scattering contrast confines the configurational space of water and is used by the reverse Monte Carlo technique to model the total structure factors. Structural characteristics have been calculated directly from the resulting sets of particle coordinates. Consistency with existing partial pair correlation functions, derived without the application of polarized neutrons, was checked by incorporating them into our reverse Monte Carlo calculations. We also performed Monte Carlo simulations of a hard sphere system, which provides an accurate estimate of the information content of the measured data. It is shown that the present combination of polarized neutron scattering and reverse Monte Carlo structural modelling is a promising approach towards a detailed understanding of the microscopic structure of water.
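The core of a reverse Monte Carlo refinement is a Metropolis-like acceptance test on the goodness-of-fit between a simulated and a measured structural quantity, rather than on an energy. The toy below fits a one-dimensional pair-distance histogram instead of a neutron structure factor, and all parameters (particle number, bin count, move size, the assumed "experimental" uncertainty sigma) are illustrative.

```python
import math
import random

random.seed(4)

N, L, NBINS = 30, 10.0, 10

# "Measured" target: the pair-distance histogram of uniformly distributed
# points on [0, L]; for bin k of width L/NBINS this is ~ (19 - 2k)/100.
target = [(19 - 2 * k) / 100 for k in range(NBINS)]

# Deliberately bad starting configuration: all particles clustered in [0, 2].
pos = [random.uniform(0.0, 2.0) for _ in range(N)]

def hist(p):
    """Normalised histogram of all pair distances."""
    h = [0] * NBINS
    npairs = 0
    for i in range(N):
        for j in range(i + 1, N):
            d = abs(p[i] - p[j])
            h[min(int(d / L * NBINS), NBINS - 1)] += 1
            npairs += 1
    return [c / npairs for c in h]

def chi2(p):
    """Goodness-of-fit between simulated and target histograms."""
    return sum((a - b) ** 2 for a, b in zip(hist(p), target))

sigma = 0.01  # assumed "experimental" uncertainty steering the acceptance
c0 = chi2(pos)
c = c0
for _ in range(3000):
    i = random.randrange(N)
    old = pos[i]
    pos[i] = min(max(old + random.gauss(0.0, 0.5), 0.0), L)
    c_new = chi2(pos)
    # Metropolis-like RMC acceptance on the fit quality, not on an energy.
    if c_new <= c or random.random() < math.exp(-(c_new - c) / (2 * sigma**2)):
        c = c_new
    else:
        pos[i] = old  # reject: undo the move
print(c0, c)
```

In a real RMC refinement of water, `hist` would be replaced by the computation of total structure factors for several H2O/D2O contrasts, and the acceptance would combine the chi-squared of all data sets simultaneously.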
Monte Carlo methods for multidimensional integration for European option pricing
NASA Astrophysics Data System (ADS)
Todorov, V.; Dimov, I. T.
2016-10-01
In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European-style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; then the average of independent samples of this random variable is used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk-neutral valuation formula. Then, with an appropriate change of variables, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule with one of the best low-discrepancy sequences, the Sobol sequence, for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is of interest to know which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
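The workflow described above (risk-neutral expectation, mapping to the unit cube, then crude Monte Carlo versus a low-discrepancy sequence) can be sketched for a one-dimensional European call. A van der Corput radical-inverse sequence stands in for the Sobol sequence here, and the Black-Scholes parameters are illustrative.

```python
import math
import random
from statistics import NormalDist

random.seed(3)
norm = NormalDist()

# Illustrative Black-Scholes parameters: spot, strike, rate, volatility, maturity.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def bs_call_exact():
    """Closed-form Black-Scholes call price, used as the reference value."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)

def payoff(u):
    """Discounted call payoff from a uniform sample via the inverse transform."""
    u = min(max(u, 1e-12), 1.0 - 1e-12)  # keep inv_cdf away from 0 and 1
    z = norm.inv_cdf(u)
    ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    return math.exp(-r * T) * max(ST - K, 0.0)

def crude_mc(n):
    """Crude Monte Carlo: average the payoff over pseudo-random uniforms."""
    return sum(payoff(random.random()) for _ in range(n)) / n

def van_der_corput(i, base=2):
    """Radical-inverse sequence, a 1-D building block of low-discrepancy sets."""
    q, bk = 0.0, 1.0 / base
    while i > 0:
        q += (i % base) * bk
        i //= base
        bk /= base
    return q

def quasi_mc(n):
    """Quasi-Monte Carlo: average the payoff over low-discrepancy points."""
    return sum(payoff(van_der_corput(i)) for i in range(1, n + 1)) / n

exact = bs_call_exact()
print(exact, abs(crude_mc(20_000) - exact), abs(quasi_mc(20_000) - exact))
```

On this one-dimensional problem the low-discrepancy points typically drive the integration error well below the pseudo-random estimate at the same sample count, which is the behaviour the abstract's comparison explores in higher dimensions.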
Path integral Monte Carlo ground state approach: formalism, implementation, and applications
NASA Astrophysics Data System (ADS)
Yan, Yangqian; Blume, D.
2017-11-01
Monte Carlo techniques have played an important role in understanding strongly correlated systems across many areas of physics, covering a wide range of energy and length scales. Among the many Monte Carlo methods applicable to quantum mechanical systems, the path integral Monte Carlo approach with its variants has been employed widely. Since semi-classical or classical approaches will not be discussed in this review, path integral based approaches can for our purposes be divided into two categories: approaches applicable to quantum mechanical systems at zero temperature and approaches applicable to quantum mechanical systems at finite temperature. While these two approaches are related to each other, the underlying formulation and aspects of the algorithm differ. This paper reviews the path integral Monte Carlo ground state (PIGS) approach, which solves the time-independent Schrödinger equation. Specifically, the PIGS approach allows for the determination of expectation values with respect to eigenstates of the few- or many-body Schrödinger equation, provided the system Hamiltonian is known. The theoretical framework behind the PIGS algorithm, implementation details, and sample applications for fermionic systems are presented.
Lourenço, Ana; Thomas, Russell; Bouchard, Hugo; Kacperek, Andrzej; Vondracek, Vladimir; Royle, Gary; Palmans, Hugo
2016-07-01
The aim of this study was to determine fluence corrections necessary to convert absorbed dose to graphite, measured by graphite calorimetry, to absorbed dose to water. Fluence corrections were obtained from experiments and Monte Carlo simulations in low- and high-energy proton beams. Fluence corrections were calculated to account for the difference in fluence between water and graphite at equivalent depths. Measurements were performed with narrow proton beams. Plane-parallel-plate ionization chambers with a large collecting area compared to the beam diameter were used to intercept the whole beam. High- and low-energy proton beams were provided by a scanning and double scattering delivery system, respectively. A mathematical formalism was established to relate fluence corrections derived from Monte Carlo simulations, using the fluka code [A. Ferrari et al., "fluka: A multi-particle transport code," in CERN 2005-10, INFN/TC 05/11, SLAC-R-773 (2005) and T. T. Böhlen et al., "The fluka Code: Developments and challenges for high energy and medical applications," Nucl. Data Sheets 120, 211-214 (2014)], to partial fluence corrections measured experimentally. A good agreement was found between the partial fluence corrections derived by Monte Carlo simulations and those determined experimentally. For a high-energy beam of 180 MeV, the fluence corrections from Monte Carlo simulations were found to increase from 0.99 to 1.04 with depth. In the case of a low-energy beam of 60 MeV, the magnitude of fluence corrections was approximately 0.99 at all depths when calculated in the sensitive area of the chamber used in the experiments. Fluence correction calculations were also performed for a larger area and found to increase from 0.99 at the surface to 1.01 at greater depths. Fluence corrections obtained experimentally are partial fluence corrections because they account for differences in the primary and part of the secondary particle fluence. A correction factor, F(d), has been established to relate fluence corrections defined theoretically to partial fluence corrections derived experimentally. The findings presented here are also relevant to water and tissue-equivalent-plastic materials given their carbon content.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T; Lin, H; Gao, Y
Purpose: The dynamic bowtie filter is an innovative design capable of modulating the X-ray beam and balancing the flux in the detectors, and it introduces a new way of performing patient-specific CT scan optimizations. This study demonstrates the feasibility of performing fast Monte Carlo dose calculation for a type of dynamic bowtie filter for cone-beam CT (Liu et al., PLoS ONE 9(7), 2014) using MIC coprocessors. Methods: The dynamic bowtie filter in question consists of a highly attenuating bowtie component (HB) and a weakly attenuating bowtie (WB). The HB is filled with CeCl3 solution and its surface is defined by a transcendental equation. The WB is an elliptical cylinder filled with air and immersed in the HB. As the scanner rotates, the orientation of the WB remains fixed with respect to the static patient. In our Monte Carlo simulation, the HB was approximated by 576 boxes. The phantom was a voxelized elliptical cylinder composed of PMMA and surrounded by air (44 cm × 44 cm × 40 cm, 1000 × 1000 × 1 voxels). The dose to the PMMA phantom was tallied with 0.15% statistical uncertainty under a 100 kVp source. Two Monte Carlo codes, ARCHER and MCNP-6.1, were compared. Both used double precision. Compiler flags that may trade accuracy for speed were avoided. Results: The wall time of the simulation was 25.4 seconds with ARCHER on a 5110P MIC, 40 seconds on an X5650 CPU, and 523 seconds with the multithreaded MCNP on the same CPU. The high performance of ARCHER is attributed to the parameterized geometry and vectorization of the program hotspots. Conclusion: The dynamic bowtie filter modeled in this study is able to effectively reduce the dynamic range of the detected signals for photon-counting detectors. With appropriate software optimization methods, accelerator-based (MIC and GPU) Monte Carlo dose engines have shown good performance and can contribute to patient-specific CT scan optimizations.
Takada, Kenta; Kumada, Hiroaki; Liem, Peng Hong; Sakurai, Hideyuki; Sakae, Takeji
2016-12-01
We simulated the effect of patient displacement on organ doses in boron neutron capture therapy (BNCT). In addition, we developed a faster calculation algorithm (NCT high-speed) to simulate irradiation more efficiently. We simulated dose evaluation for the standard irradiation position (reference position) using a head phantom. Cases were assumed where the patient body is shifted in the lateral directions relative to the reference position, as well as in the direction away from the irradiation aperture. Flux distributions for three neutron groups (thermal, epithermal, and fast) were calculated using NCT high-speed with a voxelized homogeneous phantom; the same three group fluxes were calculated under identical conditions with a Monte Carlo code, and the results were compared. In the evaluation of body movements, there were no significant differences for shifts of up to 9 mm in the lateral directions. However, the dose decreased by about 10% for a 9 mm shift away from the irradiation aperture. When comparing the two calculations within the first 3 cm below the phantom surface, the maximum differences between the fluxes calculated by NCT high-speed and those calculated by the Monte Carlo code were 10% for thermal neutrons and 18% for epithermal neutrons. The time required by the NCT high-speed code was about one-tenth of that of the Monte Carlo calculation. In this evaluation, longitudinal displacement has a considerable effect on the organ doses. We also achieved faster calculation of the depth distribution of thermal neutron flux using the NCT high-speed calculation code. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Monte Carlo N Particle code - Dose distribution of clinical electron beams in inhomogeneous phantoms
Nedaie, H. A.; Mosleh-Shirazi, M. A.; Allahverdi, M.
2013-01-01
Electron dose distributions calculated using the currently available analytical methods can be associated with large uncertainties. The Monte Carlo method is the most accurate method for dose calculation in electron beams. Most clinical electron beam simulation studies have been performed using non-MCNP [Monte Carlo N Particle] codes. Given the differences between Monte Carlo codes, this work aims to evaluate the accuracy of MCNP4C-simulated electron dose distributions in a homogeneous phantom and around inhomogeneities. Different types of phantoms ranging in complexity were used; namely, a homogeneous water phantom and phantoms made of polymethyl methacrylate slabs containing different-sized, low- and high-density inserts of heterogeneous materials. Electron beams with 8 and 15 MeV nominal energy generated by an Elekta Synergy linear accelerator were investigated. Measurements were performed for a 10 cm × 10 cm applicator at a source-to-surface distance of 100 cm. Individual parts of the beam-defining system were introduced into the simulation one at a time in order to show their effect on depth doses. In contrast to the first scattering foil, the secondary scattering foil, X and Y jaws, and applicator each provide up to 5% of the dose. A 2%/2 mm agreement between MCNP and measurements was found in the homogeneous phantom; in the presence of heterogeneities the agreement was in the range of 1-3%, generally within 2% of the measurements for both energies in a "complex" phantom. A full-component simulation is necessary in order to obtain a realistic model of the beam. The MCNP4C results agree well with the measured electron dose distributions. PMID:23533162
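The 2%/2 mm agreement quoted above is the standard gamma-index criterion for comparing dose distributions. A simplified 1D sketch of how such a comparison can be computed (my own illustration with toy depth-dose curves, not the authors' code):

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dose_tol=0.02, dist_tol=2.0):
    """Simplified 1D gamma index: for each reference point, minimize the
    combined dose-difference / distance-to-agreement metric over the
    evaluated distribution. Doses are normalized to the reference maximum."""
    d_max = ref_dose.max()
    gammas = []
    for x_r, d_r in zip(positions, ref_dose):
        dd = (eval_dose - d_r) / (dose_tol * d_max)   # dose-difference term
        dx = (positions - x_r) / dist_tol             # distance term (mm)
        gammas.append(np.sqrt(dd**2 + dx**2).min())
    return np.array(gammas)

x = np.linspace(0, 30, 61)               # depth in mm, 0.5 mm spacing
ref = np.exp(-((x - 15) / 8.0) ** 2)     # toy reference depth-dose curve
ev = np.exp(-((x - 15.5) / 8.0) ** 2)    # evaluated curve, shifted 0.5 mm
pass_rate = (gamma_1d(ref, ev, x) <= 1.0).mean()
```

A 0.5 mm shift is well inside the 2 mm distance tolerance, so every point passes (gamma ≤ 1).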
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
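The classical baseline that this quantum algorithm improves on is plain Monte Carlo mean estimation, whose RMS error falls as σ/√N. A minimal illustrative sketch with a toy bounded-variance subroutine (not from the paper):

```python
import random
import statistics

def mc_estimate(subroutine, n_samples, rng):
    """Estimate E[subroutine()] by averaging n_samples independent runs."""
    return statistics.fmean(subroutine(rng) for _ in range(n_samples))

rng = random.Random(0)
# Toy randomized subroutine: uniform on [0, 1], mean 0.5, variance 1/12.
estimate = mc_estimate(lambda r: r.random(), 100_000, rng)
# The classical RMS error scales as sigma / sqrt(N); Montanaro's quantum
# algorithm achieves roughly sigma / N up to log factors.
```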
Self-Learning Monte Carlo Method
NASA Astrophysics Data System (ADS)
Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large systems close to a phase transition or with strong frustration, for which local updates perform badly. In this work, we propose a new general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup. This work is supported by the DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-SC0010526.
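The two-stage SLMC idea — fit a cheap effective model to trial data, propose moves with it, and correct with a combined Metropolis ratio — can be sketched on a toy 1D Ising chain. The model, couplings, and one-parameter fit below are illustrative assumptions, not the paper's setup:

```python
import random, math

def energy(s, J1=1.0, J2=0.2):
    """True model: nearest- plus next-nearest-neighbour couplings."""
    n = len(s)
    return -sum(J1 * s[i] * s[(i + 1) % n] + J2 * s[i] * s[(i + 2) % n]
                for i in range(n))

def eff_energy(s, J_eff):
    """Learned effective model: nearest-neighbour only."""
    n = len(s)
    return -J_eff * sum(s[i] * s[(i + 1) % n] for i in range(n))

def local_sweep(s, beta, e_fn, rng):
    n = len(s)
    for _ in range(n):
        i = rng.randrange(n)
        old = e_fn(s)
        s[i] = -s[i]
        if rng.random() >= math.exp(min(0.0, -beta * (e_fn(s) - old))):
            s[i] = -s[i]  # reject the flip

rng = random.Random(1)
n, beta = 20, 0.5
s = [rng.choice((-1, 1)) for _ in range(n)]

# Stage 1: trial simulation with the true model generates training data.
data = []
for _ in range(200):
    local_sweep(s, beta, energy, rng)
    data.append((sum(s[i] * s[(i + 1) % n] for i in range(n)), energy(s)))
# One-parameter least-squares fit E ~ -J_eff * C1.
J_eff = -sum(c * e for c, e in data) / sum(c * c for c, e in data)

# Stage 2: propose via the cheap effective model; the combined acceptance
# ratio restores detailed balance with respect to the true model.
accepted = 0
for _ in range(100):
    trial = s[:]
    for _ in range(5):
        local_sweep(trial, beta, lambda c: eff_energy(c, J_eff), rng)
    dE = energy(trial) - energy(s)
    dEeff = eff_energy(trial, J_eff) - eff_energy(s, J_eff)
    if rng.random() < math.exp(min(0.0, -beta * (dE - dEeff))):
        s, accepted = trial, accepted + 1
```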
Fixed forced detection for fast SPECT Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Cajgfinger, T.; Rit, S.; Létang, J. M.; Halty, A.; Sarrut, D.
2018-03-01
Monte-Carlo simulations of SPECT images are notoriously slow to converge due to the large ratio between the number of photons emitted and detected in the collimator. This work proposes a method to accelerate the simulations based on fixed forced detection (FFD) combined with an analytical response of the detector. FFD is based on a Monte-Carlo simulation but forces the detection of a photon in each detector pixel weighted by the probability of emission (or scattering) and transmission to this pixel. The method was evaluated with numerical phantoms and on patient images. We obtained differences with analog Monte Carlo lower than the statistical uncertainty. The overall computing time gain can reach up to five orders of magnitude. Source code and examples are available in the Gate V8.0 release.
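The core of forced detection is the per-pixel weight: the probability of emission into the pixel's solid angle times the Beer-Lambert transmission along the ray to the pixel. A minimal sketch with made-up attenuation values (not Gate's implementation):

```python
import math

def ffd_weight(solid_angle_fraction, mus, lengths):
    """Forced-detection weight for one detector pixel: emission probability
    into the pixel's solid angle times the attenuation survival probability
    along the ray (Beer-Lambert over the crossed voxels)."""
    transmission = math.exp(-sum(mu * L for mu, L in zip(mus, lengths)))
    return solid_angle_fraction * transmission

# Ray crossing two materials: water-like mu = 0.015/mm over 40 mm and
# bone-like mu = 0.030/mm over 10 mm; illustrative numbers only.
w = ffd_weight(1e-5, [0.015, 0.030], [40.0, 10.0])
```

Scoring every pixel with such a weight at each interaction is what removes the huge emitted-to-detected photon ratio from the variance.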
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, William P.; Hartmann-Siantar, Christine L.; Rathkopf, James A.
1999-01-01
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.
Monte Carlo simulation: Its status and future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murtha, J.A.
1997-04-01
Monte Carlo simulation is a statistics-based analysis tool that yields probability-vs.-value relationships for key parameters, including oil and gas reserves, capital exposure, and various economic yardsticks, such as net present value (NPV) and return on investment (ROI). Monte Carlo simulation is a part of risk analysis and is sometimes performed in conjunction with or as an alternative to decision [tree] analysis. The objectives are (1) to define Monte Carlo simulation in a more general context of risk and decision analysis; (2) to provide some specific applications, which can be interrelated; (3) to respond to some of the criticisms; (4) to offer some cautions about abuses of the method and recommend how to avoid the pitfalls; and (5) to predict what the future has in store.
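The probability-vs.-value analysis described can be sketched by sampling uncertain inputs and collecting the resulting NPV distribution. All distributions and numbers below are illustrative placeholders, not from the article:

```python
import random

def npv(cash_flows, rate):
    """Net present value of yearly cash flows at a fixed discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

rng = random.Random(42)
npvs = []
for _ in range(10_000):
    capex = -rng.uniform(80, 120)             # year-0 capital exposure
    reserves = rng.lognormvariate(3.0, 0.4)   # uncertain reserves driver
    yearly = 1.5 * reserves                   # toy yearly revenue
    npvs.append(npv([capex] + [yearly] * 5, 0.10))

p_loss = sum(v < 0 for v in npvs) / len(npvs)   # probability that NPV < 0
srt = sorted(npvs)
p10, p90 = srt[1000], srt[9000]                 # percentile summary
```

The output is the probability-vs.-value relationship itself: `p_loss` and the percentile spread quantify downside risk rather than a single deterministic NPV.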
THE MOVEMENT OF OIL UNDER NON-BREAKING WAVES
The combined effects of wave kinematics, turbulent diffusion, and buoyancy on the transport of oil droplets at sea were investigated in this work using random walk techniques in a Monte Carlo framework. Six hundred oil particles were placed at the water surface and tracked for 5...
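The random-walk treatment of droplet transport can be sketched as a vertical turbulent-diffusion step plus a deterministic buoyant rise, with reflection at the surface. The diffusivity and rise velocity below are illustrative assumptions, not the study's values:

```python
import random
import math

rng = random.Random(7)
n_particles, n_steps, dt = 600, 500, 1.0
Kz = 1e-3          # vertical turbulent diffusivity, m^2/s (illustrative)
w_b = 1e-3         # buoyant rise velocity of a droplet, m/s (illustrative)

depths = [0.0] * n_particles   # particles released at the surface; z >= 0 down
for _ in range(n_steps):
    for i in range(n_particles):
        # Random-walk diffusion step plus deterministic buoyant rise.
        step = rng.gauss(0.0, math.sqrt(2 * Kz * dt)) - w_b * dt
        depths[i] = max(0.0, depths[i] + step)  # reflect at the sea surface

mean_depth = sum(depths) / n_particles
```

In this balance, the equilibrium droplet cloud has a depth scale of roughly Kz / w_b, so buoyant droplets stay concentrated near the surface.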
Parallel tempering Monte Carlo simulations of lysozyme orientation on charged surfaces
NASA Astrophysics Data System (ADS)
Xie, Yun; Zhou, Jian; Jiang, Shaoyi
2010-02-01
In this work, the parallel tempering Monte Carlo (PTMC) algorithm is applied to accurately and efficiently identify the global-minimum-energy orientation of a protein adsorbed on a surface in a single simulation. When applying the PTMC method to simulate lysozyme orientation on charged surfaces, it is found that lysozyme is easily adsorbed on negatively charged surfaces with "side-on" and "back-on" orientations. When driven by dominant electrostatic interactions, lysozyme tends to be adsorbed on negatively charged surfaces with the side-on orientation, for which the active site of lysozyme faces sideways. The side-on orientation agrees well with the experimental results where the adsorbed orientation of lysozyme is determined by electrostatic interactions. As the contribution from van der Waals interactions gradually dominates, the back-on orientation becomes the preferred one. For this orientation, the active site of lysozyme faces outward, which conforms to the experimental results where the orientation of adsorbed lysozyme is co-determined by electrostatic interactions and van der Waals interactions. It is also found that, despite its net positive charge, lysozyme can be adsorbed on positively charged surfaces with both "end-on" and back-on orientations, owing to the nonuniform charge distribution over the lysozyme surface and the screening effect from ions in solution. The PTMC simulation method provides a way to determine the preferred orientation of proteins on surfaces for biosensor and biomaterial applications.
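The PTMC machinery — independent Metropolis chains at a ladder of temperatures plus neighbour swaps accepted with probability min(1, exp(Δβ·ΔE)) — can be sketched on a scalar toy energy, with a double well standing in for the rugged protein-surface energy landscape:

```python
import random
import math

def energy(x):
    """Tilted double well; the global minimum is near x = -1."""
    return (x**2 - 1) ** 2 + 0.2 * x

rng = random.Random(3)
temps = [0.05, 0.2, 0.8, 3.0]          # temperature ladder (illustrative)
xs = [rng.uniform(-2, 2) for _ in temps]

swap_attempts = swap_accepts = 0
for sweep in range(2000):
    # Metropolis move within each replica at its own temperature.
    for k, T in enumerate(temps):
        trial = xs[k] + rng.gauss(0.0, 0.3)
        dE = energy(trial) - energy(xs[k])
        if rng.random() < math.exp(min(0.0, -dE / T)):
            xs[k] = trial
    # Attempt to swap a random pair of neighbouring replicas.
    k = rng.randrange(len(temps) - 1)
    dE = energy(xs[k]) - energy(xs[k + 1])
    dB = 1 / temps[k] - 1 / temps[k + 1]
    swap_attempts += 1
    if rng.random() < math.exp(min(0.0, dB * dE)):
        xs[k], xs[k + 1] = xs[k + 1], xs[k]
        swap_accepts += 1

acc_rate = swap_accepts / swap_attempts
```

The hot replicas cross barriers freely and feed decorrelated configurations down to the cold replica, which is what lets a single PTMC run locate the global minimum.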
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeo, Sang Chul; Lee, Hyuck Mo, E-mail: hmlee@kaist.ac.kr; Lo, Yu Chieh
2014-10-07
Ammonia (NH3) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers (E_b) of the relevant elementary processes. The full mechanism of the exact reaction path was divided into five steps (adsorption, dissociation, surface migration, penetration, and diffusion) on an Fe(100) surface pre-covered with nitrogen. The energy barrier E_b depended on the N surface coverage. The DFT results were subsequently employed as a database for the kMC simulations. We then evaluated the NH3 nitridation rate on the N pre-covered Fe surface. To determine the conditions necessary for a rapid NH3 nitridation rate, eight reaction events were considered in the kMC simulations: adsorption, desorption, dissociation, reverse dissociation, surface migration, penetration, reverse penetration, and diffusion. This study provides a real-time-scale simulation of NH3 nitridation influenced by nitrogen surface coverage that allowed us to theoretically determine a nitrogen coverage (0.56 ML) suitable for rapid NH3 nitridation. In this way, we were able to reveal the coverage dependence of the nitridation reaction using the combined DFT and kMC simulations.
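The event-selection core of such a kMC simulation can be sketched with the standard rejection-free (BKL/Gillespie) scheme, turning DFT barriers into Arrhenius rates. The barrier values and prefactor below are hypothetical placeholders, not the paper's DFT results:

```python
import math
import random

kB_T = 8.617e-5 * 700      # kB*T in eV at 700 K
nu = 1e13                  # attempt frequency, 1/s (typical assumed prefactor)

# Hypothetical barriers (eV) for a few of the surface events; illustrative only.
barriers = {"adsorption": 0.1, "dissociation": 0.9,
            "migration": 0.5, "penetration": 1.1}
rates = {ev: nu * math.exp(-Eb / kB_T) for ev, Eb in barriers.items()}

rng = random.Random(11)
t, counts = 0.0, {ev: 0 for ev in rates}
for _ in range(10_000):
    total = sum(rates.values())
    # Pick an event with probability proportional to its rate (BKL scheme).
    r = rng.random() * total
    for ev, k in rates.items():
        r -= k
        if r <= 0:
            counts[ev] += 1
            break
    # Advance the clock by an exponentially distributed waiting time,
    # which is what gives kMC its real-time scale.
    t += -math.log(rng.random()) / total
```

In the real simulation the rate table is rebuilt as the coverage changes, which is how the coverage-dependent barriers enter.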
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; ...
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
A Monte Carlo simulation study of associated liquid crystals
NASA Astrophysics Data System (ADS)
Berardi, R.; Fehervari, M.; Zannoni, C.
We have performed a Monte Carlo simulation study of a system of ellipsoidal particles with donor-acceptor sites modelling complementary hydrogen-bonding groups in real molecules. We have considered elongated Gay-Berne particles with terminal interaction sites that allow particles to associate and form dimers. The changes in the phase transitions and in the molecular organization, and the interplay between orientational ordering and dimer formation, are discussed. Particle-flip and dimer moves have been used to increase the convergence rate of the Monte Carlo (MC) Markov chain.
PEPSI — a Monte Carlo generator for polarized leptoproduction
NASA Astrophysics Data System (ADS)
Mankiewicz, L.; Schäfer, A.; Veltri, M.
1992-09-01
We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross section at first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.
NUEN-618 Class Project: Actually Implicit Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vega, R. M.; Brunner, T. A.
2017-12-14
This research describes a new method for the solution of the thermal radiative transfer (TRT) equations that is implicit in time which will be called Actually Implicit Monte Carlo (AIMC). This section aims to introduce the TRT equations, as well as the current workhorse method which is known as Implicit Monte Carlo (IMC). As the name of the method proposed here indicates, IMC is a misnomer in that it is only semi-implicit, which will be shown in this section as well.
NASA Astrophysics Data System (ADS)
Mei, Donghai; Ge, Qingfeng; Neurock, Matthew; Kieken, Laurent; Lerou, Jan
First-principles-based kinetic Monte Carlo simulation was used to track the elementary surface transformations involved in the catalytic decomposition of NO over Pt(100) and Rh(100) surfaces under lean-burn operating conditions. Density functional theory (DFT) calculations were carried out to establish the structure and energetics of all reactants, intermediates, and products over Pt(100) and Rh(100). Lateral interactions arising from neighbouring adsorbates were calculated by examining changes in the binding energies as a function of coverage and of different coadsorbed configurations. These data were fitted to a bond order conservation (BOC) model which is subsequently used to establish the effects of coverage within the simulation. The intrinsic activation barriers for all the elementary reaction steps in the proposed mechanism of NO reduction over Pt(100) were calculated using DFT. These values are corrected for coverage effects by using the parametrized BOC model internally within the simulation. This enables a site-explicit kinetic Monte Carlo simulation that can follow the kinetics of NO decomposition over Pt(100) and Rh(100) in the presence of excess oxygen. The simulations are used here to model various experimental protocols, including temperature-programmed desorption as well as batch catalytic kinetics. The simulation results for the temperature-programmed desorption and decomposition of NO over Pt(100) and Rh(100) under vacuum conditions were found to be in very good agreement with experimental results. NO decomposition is strongly tied to the number of sites that remain vacant over time. Experimental results show that Pt is active in the catalytic conversion of NO into N2 and NO2 under lean-burn conditions. The simulated reaction orders for NO and O2 were found to be +0.9 and -0.4 at 723 K, respectively. The simulation also indicates that there is no activity over Rh(100), since the surface becomes poisoned by oxygen.
Kis, Zoltán; Eged, Katalin; Voigt, Gabriele; Meckbach, Reinhard; Müller, Heinz
2004-02-01
External gamma exposure from radionuclides deposited on surfaces usually makes the major contribution to the total dose to the public living in urban-industrial environments. The aim of the paper is to give an example of a calculation of the collective and averted collective dose due to the contamination and decontamination of deposition surfaces in a complex environment, based on the results of Monte Carlo simulations. The shielding effects of the structures in complex and realistic industrial environments (where productive and/or commercial activity is carried out) were computed by the use of the Monte Carlo method. Several types of deposition areas (walls, roofs, windows, streets, lawns) were considered. Moreover, this paper gives a summary of the time dependence of the source strengths relative to a reference surface and a short overview of the mechanical and chemical intervention techniques which can be applied in this area. An exposure scenario was designed based on a survey of average German and Hungarian supermarkets. In the first part of the paper the air kermas per photon per unit area due to each specific deposition area contaminated by 137Cs were determined at several arbitrary locations in the whole environment, relative to a reference value of 8.39 × 10⁻⁴ pGy per γ m⁻². The calculations make it possible to assess the contribution of each specific deposition area to the collective dose separately. According to the current results, the roof and the paved area contribute the largest share (approximately 92%) of the total dose in the first year, taking into account the relative contamination of the deposition areas. When integrating over 10 or 50 y, these two surfaces remain the most important contributors, but the ratio shifts increasingly in favor of the roof. The decontamination of the roof and the paved area yields about 80-90% of the total averted collective dose in each calculated time period (1, 10, and 50 y).
Nicolucci, P; Schuch, F
2012-06-01
To use the Monte Carlo code PENELOPE to study the attenuation and tissue-equivalence properties of α-Al2O3:C for OSL dosimetry. Mass attenuation coefficients of α-Al2O3 and α-Al2O3:C with carbon percent weight concentrations from 1% to 150% were simulated with the PENELOPE Monte Carlo code and compared to mass attenuation coefficients of soft tissue for photon beams ranging from 50 kV to 10 MV. Also, the attenuation of primary photon beams of 6 MV and 10 MV and the generation of secondary electrons by α-Al2O3:C dosimeters positioned on the entrance surface of a water phantom were studied. A difference of up to 90% was found in the mass attenuation coefficient between pure α-Al2O3 and the material with 150% weight concentration of dopant at 1.5 keV, corresponding to the K-edge photoelectric absorption of aluminum. However, for energies above 80 keV the concentration of carbon does not affect the mass attenuation coefficient and the material presents tissue equivalence for the beams studied. The ratio between the mass attenuation coefficients for α-Al2O3:C and for soft tissue is less than unity due to the higher density of α-Al2O3 (2.12 g/cm³), and its tissue equivalence diminishes at lower concentrations of carbon and at lower energies due to the dependence of the radiation interaction effects on atomic number. The largest attenuation of the primary photon beams by the dosimeter was 16% at 250 keV, and the maximum increase in secondary electron fluence at the entrance surface of the phantom was 91% at 2 MeV. The use of OSL dosimeters in radiation therapy can be optimized by using PENELOPE Monte Carlo simulation to study the attenuation and response characteristics of the material. © 2012 American Association of Physicists in Medicine.
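The composition dependence of the mass attenuation coefficient follows the standard mixture rule: a weight-fraction-weighted sum over the components. A sketch with hypothetical μ/ρ values at a single photon energy (real values would come from tabulations such as NIST XCOM, or from a PENELOPE simulation as in the abstract):

```python
def mass_attenuation_mixture(weight_fractions, mu_rho):
    """Mixture rule: the mass attenuation coefficient of a compound is the
    weight-fraction-weighted sum of its components' coefficients."""
    total = sum(weight_fractions.values())
    return sum(w / total * mu_rho[m] for m, w in weight_fractions.items())

# Hypothetical mu/rho values (cm^2/g) at one photon energy, for illustration
# only; they are not the simulated PENELOPE results.
mu_rho = {"Al2O3": 0.160, "C": 0.151}
pure = mass_attenuation_mixture({"Al2O3": 1.0}, mu_rho)
doped = mass_attenuation_mixture({"Al2O3": 1.0, "C": 0.1}, mu_rho)
```

Above ~80 keV the component coefficients are nearly equal, so the weighted sum barely moves with dopant concentration, consistent with the insensitivity reported in the abstract.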
Migration of Carbon Adatoms on the Surface of Charged SWCNT
NASA Astrophysics Data System (ADS)
Han, Longtao; Krstic, Predrag; Kaganovich, Igor
2016-10-01
In volume plasma, the growth of SWCNTs from a transition metal catalyst can be enhanced by the incoming carbon flux on the SWCNT surface, which is generated by the adsorption and migration of carbon adatoms on the SWCNT surface. In addition, the nanotube can be charged by the irradiation of plasma particles. How this charging influences the adsorption and migration behavior of carbon atoms had not previously been revealed. Using Density Functional Theory, the Nudged Elastic Band method, and Kinetic Monte Carlo, we found equilibrium sites, vibrational frequencies, adsorption energies, the most probable pathways for migration of adatoms, and the barrier heights along these pathways. The metallic (5,5) SWCNT can support fast migration of the carbon adatom along a straight path with low barriers, which is further enhanced by the presence of negative charge on the SWCNT. The enhancement is due to the higher adsorption energy and hence longer lifetime of the adatom on the charged SWCNT surface. The lifetime and migration distance of the adatom increase by three and two orders of magnitude, respectively, as shown by Kinetic Monte Carlo simulation. These results support the surface-migration mechanism of SWCNT growth in a plasma environment. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Material Sciences and Engineering Division.
Zoller, Christian Johannes; Hohmann, Ansgar; Foschum, Florian; Geiger, Simeon; Geiger, Martin; Ertl, Thomas Peter; Kienle, Alwin
2018-06-01
A GPU-based Monte Carlo software package (MCtet) was developed to calculate light propagation in arbitrarily shaped objects, such as a human tooth, represented by a tetrahedral mesh. A unique feature of MCtet is a concept for realizing different kinds of light sources illuminating the complex-shaped surface of an object, for which no preprocessing step is needed. With this concept, it is also possible to account for photons leaving a turbid medium and re-entering it in the case of a concave object. The correct implementation was shown by comparison with five other Monte Carlo software packages. A hundredfold acceleration compared with CPU-based programs was found. MCtet can simulate anisotropic light propagation, e.g., by accounting for scattering at cylindrical structures. The important influence of anisotropic light propagation, caused, e.g., by the tubules in human dentin, is shown for the transmission spectrum through a tooth. It was found that the sensitivity of transmission spectra to a change in the oxygen saturation inside the pulp is much larger if the tubules are considered. Another "light guiding" effect, based on a combination of low scattering and a high refractive index in enamel, is described. © 2018 Society of Photo-Optical Instrumentation Engineers (SPIE).
Wong, Un-Hong; Wu, Yunzhao; Wong, Hon-Cheng; Liang, Yanyan; Tang, Zesheng
2014-01-01
In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. Existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from an Interference Imaging spectrometer (IIM) on an orbiter are affected not only by the composition of minerals but also by environmental factors, and these factors cannot be well addressed by a single model alone. Our method uses Monte Carlo ray tracing to simulate large-scale effects, such as reflection from the topography of the lunar soil, and Hapke's model to calculate the reflection intensity of the internal scattering effects of particles of the lunar soil. Therefore, both large-scale and microscale effects are considered, providing a more accurate model of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and the Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data to remove the influence of lunar topography on the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface.
Kosmidis, Kosmas; Argyrakis, Panos; Macheras, Panos
2003-07-01
To verify the Higuchi law and study drug release from cylindrical and spherical matrices by means of Monte Carlo computer simulation. A one-dimensional matrix, based on the theoretical assumptions of the derivation of the Higuchi law, was simulated and its time evolution was monitored. Cylindrical and spherical three-dimensional lattices were simulated, with sites at the boundary of the lattice denoted as leak sites. Particles were allowed to move inside the lattice using the random walk model, and excluded volume interactions between the particles were assumed. We monitored the system's time evolution for different lattice sizes and different initial particle concentrations. The Higuchi law was verified using the Monte Carlo technique in a one-dimensional lattice. It was found that Fickian drug release from cylindrical matrices can be approximated nicely with the Weibull function. A simple linear relation between the Weibull function parameters and the specific surface of the system was found. Drug release from a matrix, as a result of a diffusion process assuming excluded volume interactions between the drug molecules, can be described using a Weibull function. This model, although approximate and semiempirical, has the benefit of providing a simple physical connection between the model parameters and the system geometry, which was missing from other semiempirical models.
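The simulation setup described — a lattice random walk with excluded volume and leak sites at the boundary — can be sketched in 1D. Lattice size, step count, and the sealed/leaky boundary assignment below are arbitrary illustrative choices:

```python
import random

rng = random.Random(5)
n_sites = 100
lattice = [1] * n_sites       # fully loaded 1-D matrix, one particle per site
released, history = 0, []

for step in range(20_000):
    i = rng.randrange(n_sites)
    if lattice[i] == 1:
        j = i + rng.choice((-1, 1))       # random-walk step
        if j >= n_sites:
            lattice[i], released = 0, released + 1   # leak site: escapes
        elif j >= 0 and lattice[j] == 0:  # excluded volume: move only if empty
            lattice[i], lattice[j] = 0, 1
        # j < 0 is the sealed boundary: the move is simply rejected
    history.append(released / n_sites)

frac = released / n_sites
```

Plotting `history` against the square root of the step count would show the early-time Higuchi-like behaviour before the matrix depletes.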
Kerisit, Sebastien; Pierce, Eric M.; Ryan, Joseph V.
2014-09-19
Borosilicate nuclear waste glasses develop complex altered layers as a result of coupled processes such as hydrolysis of network species, condensation of Si species, and diffusion. However, diffusion has often been overlooked in Monte Carlo models of the aqueous corrosion of borosilicate glasses. Therefore, in this paper three different models for dissolved Si diffusion in the altered layer were implemented in a Monte Carlo model and evaluated for glasses in the compositional range (75 − x) mol% SiO2, (12.5 + x/2) mol% B2O3, and (12.5 + x/2) mol% Na2O, where 0 ≤ x ≤ 20%, corroded in static conditions at a surface-area-to-volume ratio of 1000 m⁻¹. The three models considered instantaneous homogenization (M1), linear concentration gradients (M2), and concentration profiles determined by solving Fick's 2nd law using a finite difference method (M3). Model M3 revealed that concentration profiles in the altered layer are not linear and show changes in shape and magnitude as corrosion progresses, unlike those assumed in model M2. Furthermore, model M3 showed that, for borosilicate glasses with a high forward dissolution rate compared to the diffusion rate, the gradual polymerization and densification of the altered layer is significantly delayed compared to models M1 and M2. Finally, models M1 and M2 were found to be appropriate only for glasses with high release rates, such as simple borosilicate glasses with low ZrO2 content.
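Model M3's key ingredient, an explicit finite-difference solution of Fick's 2nd law, can be sketched as follows. The geometry, diffusivity, and boundary conditions are illustrative; the paper's model couples this diffusion step to the glass dissolution chemistry:

```python
def diffuse_step(c, D, dx, dt):
    """One explicit finite-difference step of Fick's 2nd law,
    dc/dt = D * d2c/dx2, with zero-flux (mirror) boundaries.
    Stable when D*dt/dx**2 <= 0.5."""
    r = D * dt / dx**2
    new = c[:]
    for i in range(len(c)):
        left = c[i - 1] if i > 0 else c[i]              # mirror at boundary
        right = c[i + 1] if i < len(c) - 1 else c[i]
        new[i] = c[i] + r * (left - 2 * c[i] + right)
    return new

# Dissolved-Si pulse spreading through a 50-cell altered layer (toy numbers).
conc = [0.0] * 50
conc[0] = 1.0                   # source at the glass/altered-layer interface
for _ in range(500):
    conc = diffuse_step(conc, D=1e-2, dx=0.1, dt=0.4)   # r = 0.4, stable

total = sum(conc)
```

Even this toy run shows the nonlinear, time-evolving profiles that distinguish M3 from the linear-gradient assumption of M2; the zero-flux scheme also conserves total mass exactly.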
Analytical Applications of Monte Carlo Techniques.
ERIC Educational Resources Information Center
Guell, Oscar A.; Holcombe, James A.
1990-01-01
Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
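The canonical analytical application of these techniques is Monte Carlo integration; a minimal sketch estimating a definite integral by uniform random sampling:

```python
import random

def mc_integrate(f, a, b, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [a, b]:
    (b - a) times the sample mean of f at uniform random points."""
    return (b - a) * sum(f(a + (b - a) * rng.random()) for _ in range(n)) / n

rng = random.Random(0)
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0, 200_000, rng)
# True value is 1/3; the statistical error shrinks as 1/sqrt(n).
```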
Monte Carlo simulation of proton track structure in biological matter
Quinto, Michele A.; Monti, Juan M.; Weck, Philippe F.; ...
2017-05-25
Here, understanding the radiation-induced effects at the cellular and subcellular levels remains crucial for predicting the evolution of irradiated biological matter. In this context, Monte Carlo track-structure simulations have rapidly emerged among the most suitable and powerful tools. However, most existing Monte Carlo track-structure codes rely heavily on the use of semi-empirical cross sections as well as water as a surrogate for biological matter. In the current work, we report on the up-to-date version of our homemade Monte Carlo code TILDA-V – devoted to the modeling of the slowing-down of 10 keV–100 MeV protons in both water and DNA – where the main collisional processes are described by means of an extensive set of ab initio differential and total cross sections.
Exploring cluster Monte Carlo updates with Boltzmann machines
NASA Astrophysics Data System (ADS)
Wang, Lei
2017-11-01
Boltzmann machines are physics-informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applied back to physics, Boltzmann machines are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of Boltzmann machines can even yield different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
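The classical cluster updates that the paper connects to Boltzmann-machine sampling can be illustrated with the standard Wolff algorithm for the 2-D Ising model. This is a textbook sketch; the lattice size, temperature, and number of updates are arbitrary choices, not the paper's.

```python
import math, random

def wolff_update(spins, L, beta):
    """One Wolff cluster update for the 2-D Ising model (J = 1): grow a
    cluster of aligned spins, adding each aligned neighbor with bond
    probability p = 1 - exp(-2*beta), then flip the whole cluster."""
    p_add = 1.0 - math.exp(-2.0 * beta)
    seed = (random.randrange(L), random.randrange(L))
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        x, y = stack.pop()
        neighbors = [((x + 1) % L, y), ((x - 1) % L, y),
                     (x, (y + 1) % L), (x, (y - 1) % L)]
        for n in neighbors:
            if n not in cluster and spins[n] == s0 and random.random() < p_add:
                cluster.add(n)
                stack.append(n)
    for site in cluster:
        spins[site] = -s0
    return len(cluster)

random.seed(1)
L = 16
spins = {(x, y): 1 for x in range(L) for y in range(L)}
for _ in range(200):
    wolff_update(spins, L, beta=0.6)     # ordered phase (beta_c ~ 0.4407)
m = abs(sum(spins.values())) / L**2      # magnetization per spin
```

In the ordered phase the large flipped clusters keep |m| high while decorrelating configurations far faster than single-spin flips would.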
NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media
NASA Astrophysics Data System (ADS)
Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique
2017-08-01
NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.
NASA Astrophysics Data System (ADS)
Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh
2014-06-01
For several years, Monte Carlo burnup/depletion codes have appeared which couple a Monte Carlo code, simulating the neutron transport, to deterministic methods that handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way makes it possible to track fine 3-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this paper we present a methodology to avoid these repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: indeed, the different burnup steps may be seen as perturbations of the isotopic concentration of an initial Monte Carlo simulation. First, we present this method and provide details on the perturbative technique used, namely correlated sampling. Then, the implementation of this method in the TRIPOLI-4® code is discussed, as well as the precise calculation scheme able to bring an important speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a REP-like assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude.
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, which allow complicated geometrical structures to be analyzed. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of Monte Carlo simulations obtained with exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight, and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing further acceleration.
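The kind of approximation described can be sketched as follows. This particular range-reduction plus short atanh-series choice is an illustrative way to meet the stated sub-1% error budget, not the authors' actual polynomial; the logarithm is the one used to sample exponential photon free paths.

```python
import math, random

def fast_log(x):
    """Approximate natural log for x > 0 with relative error well below
    1% on (0, 1). Range-reduce x = m * 2**e with m in [0.5, 1), then use
    a short atanh series: ln(m) ~= 2*(z + z**3/3), z = (m - 1)/(m + 1)."""
    m, e = math.frexp(x)                       # x = m * 2**e
    z = (m - 1.0) / (m + 1.0)
    return 2.0 * (z + z**3 / 3.0) + e * 0.6931471805599453

def sample_path_length(mu_t, rng=random.random):
    """Sample a photon free path from an exponential distribution with
    attenuation coefficient mu_t, using the approximate logarithm."""
    return -fast_log(rng()) / mu_t
```

For uniform deviates u in (0, 1) the worst relative error (near u = 0.5) is about 0.3%, comfortably inside the 1% budget quoted in the abstract.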
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardiansyah, D.; Haryanto, F.; Male, S.
2014-09-30
Prism is a non-commercial radiotherapy treatment planning system (RTPS) developed by Ira J. Kalet at the University of Washington. An inhomogeneity factor is included in the Prism TPS dose calculation. The aim of this study is to investigate the sensitivity of the dose calculation in Prism using Monte Carlo simulation, with a phase-space source from a linear accelerator (LINAC) head implemented in the simulation. To achieve this aim, the Prism dose calculation is compared with an EGSnrc Monte Carlo simulation, and the percentage depth dose (PDD) and R50 from both calculations are examined. BEAMnrc simulated the electron transport in the LINAC head and produced a phase-space file, which was then used as DOSXYZnrc input to simulate the electron transport in the phantom. The study started with a commissioning process in a water phantom, which adjusted the Monte Carlo simulation to match the Prism RTPS; the commissioning result was then used for the study of the inhomogeneity phantom. The physical parameters of the inhomogeneity phantom varied in this study are density, location, and thickness of the tissue. The commissioning showed that the optimum energy of the Monte Carlo simulation for the 6 MeV electron beam is 6.8 MeV, using R50 and the PDD with practical range (R_p) as references. In the inhomogeneity study, the average deviation for all cases in the region of interest is below 5%. Based on ICRU recommendations, Prism has a good ability to calculate the radiation dose in inhomogeneous tissue.
Monte Carlo Computational Modeling of Atomic Oxygen Interactions
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Stueber, Thomas J.; Miller, Sharon K.; De Groh, Kim K.
2017-01-01
Computational modeling of the erosion of polymers caused by atomic oxygen in low Earth orbit (LEO) is useful for determining areas of concern for spacecraft environment durability. Successful modeling requires that the characteristics of the environment, such as the atomic oxygen energy distribution, flux, and angular distribution, be properly represented in the model. Thus, whether the atomic oxygen arrives normal or inclined to a surface, and whether it arrives from a consistent direction or sweeps across the surface, as in the case of polymeric solar array blankets, is important for determining durability. When atomic oxygen impacts a polymer surface, it can react, removing a certain volume per incident atom (called the erosion yield); recombine; or be ejected as an active oxygen atom to potentially either react with other polymer atoms or exit into space. Scattered atoms can also have a lower energy as a result of partial or total thermal accommodation. Many solutions to polymer durability in LEO involve protective thin films of metal oxides, such as SiO2, to prevent atomic oxygen erosion. Such protective films also have their own interaction characteristics. A Monte Carlo computational model has been developed which takes into account the various types of atomic oxygen arrival, how atomic oxygen reacts with a representative polymer (polyimide Kapton H), and how it reacts at defect sites in an oxide protective coating, such as SiO2, on that polymer. Although this model was initially intended to determine atomic oxygen erosion behavior at defect sites for the International Space Station solar arrays, it has been used to predict atomic oxygen erosion or oxidation behavior on many other spacecraft components, including erosion of polymeric joints, durability of solar array blanket box covers, and scattering of atomic oxygen into telescopes and microwave cavities where oxidation of critical component surfaces can take place.
The computational model is a two-dimensional model with the capability to tune how the atomic oxygen reacts, scatters, or recombines on polymer or nonreactive surfaces. In addition to the specification of atomic oxygen arrival details, a total of 15 atomic oxygen interaction parameters have been identified as necessary to properly simulate the interactions and resulting polymer erosion that have been observed in LEO. The tuning of the Monte Carlo model has been accomplished by adjusting interaction parameters so that the erosion patterns produced by the model match those from several actual LEO space experiments. Surface texturing in LEO can also be predicted by the model. Such comparison of space tests with ground laboratory experiments has enabled confidence in ground-laboratory lifetime prediction of protected polymers. Results of Monte Carlo tuning, examples of surface texturing and undercutting erosion prediction, and several examples of how the model can be used to predict other LEO and Mars orbital space results are presented.
Markov Chain Monte Carlo Bayesian Learning for Neural Networks
NASA Technical Reports Server (NTRS)
Goodrich, Michael S.
2011-01-01
Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we develop a powerful methodology for estimating the full residual uncertainty in network weights, and therefore in network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov chain Monte Carlo method.
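The sampling idea can be stripped down to a random-walk Metropolis chain over a single "network weight" with a Gaussian prior. The one-weight model, prior width, and step size below are illustrative assumptions, not the paper's Jeffreys-prior construction; the point is that the chain yields a posterior distribution over weights rather than a single point estimate.

```python
import math, random

def log_posterior(w, xs, ys, sigma=0.5, prior_sigma=10.0):
    """Log of the unnormalized posterior: Gaussian likelihood for a
    one-weight 'network' y = w*x, plus a broad Gaussian prior on w."""
    ll = sum(-0.5 * ((y - w * x) / sigma) ** 2 for x, y in zip(xs, ys))
    return ll - 0.5 * (w / prior_sigma) ** 2

def metropolis(xs, ys, n_samples=5000, step=0.2, w0=0.0):
    """Random-walk Metropolis: propose w' = w + N(0, step), accept with
    probability min(1, posterior ratio). Returns the chain of weights."""
    chain, w, lp = [], w0, log_posterior(w0, xs, ys)
    for _ in range(n_samples):
        w_new = w + random.gauss(0.0, step)
        lp_new = log_posterior(w_new, xs, ys)
        if math.log(random.random()) < lp_new - lp:
            w, lp = w_new, lp_new
        chain.append(w)
    return chain

random.seed(0)
xs = [0.5 * i for i in range(10)]                     # synthetic data,
ys = [2.0 * x + random.gauss(0.0, 0.5) for x in xs]   # true weight w = 2
chain = metropolis(xs, ys)
w_mean = sum(chain[1000:]) / len(chain[1000:])        # discard burn-in
```

The spread of the post-burn-in chain is the "residual uncertainty" in the weight; its mean recovers the true value within the posterior standard deviation.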
Monte Carlo simulations of nematic and chiral nematic shells
NASA Astrophysics Data System (ADS)
Wand, Charlie R.; Bates, Martin A.
2015-01-01
We present a systematic Monte Carlo simulation study of thin nematic and cholesteric shells with planar anchoring using an off-lattice model. The results obtained using the simple model correspond with previously published results for lattice-based systems, with the number, type, and position of defects observed dependent on the shell thickness with four half-strength defects in a tetrahedral arrangement found in very thin shells and a pair of defects in a bipolar (boojum) configuration observed in thicker shells. A third intermediate defect configuration is occasionally observed for intermediate thickness shells, which is stabilized in noncentrosymmetric shells of nonuniform thickness. Chiral nematic (cholesteric) shells are investigated by including a chiral term in the potential. Decreasing the pitch of the chiral nematic leads to a twisted bipolar (chiral boojum) configuration with the director twist increasing from the inner to the outer surface.
Radiance and polarization of multiple scattered light from haze and clouds.
Kattawar, G W; Plass, G N
1968-08-01
The radiance and polarization of multiply scattered light are calculated from the Stokes vectors by a Monte Carlo method. The exact scattering matrix for a typical haze and for a cloud whose spherical drops have an average radius of 12 μm is calculated from Mie theory. The Stokes vector is transformed in a collision by this scattering matrix and the rotation matrix. The two angles that define the photon direction after scattering are chosen by a random process that correctly simulates the actual distribution functions for both angles. The Monte Carlo results for Rayleigh scattering compare favorably with well-known tabulated results. Curves are given of the reflected and transmitted radiances and polarizations for both the haze and cloud models and for several solar angles, optical thicknesses, and surface albedos. The dependence on these various parameters is discussed.
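The step of choosing the two scattering angles by a random process can be sketched for the Rayleigh case, whose (unpolarized) phase function is p(μ) ∝ 1 + μ² with μ = cos θ, using simple rejection sampling. This is an illustrative sketch, not the authors' code, and it omits the Stokes-vector transformation.

```python
import math, random

def sample_rayleigh_cos_theta(rng=random.random):
    """Rejection-sample mu = cos(theta) from the Rayleigh phase
    function p(mu) proportional to 1 + mu**2 on [-1, 1]."""
    while True:
        mu = 2.0 * rng() - 1.0
        if rng() * 2.0 <= 1.0 + mu * mu:   # constant envelope of 2
            return mu

def scatter_direction(rng=random.random):
    """Draw the two angles that set the photon direction after a
    Rayleigh scattering event: mu = cos(theta), and a uniform azimuth."""
    return sample_rayleigh_cos_theta(rng), 2.0 * math.pi * rng()

random.seed(2)
samples = [sample_rayleigh_cos_theta() for _ in range(20000)]
mean_mu = sum(samples) / len(samples)
```

For this symmetric phase function the sample mean of μ should vanish and the second moment should approach the analytic value E[μ²] = 2/5.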
Monte Carlo simulation of light reflection from cosmetic powders on the skin
NASA Astrophysics Data System (ADS)
Okamoto, Takashi; Motoda, Masafumi; Igarashi, Takanori; Nakao, Keisuke
2011-07-01
The reflection and scattering properties of light incident on skin covered with powder particles have been investigated. A three-layer skin structure with a spot is modeled, and the propagation of light in the skin and the scattering of light by particles on the skin surface are simulated by means of a Monte Carlo method. Under the condition in which only single scattering of light occurs in the powder layer, the reflection spectra of light from the skin change dramatically with the size of powder particles. The color difference between normal skin and spots is found to diminish more when powder particles smaller than the wavelength of light are used. It is shown that particle polydispersity suppresses substantially the extreme spectral change caused by monodisperse particles with a size comparable to the light wavelength.
NASA Technical Reports Server (NTRS)
Polig, E.; Jee, W. S.; Kruglikov, I. L.
1992-01-01
Factors relating the local concentration of a bone-seeking alpha-particle emitter to the mean hit rate have been determined for nuclei of bone lining cells using a Monte Carlo procedure. Cell nuclei were approximated by oblate spheroids with dimensions and location taken from a previous histomorphometric study. The Monte Carlo simulation is applicable for planar and diffuse labels at plane or cylindrical bone surfaces. Additionally, the mean nuclear dose per hit, the dose mean per hit, the mean track segment length and its second moment, the percentage of stoppers, and the frequency distribution of the dose have been determined. Some basic features of the hit statistics for bone lining cells have been outlined, and the consequences of existing standards of radiation protection with regard to the hit frequency to cell nuclei are discussed.
NASA Astrophysics Data System (ADS)
Gelß, Patrick; Matera, Sebastian; Schütte, Christof
2016-06-01
In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this equation is too high-dimensional to be solved with standard numerical techniques, and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the tensor train format. The performance of the approach is demonstrated on a first-principles-based, reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.
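The kinetic Monte Carlo sampling approach that the tensor-train solution is compared against can be sketched with the standard Gillespie algorithm on a toy adsorption/desorption model. The reactions, rates, and lattice size here are illustrative, not the RuO2(110) model.

```python
import math, random

def gillespie(rates, state, stoich, t_end, rng=random.random):
    """Kinetic Monte Carlo (Gillespie) sampling of a Markovian master
    equation: at each step draw an exponential waiting time from the
    total rate, then pick a reaction with probability proportional to
    its propensity, and apply its stoichiometric change."""
    t = 0.0
    while True:
        props = [r(state) for r in rates]
        total = sum(props)
        if total == 0.0:
            return state                      # absorbing state
        t += -math.log(rng()) / total
        if t >= t_end:
            return state
        pick, acc = rng() * total, 0.0
        for prop, change in zip(props, stoich):
            acc += prop
            if pick <= acc:
                state = tuple(s + d for s, d in zip(state, change))
                break

# Toy surface model (illustrative): adsorption A(g) -> A* at rate k_ads
# per empty site, desorption A* -> A(g) at k_des per occupied site, on a
# lattice of N sites.  State = (number of occupied sites,)
N, k_ads, k_des = 100, 1.0, 1.0
rates = [lambda s: k_ads * (N - s[0]), lambda s: k_des * s[0]]
stoich = [(1,), (-1,)]
random.seed(3)
finals = [gillespie(rates, (0,), stoich, t_end=20.0)[0] for _ in range(200)]
avg = sum(finals) / len(finals)
```

With equal rates the equilibrium coverage is 1/2, so the ensemble average over trajectories should sit near N/2; the master-equation solution would give this expectation directly, without sampling noise.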
Monte Carlo modeling of fluorescence in semi-infinite turbid media
NASA Astrophysics Data System (ADS)
Ong, Yi Hong; Finlay, Jarod C.; Zhu, Timothy C.
2018-02-01
The incident field size and the interplay of absorption and scattering can influence the in-vivo light fluence rate distribution and complicate the absolute quantification of fluorophore concentration in-vivo. In this study, we use Monte Carlo simulations to evaluate the effect of the incident beam radius and optical properties on the fluorescence signal collected by an isotropic detector placed on the tissue surface. The optical properties at the excitation and emission wavelengths are assumed to be identical. We compute correction factors to correct the fluorescence intensity for variations due to incident field size and optical properties. The correction factors are fitted to a four-parameter empirical correction function, and the changes in each parameter are compared for various beam radii over a range of physiologically relevant tissue optical properties (μa = 0.1–1 cm⁻¹, μs′ = 5–40 cm⁻¹).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow the formulation of a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named ‘stochastic resolution’ in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named ‘random removal’ in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in (1) the free-molecular regime or (2) the continuum regime are simulated for this purpose.
NASA Astrophysics Data System (ADS)
Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel
2018-05-01
We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai; ...
2017-11-07
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.
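The rank-1 bookkeeping that the delayed scheme batches can be sketched in a few lines: when row k of the Slater matrix A is replaced by a new row u (one electron moves), the Sherman-Morrison identity gives the acceptance ratio det(A')/det(A) = Σ_j u_j (A⁻¹)_{jk} from the stored inverse alone, with no fresh determinant evaluation. The matrices below are illustrative, not from the paper.

```python
def det_and_inv(A):
    """Determinant and inverse via Gauss-Jordan with partial pivoting
    (fine for the tiny illustrative matrices used here)."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    det = 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if p != c:
            M[c], M[p] = M[p], M[c]
            det = -det
        det *= M[c][c]
        inv_piv = 1.0 / M[c][c]
        M[c] = [x * inv_piv for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return det, [row[n:] for row in M]

def row_update_ratio(Ainv, new_row, k):
    """det(A')/det(A) when row k of A is replaced by new_row, computed
    from the stored inverse: R = sum_j new_row[j] * Ainv[j][k]."""
    return sum(new_row[j] * Ainv[j][k] for j in range(len(new_row)))

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
detA, Ainv = det_and_inv(A)
new_row = [1.0, 0.0, 2.0]          # proposed single-electron move (row 1)
R = row_update_ratio(Ainv, new_row, 1)

A2 = [A[0], new_row, A[2]]         # matrix after the move, for checking
detA2, _ = det_and_inv(A2)
```

The delayed scheme of the paper postpones the corresponding inverse updates for K accepted moves and applies them en bloc as matrix-matrix products; the ratio formula above is what each step still evaluates cheaply in the meantime.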
Monte Carlo: in the beginning and some great expectations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metropolis, N.
1985-01-01
The central theme will be on the historical setting and origins of the Monte Carlo Method. The scene was the post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo Event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the Method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest, perhaps, is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo Method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited for the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences.
Calculating Potential Energy Curves with Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Powell, Andrew D.; Dawes, Richard
2014-06-01
Quantum Monte Carlo (QMC) is a computational technique that can be applied to the electronic Schrödinger equation for molecules. QMC methods such as Variational Monte Carlo (VMC) and Diffusion Monte Carlo (DMC) have demonstrated the capability of capturing large fractions of the correlation energy, thus suggesting their possible use for high-accuracy quantum chemistry calculations. QMC methods scale particularly well with respect to parallelization, making them an attractive consideration in anticipation of next-generation computing architectures which will involve massive parallelization with millions of cores. Due to the statistical nature of the approach, in contrast to standard quantum chemistry methods, uncertainties (error bars) are associated with each calculated energy. This study focuses on the cost, feasibility, and practical application of calculating potential energy curves for small molecules with QMC methods. Trial wave functions were constructed with the multi-configurational self-consistent field (MCSCF) method from GAMESS-US [1]. The CASINO Monte Carlo quantum chemistry package [2] was used for all of the DMC calculations. An overview of our progress in this direction will be given. References: [1] M. W. Schmidt et al., J. Comput. Chem. 14, 1347 (1993). [2] R. J. Needs et al., J. Phys.: Condens. Matter 22, 023201 (2010).
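The VMC side of such calculations can be illustrated with a textbook one-dimensional harmonic-oscillator example (not the GAMESS/CASINO workflow): Metropolis-sample |ψ|² for a trial wavefunction ψ_α(x) = exp(−α x²) and average the local energy. The statistical error bar mentioned in the abstract shrinks to zero at the exact α, where the local energy is constant.

```python
import math, random

def local_energy(x, alpha):
    """Local energy of psi = exp(-alpha*x^2) for H = -1/2 d2/dx2 + x^2/2:
    E_L(x) = alpha + (1/2 - 2*alpha**2) * x**2."""
    return alpha + (0.5 - 2.0 * alpha**2) * x * x

def vmc_energy(alpha, n_steps=20000, step=1.0):
    """Variational Monte Carlo: Metropolis-sample |psi|^2 and average
    the local energy.  Fixed seed for reproducibility."""
    random.seed(4)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)
        # acceptance ratio |psi(x_new)/psi(x)|^2
        if random.random() < math.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        e_sum += local_energy(x, alpha)
    return e_sum / n_steps

e_exact = vmc_energy(0.5)   # alpha = 1/2 is the exact ground state, E = 1/2
e_off = vmc_energy(0.3)     # poorer trial wavefunction: higher energy
```

Analytically E(α) = α/2 + 1/(8α), so E(0.3) ≈ 0.567; the variational principle guarantees e_off ≥ e_exact up to statistical noise.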
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples, including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex, data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
Criticality Calculations with MCNP6 - Practical Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
2016-11-29
These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The lecture topics are: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile material vault, criticality accident alarm systems. After completion of this course, you should be able to: develop an input model for MCNP; describe how cross section data impact Monte Carlo and deterministic codes; describe the importance of validation of computer codes and how it is accomplished; describe the methodology supporting Monte Carlo codes and deterministic codes; describe pitfalls of Monte Carlo calculations; and discuss the strengths and weaknesses of Monte Carlo and discrete ordinates codes. The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present; in the context of these limitations, you should be able to identify a fissile system for which a diffusion theory solution would be adequate.
Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?
Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend
2011-10-11
In the waste recycling Monte Carlo (WRMC) algorithm, (1) multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well posed for implementation in parallel on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.
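The core waste-recycling idea, stripped of the GPU and multi-proposal machinery described above, is that a rejected trial state still carries statistical information: each step contributes both the trial and the current state to the estimator, weighted by the acceptance probability. A minimal single-trial sketch on a 1-D harmonic potential (an assumed toy target, not the paper's methane-zeolite system):

```python
import numpy as np

def waste_recycling_mc(n_steps=200_000, step=1.0, seed=1):
    """Metropolis sampling of pi(x) ~ exp(-x^2/2); the estimator of <x^2>
    'recycles' each trial state, weighted by its acceptance probability,
    instead of discarding it on rejection."""
    rng = np.random.default_rng(seed)
    x, acc_sum = 0.0, 0.0
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step)
        a = min(1.0, np.exp(0.5 * (x * x - y * y)))   # acceptance probability
        # Waste-recycling estimator: the trial and the current state both
        # contribute, weighted by a and (1 - a) respectively.
        acc_sum += a * y * y + (1.0 - a) * x * x
        if rng.random() < a:
            x = y
    return acc_sum / n_steps

est = waste_recycling_mc()   # <x^2> = 1 for the standard normal target
```

The estimator remains unbiased because the weights are exactly the transition probabilities of the Metropolis kernel; the GPU versions in the paper extend this to many simultaneous trial states.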
Mosaicing of airborne LiDAR bathymetry strips based on Monte Carlo matching
NASA Astrophysics Data System (ADS)
Yang, Fanlin; Su, Dianpeng; Zhang, Kai; Ma, Yue; Wang, Mingwei; Yang, Anxiu
2017-09-01
This study proposes a new methodology for mosaicing airborne light detection and ranging (LiDAR) bathymetry (ALB) data based on Monte Carlo matching. Various errors occur in ALB data due to imperfect system integration and other interference factors. To account for these errors, a Monte Carlo matching algorithm based on a nonlinear least-squares adjustment model is proposed. First, the raw data of strip overlap areas were filtered according to their relative drift of depths. Second, a Monte Carlo model and a nonlinear least-squares adjustment model were combined to obtain seven transformation parameters. Then, the multibeam bathymetric data were used to correct the initial strip during strip mosaicing. Finally, to evaluate the proposed method, the experimental results were compared with the results of the Iterative Closest Point (ICP) and three-dimensional Normal Distributions Transform (3D-NDT) algorithms. The results demonstrate that the algorithm proposed in this study is more robust and effective. When the quality of the raw data is poor, the Monte Carlo matching algorithm can still achieve centimeter-level accuracy for overlapping areas, which meets the accuracy of bathymetry required by IHO Standards for Hydrographic Surveys Special Publication No. 44.
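The combination of Monte Carlo sampling with a nonlinear least-squares adjustment can be illustrated on a reduced problem: recovering a 2-D rigid transform (rotation plus translation, three parameters instead of the paper's seven) between overlapping point sets by random multistart followed by Gauss-Newton refinement. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic overlap points and a 'true' rigid transform (theta, tx, ty).
src = rng.uniform(-5, 5, size=(40, 2))
theta_t, t_t = 0.3, np.array([1.5, -0.8])
R = lambda a: np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
dst = src @ R(theta_t).T + t_t + 0.01 * rng.standard_normal(src.shape)

def residual(params):
    a, tx, ty = params
    return (src @ R(a).T + [tx, ty] - dst).ravel()

def gauss_newton(params, iters=20):
    """Local nonlinear least-squares refinement of one random start."""
    for _ in range(iters):
        r = residual(params)
        # Numerical Jacobian (finite differences) is fine for 3 parameters.
        J = np.stack([(residual(params + 1e-6 * np.eye(3)[k]) - r) / 1e-6
                      for k in range(3)], axis=1)
        params = params - np.linalg.lstsq(J, r, rcond=None)[0]
    return params

# Monte Carlo multistart: random initial transforms, keep the best fit.
best = min((gauss_newton(np.array([rng.uniform(-np.pi, np.pi),
                                   *rng.uniform(-5, 5, 2)]))
            for _ in range(20)),
           key=lambda p: np.sum(residual(p) ** 2))
```

The random starts play the role of the Monte Carlo model (escaping poor local minima), while Gauss-Newton plays the role of the adjustment model; the paper's seven-parameter version follows the same division of labor.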
Wang, Lei; Troyer, Matthias
2014-09-12
We present a new algorithm for calculating the Renyi entanglement entropy of interacting fermions using the continuous-time quantum Monte Carlo method. The algorithm only samples the interaction correction of the entanglement entropy, which by design ensures the efficient calculation of weakly interacting systems. Combined with Monte Carlo reweighting, the algorithm also performs well for systems with strong interactions. We demonstrate the potential of this method by studying the quantum entanglement signatures of the charge-density-wave transition of interacting fermions on a square lattice.
Monte Carlo Simulation of a Segmented Detector for Low-Energy Electron Antineutrinos
NASA Astrophysics Data System (ADS)
Qomi, H. Akhtari; Safari, M. J.; Davani, F. Abbasi
2017-11-01
Detection of low-energy electron antineutrinos is important for several purposes, such as ex-vessel reactor monitoring and neutrino oscillation studies. Inverse beta decay (IBD) is the interaction responsible for the detection mechanism in (organic) plastic scintillation detectors. Here, a detailed study will be presented dealing with the radiation and optical transport simulation of a typical segmented antineutrino detector with the Monte Carlo method, using the MCNPX and FLUKA codes. This study shows different aspects of the detector, benefiting from the inherent capabilities of the Monte Carlo simulation codes.
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment, based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
SABRINA - An interactive geometry modeler for MCNP (Monte Carlo Neutron Photon)
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.; Murphy, J.
SABRINA is an interactive three-dimensional geometry modeler developed to produce complicated models for the Los Alamos Monte Carlo Neutron Photon program MCNP. SABRINA produces line drawings and color-shaded drawings for a wide variety of interactive graphics terminals. It is used as a geometry preprocessor in model development and as a Monte Carlo particle-track postprocessor in the visualization of complicated particle transport problems. SABRINA is written in Fortran 77 and is based on the Los Alamos Common Graphics System, CGS. 5 refs., 2 figs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Piburn, Jesse O; McManamay, Ryan A
2017-01-01
Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
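The framework itself is not reproduced here, but the general pattern it describes, using a dasymetric model to turn a known regional total plus ancillary weights into a per-cell input distribution for Monte Carlo simulation, can be sketched as follows. The land-cover weights and the per-capita demand model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Known county total, unknown per-cell counts; ancillary land-cover weights
# (hypothetical values) encode how likely a resident is to live in each cell.
county_total = 10_000
weights = np.array([0.0, 0.5, 2.0, 1.0, 0.5])  # e.g. water, rural, urban, ...
p = weights / weights.sum()

# The dasymetric model supplies a multinomial input distribution for the
# Monte Carlo run: each draw is one plausible per-cell population map.
n_draws = 1_000
cell_counts = rng.multinomial(county_total, p, size=n_draws)

# Propagate each sampled map through the output model (here a hypothetical
# per-capita demand of 0.3 units in cells 2-4) and collect the statistics.
demand = cell_counts[:, 2:].sum(axis=1) * 0.3
mean_demand = demand.mean()
```

The multinomial draw conserves the known total exactly while letting the unknown spatial disaggregation vary, which is the role the dasymetric model plays in the authors' framework.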
Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G
2006-01-01
The present work simulated the photon and electron transport in a Theratron 780 (MDS Nordion) (60)Co radiotherapy unit using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. With a view to the computational efficiency required for practical radiotherapy treatment planning, this work focuses mainly on the analysis of dose results and on the computing time required by the different tallies applied in the model to speed up calculations.
2016-04-01
[Report fragment] Monte Carlo study of hot-electron transport, noise, and energy relaxation in doped zinc-oxide and structured ZnO transistor materials with a two-dimensional electron gas (2DEG) channel subjected to strong fields. The surviving figure captions reference Monte Carlo data with the hot-phonon effect at electron gas densities of 1x10^17 cm^-3 and 1x10^18 cm^-3, including a simulation of density-dependent hot-electron energy relaxation (Figure 18).
Metis: A Pure Metropolis Markov Chain Monte Carlo Bayesian Inference Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bates, Cameron Russell; Mckigney, Edward Allen
The use of Bayesian inference in data analysis has become the standard for large scientific experiments [1, 2]. The Monte Carlo Codes Group (XCP-3) at Los Alamos has developed a simple set of algorithms, currently implemented in C++ and Python, to easily perform flat-prior Markov Chain Monte Carlo Bayesian inference with pure Metropolis sampling. These implementations are designed to be user friendly and extensible for customization based on specific application requirements. This document describes the algorithmic choices made and presents two use cases.
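Metis itself is the Los Alamos C++/Python library; the following is only a generic sketch of what flat-prior pure-Metropolis inference looks like, with the posterior proportional to the likelihood and a symmetric random-walk proposal. The Gaussian-mean example is an assumption for illustration, not one of the document's use cases.

```python
import numpy as np

def metropolis_flat_prior(log_like, x0, n_steps, step, seed=3):
    """Pure Metropolis with a flat (improper) prior: the posterior is
    proportional to the likelihood, so the accept test uses log_like only."""
    rng = np.random.default_rng(seed)
    x, ll = x0, log_like(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        y = x + step * rng.standard_normal()
        ll_y = log_like(y)
        if np.log(rng.random()) < ll_y - ll:   # symmetric proposal
            x, ll = y, ll_y
        chain[i] = x
    return chain

# Infer the mean of Gaussian data with known sigma = 1; the flat prior makes
# the posterior N(mean(data), 1/len(data)).
data = np.array([4.8, 5.1, 5.3, 4.9, 5.2])
loglike = lambda mu: -0.5 * np.sum((data - mu) ** 2)
chain = metropolis_flat_prior(loglike, x0=0.0, n_steps=50_000, step=0.8)
posterior_mean = chain[10_000:].mean()   # discard burn-in
```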
Proceedings of the Nuclear Criticality Technology Safety Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rene G. Sanchez
1998-04-01
This document contains summaries of most of the papers presented at the 1995 Nuclear Criticality Technology Safety Project (NCTSP) meeting, which was held May 16 and 17 at San Diego, Ca. The meeting was broken up into seven sessions, which covered the following topics: (1) Criticality Safety of Project Sapphire; (2) Relevant Experiments For Criticality Safety; (3) Interactions with the Former Soviet Union; (4) Misapplications and Limitations of Monte Carlo Methods Directed Toward Criticality Safety Analyses; (5) Monte Carlo Vulnerabilities of Execution and Interpretation; (6) Monte Carlo Vulnerabilities of Representation; and (7) Benchmark Comparisons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, J
Purpose: This study evaluated the efficiency of 4D lung radiation treatment planning using Monte Carlo simulation on the cloud. The EGSnrc Monte Carlo code was used for dose calculation on the 4D-CT image set. Methods: The 4D lung radiation treatment plan was created by the DOSCTP linked to the cloud, based on the Amazon Elastic Compute Cloud platform. Dose calculation was carried out by Monte Carlo simulation on the 4D-CT image set on the cloud, and results were sent to the FFD4D image deformation program for dose reconstruction. The dependence of the computing time for the treatment plan on the number of compute nodes was optimized with variations of the number of CT image sets in the breathing cycle and the dose reconstruction time of the FFD4D. Results: It is found that the dependence of computing time on the number of compute nodes was affected by the diminishing return of the number of nodes used in the Monte Carlo simulation. Moreover, the performance of the 4D treatment planning could be optimized by using fewer than 10 compute nodes on the cloud. The effects of the number of image sets and the dose reconstruction time on the dependence of computing time on the number of nodes were not significant when more than 15 compute nodes were used in the Monte Carlo simulations. Conclusion: The issue of long computing times in 4D treatment planning, which requires Monte Carlo dose calculations on all CT image sets in the breathing cycle, can be solved using cloud computing technology. It is concluded that the optimal number of compute nodes selected for simulation should be between 5 and 15, as the dependence of computing time on the number of nodes is significant in that range.
Liu, Y; Zheng, Y
2012-06-01
Accurate determination of the proton dosimetric effect of tissue heterogeneity is critical in proton therapy. Proton beams have a finite range, and consequently tissue heterogeneity plays a more critical role in proton therapy. The purpose of this study is to investigate the tissue heterogeneity effect in proton dosimetry based on anatomical Monte Carlo simulation using animal tissues. Animal tissues including a pig head and beef bulk were used in this study. Both the pig head and the beef were scanned using a GE CT scanner with 1.25 mm slice thickness. A treatment plan was created using the CMS XiO treatment planning system (TPS) with a single proton spread-out Bragg peak (SOBP) beam. Radiochromic films were placed at the distal falloff region. Image guidance was used to align the phantom before proton beams were delivered according to the treatment plan. The same two CT sets were converted to a Monte Carlo simulation model. The Monte Carlo simulated dose calculations with and without tissue composition were compared to TPS calculations and measurements. Based on the preliminary comparison, at the center of the SOBP plane, the Monte Carlo simulation dose without tissue composition agreed generally well with the TPS calculation. In the distal falloff region, the dose difference was large, and about a 2 mm isodose line shift was observed with the consideration of tissue composition. The detailed comparison of dose distributions between Monte Carlo simulation, TPS calculations and measurements is underway. Accurate proton dose calculations are challenging in proton treatment planning for heterogeneous tissues. Tissue heterogeneity and tissue composition may lead to isodose line shifts of up to a few millimeters in the distal falloff region. By simulating detailed particle transport and energy deposition, Monte Carlo simulations provide a verification method for proton dose calculation where inhomogeneous tissues are present. © 2012 American Association of Physicists in Medicine.
Monte Carlo verification of radiotherapy treatments with CloudMC.
Miras, Hector; Jiménez, Rubén; Perales, Álvaro; Terrón, José Antonio; Bertolet, Alejandro; Ortiz, Antonio; Macías, José
2018-06-27
A new implementation has been made on CloudMC, a cloud-based platform presented in a previous work, in order to provide services for radiotherapy treatment verification by means of Monte Carlo in a fast, easy and economical way. A description of the architecture of the application and the new developments implemented is presented together with the results of the tests carried out to validate its performance. CloudMC has been developed over Microsoft Azure cloud. It is based on a map/reduce implementation for Monte Carlo calculations distribution over a dynamic cluster of virtual machines in order to reduce calculation time. CloudMC has been updated with new methods to read and process the information related to radiotherapy treatment verification: CT image set, treatment plan, structures and dose distribution files in DICOM format. Some tests have been designed in order to determine, for the different tasks, the most suitable type of virtual machines from those available in Azure. Finally, the performance of Monte Carlo verification in CloudMC is studied through three real cases that involve different treatment techniques, linac models and Monte Carlo codes. Considering computational and economic factors, D1_v2 and G1 virtual machines were selected as the default type for the Worker Roles and the Reducer Role respectively. Calculation times up to 33 min and costs of 16 € were achieved for the verification cases presented when a statistical uncertainty below 2% (2σ) was required. The costs were reduced to 3-6 € when uncertainty requirements are relaxed to 4%. Advantages like high computational power, scalability, easy access and pay-per-usage model, make Monte Carlo cloud-based solutions, like the one presented in this work, an important step forward to solve the long-lived problem of truly introducing the Monte Carlo algorithms in the daily routine of the radiotherapy planning process.
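CloudMC's map/reduce pattern can be illustrated on a single machine: "worker roles" run independent Monte Carlo batches with distinct seeds, and a "reducer role" combines the tallies and estimates the statistical uncertainty. The pi-estimation workload and the thread pool below are stand-ins for the Azure virtual machines, not CloudMC's actual code.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def mc_worker(seed, n=100_000):
    """One 'worker role': estimate pi by rejection sampling with its own seed,
    so results from different workers are statistically independent."""
    rng = np.random.default_rng(seed)
    xy = rng.random((n, 2))
    hits = np.count_nonzero((xy ** 2).sum(axis=1) < 1.0)
    return 4.0 * hits / n

def reducer(partials):
    """The 'reducer role': combine equally sized tallies; the spread of the
    partial results gives the statistical uncertainty of the mean."""
    p = np.asarray(partials)
    return p.mean(), p.std(ddof=1) / np.sqrt(len(p))

# 'Map' the workers across a pool, then 'reduce' their tallies.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(mc_worker, range(8)))
pi_hat, sigma = reducer(partials)
```

Because the batches are independent, halving the target uncertainty simply means mapping four times as many worker batches, which is the scaling property the cloud platform exploits.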
Spin-driven structural effects in alkali doped (4)He clusters from quantum calculations.
Bovino, S; Coccia, E; Bodo, E; Lopez-Durán, D; Gianturco, F A
2009-06-14
In this paper, we carry out variational Monte Carlo and diffusion Monte Carlo (DMC) calculations for Li(2)((1)Sigma(g) (+))((4)He)(N) and Li(2)((3)Sigma(u) (+))((4)He)(N) with N up to 30 and discuss in detail the results of our computations. After a comparison between our DMC energies with the "exact" discrete variable representation values for the species with one (4)He, in order to test the quality of our computations at 0 K, we analyze the structural features of the whole range of doped clusters. We find that both species reside on the droplet surface, but that their orientation is spin driven, i.e., the singlet molecule is perpendicular and the triplet one is parallel to the droplet's surface. We have also computed quantum vibrational relaxation rates for both dimers in collision with a single (4)He and we find them to differ by orders of magnitude at the estimated surface temperature. Our results therefore confirm the findings from a great number of experimental data present in the current literature and provide one of the first attempts at giving an accurate, fully quantum picture for the nanoscopic properties of alkali dimers in (4)He clusters.
Diffuse characteristics study of laser target board using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Yang, Pengling; Wu, Yong; Wang, Zhenbao; Tao, Mengmeng; Wu, Junjie; Wang, Ping; Yan, Yan; Zhang, Lei; Feng, Gang; Zhu, Jinghui; Feng, Guobin
2013-05-01
In this paper, the Torrance-Sparrow and Oren-Nayar models are adopted to study the diffuse characteristics of a laser target board. The model, which is based on geometric optics, assumes that rough surfaces are made up of a series of symmetric V-groove cavities with different slopes at the microscopic level. The distribution of the slopes of the V-grooves is modeled with a Beckmann distribution function, and every microfacet of a V-groove cavity is assumed to behave like a perfect mirror, meaning the reflected ray follows the Fresnel law at the microfacet. The masking and shadowing effects of the rough surface are also taken into account through a geometric attenuation factor. The Monte Carlo method is used to simulate the diffuse reflectance distribution of laser target boards with different materials and processing technologies, and all the calculated results are verified by experiment. It is shown that the profile of the bidirectional reflectance distribution curve is lobe-shaped, with the maximum lying along the mirror reflection direction. The profile is narrower for lower roughness values and broader for higher roughness values. The refractive index of the target material also influences the intensity and distribution of the diffuse reflectance of the laser target surface.
Simulations of hypersonic, high-enthalpy separated flow over a 'tick' configuration
NASA Astrophysics Data System (ADS)
Moss, J. N.; O'Byrne, S.; Deepak, N. R.; Gai, S. L.
2012-11-01
The effect of slip is investigated in direct simulation Monte Carlo and Navier-Stokes-based computations of the separated flow between an expansion and a following compression surface, a geometry we call the 'tick' configuration. This configuration has been chosen as a test of separated flow with zero initial boundary layer thickness, a flowfield well suited to Chapman's analytical separated flow theories. The predicted size of the separated region is different for the two codes, although both codes meet their respective particle or grid resolution requirements. Unlike previous comparisons involving cylinder flares or double cones, the separation does not occur in a region of elevated density, and is therefore well suited to the direct simulation Monte Carlo method because the effect of slip at the surface is significant. The reasons for the difference between the two calculations are hypothesized to be a combination of significant rarefaction effects near the expansion surface and the non-zero radius of the leading edge. When the leading edge radius is accounted for, the rarefaction effect at the leading edge is less significant and the behavior of the flowfields predicted by the two methods becomes more similar.
Barragán, Patricia; Pérez de Tudela, Ricardo; Qu, Chen; Prosmiti, Rita; Bowman, Joel M
2013-07-14
Diffusion Monte Carlo (DMC) and path-integral Monte Carlo computations of the vibrational ground state and 10 K equilibrium state properties of the H7(+)/D7(+) cations are presented, using an ab initio full-dimensional potential energy surface. The DMC zero-point energies of the dissociated fragments H5(+)(D5(+)) + H2(D2) are also calculated, and from these results and the electronic dissociation energy, dissociation energies D0 of 752 ± 15 and 980 ± 14 cm(-1) are reported for H7(+) and D7(+), respectively. Due to the known error in the electronic dissociation energy of the potential surface, these quantities are underestimated by roughly 65 cm(-1). These values are rigorously determined for the first time and compared with previous theoretical estimates from electronic structure calculations using standard harmonic analysis, and with available experimental measurements. Probability density distributions are also computed for the ground vibrational and 10 K states of H7(+) and D7(+). These are qualitatively described as a central H3(+)/D3(+) core surrounded by "solvent" H2/D2 molecules that nearly freely rotate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byun, H. S.; Pirbadian, S.; Nakano, Aiichiro
2014-09-05
Microorganisms overcome the considerable hurdle of respiring extracellular solid substrates by deploying large multiheme cytochrome complexes that form 20 nanometer conduits to traffic electrons through the periplasm and across the cellular outer membrane. Here we report the first kinetic Monte Carlo simulations and single-molecule scanning tunneling microscopy (STM) measurements of the Shewanella oneidensis MR-1 outer membrane decaheme cytochrome MtrF, which can perform the final electron transfer step from cells to minerals and microbial fuel cell anodes. We find that the calculated electron transport rate through MtrF is consistent with previously reported in vitro measurements of the Shewanella Mtr complex, as well as in vivo respiration rates on electrode surfaces assuming a reasonable (experimentally verified) coverage of cytochromes on the cell surface. The simulations also reveal a rich phase diagram in the overall electron occupation density of the hemes as a function of electron injection and ejection rates. Single molecule tunneling spectroscopy confirms MtrF's ability to mediate electron transport between an STM tip and an underlying Au(111) surface, but at rates higher than expected from previously calculated heme-heme electron transfer rates for solvated molecules.
Thermodynamics of coupled protein adsorption and stability using hybrid Monte Carlo simulations.
Zhong, Ellen D; Shirts, Michael R
2014-05-06
A better understanding of changes in protein stability upon adsorption can improve the design of protein separation processes. In this study, we examine the coupling of the folding and the adsorption of a model protein, the B1 domain of streptococcal protein G, as a function of surface attraction, using a hybrid Monte Carlo (HMC) approach with temperature replica exchange and umbrella sampling. In our HMC implementation, we are able to use a molecular dynamics (MD) time step that is an order of magnitude larger than in a traditional MD simulation protocol, and we observe a factor of 2 enhancement in the folding and unfolding rate. To demonstrate the convergence of our systems, we measure the travel of our order parameter, the fraction of native contacts, between folded and unfolded states throughout the length of our simulations. Thermodynamic quantities are extracted with minimum statistical variance using multistate reweighting between simulations at different temperatures and harmonic distance restraints from the surface. The resultant free energies, enthalpies, and entropies of the coupled unfolding and adsorption processes are in qualitative agreement with previous experimental and computational observations, including entropic stabilization of the adsorbed, folded state relative to the bulk on surfaces with low attraction.
Simulation of Nuclear Reactor Kinetics by the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gomin, E. A.; Davidenko, V. D.; Zinchenko, A. S.; Kharchenko, I. K.
2017-12-01
The KIR computer code intended for calculations of nuclear reactor kinetics using the Monte Carlo method is described. The algorithm implemented in the code is described in detail. Some results of test calculations are given.
Off-diagonal expansion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Jiang, Yi-fan; Chen, Chang-shui; Liu, Xiao-mei; Liu, Rong-ting; Liu, Song-hao
2015-04-01
To explore the characteristics of light propagation along the Pericardium Meridian and its surrounding areas at the human wrist using optical experiments and the Monte Carlo method. An experiment was carried out to obtain the distribution of diffuse light on the Pericardium Meridian line and its surrounding areas at the wrist, and a simplified model based on the anatomical structure was then proposed to simulate light transport within the same area using the Monte Carlo method. The experimental results showed strong accordance with the Monte Carlo simulation: light propagation along the Pericardium Meridian had an advantage over its surrounding areas at the wrist. The advantage of light transport along the Pericardium Meridian line was related to the components and structure of the tissue, as well as the anatomical structure of the area through which the Pericardium Meridian line runs.
Paixão, Lucas; Oliveira, Bruno Beraldo; Viloria, Carolina; de Oliveira, Marcio Alves; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro
2015-01-01
Derive filtered tungsten X-ray spectra used in digital mammography systems by means of Monte Carlo simulations. Filtered spectra for a rhodium filter were obtained for tube potentials between 26 and 32 kV. The half-value layers (HVLs) of the simulated filtered spectra were compared with those obtained experimentally with a solid-state detector (Unfors model 8202031-H Xi R/F & MAM Detector Platinum and 8201023-C Xi Base unit Platinum Plus w mAs) in a Hologic Selenia Dimensions system using a direct radiography mode. Calculated HVL values showed good agreement with those obtained experimentally. The greatest relative difference between the Monte Carlo calculated and experimental HVL values was 4%. The results show that the filtered tungsten anode X-ray spectra and the EGSnrc Monte Carlo code can be used for mean glandular dose determination in mammography.
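The HVL comparison at the heart of this validation can be reproduced in miniature: for a polyenergetic spectrum, the HVL is the aluminum thickness that halves the air kerma, found here by bisection. The spectrum, attenuation coefficients, and kerma weights below are illustrative placeholders, not the tabulated data a real calculation would use.

```python
import numpy as np

# Hypothetical filtered spectrum: photon energies (keV), relative fluence,
# and illustrative Al linear attenuation coefficients mu (1/mm) -- real
# values would come from tabulated data (e.g. NIST), not from this sketch.
E     = np.array([15.0, 20.0, 25.0, 30.0])
phi   = np.array([0.10, 0.40, 0.35, 0.15])
mu_al = np.array([2.00, 0.90, 0.50, 0.30])
k_w   = E  # illustrative fluence-to-kerma weight, growing with energy

def kerma(t_mm):
    """Air kerma behind t_mm of Al for the polyenergetic beam."""
    return np.sum(phi * k_w * np.exp(-mu_al * t_mm))

# Bisection for the HVL: kerma(HVL) = 0.5 * kerma(0).
target = 0.5 * kerma(0.0)
lo, hi = 0.0, 20.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if kerma(mid) > target else (lo, mid)
hvl_mm = 0.5 * (lo + hi)
```

Note that for a polyenergetic beam the second half-value layer is thicker than the first (beam hardening), which is one reason HVL is a sensitive check on a simulated spectrum.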
Event-chain Monte Carlo algorithms for three- and many-particle interactions
NASA Astrophysics Data System (ADS)
Harland, J.; Michel, M.; Kampmann, T. A.; Kierfeld, J.
2017-02-01
We generalize the rejection-free event-chain Monte Carlo algorithm from many-particle systems with pairwise interactions to systems with arbitrary three- or many-particle interactions. We introduce generalized lifting probabilities between particles and obtain a general set of equations for lifting probabilities, the solution of which guarantees maximal global balance. We validate the resulting three-particle event-chain Monte Carlo algorithms on three different systems by comparison with conventional local Monte Carlo simulations: i) a test system of three particles with a three-particle interaction that depends on the enclosed triangle area; ii) a hard-needle system in two dimensions, where needle interactions constitute three-particle interactions of the needle end points; iii) a semiflexible polymer chain with a bending energy, which constitutes a three-particle interaction of neighboring chain beads. The examples demonstrate that the generalization to many-particle interactions broadens the applicability of event-chain algorithms considerably.
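For readers unfamiliar with the pairwise algorithm being generalized, here is a minimal event-chain Monte Carlo sketch for hard rods on a ring (an assumed toy system): a rod is pushed in one direction until it contacts its neighbor, at which point the remaining chain displacement "lifts" onto that neighbor. The paper's contribution extends this lifting to three- and many-particle interactions, which this sketch does not attempt.

```python
import numpy as np

def event_chain_rods(x, sigma, L, chain_length, rng):
    """One event chain: push rod i to the right; on contact with the next
    rod, the remaining displacement 'lifts' onto that rod (rejection-free)."""
    n = len(x)
    i = rng.integers(n)
    d = chain_length
    while d > 0.0:
        j = (i + 1) % n
        gap = (x[j] - x[i]) % L - sigma        # free space ahead of rod i
        if gap < 0.0:                          # guard against float round-off
            gap = 0.0
        step = min(d, gap)
        x[i] = (x[i] + step) % L
        d -= step
        if d > 0.0:                            # contact event: lift to rod j
            i = j
    return x

rng = np.random.default_rng(4)
n, sigma, L = 10, 0.5, 20.0
x = np.arange(n) * (L / n)                     # valid initial configuration
for _ in range(2000):
    x = event_chain_rods(x, sigma, L, chain_length=rng.exponential(2.0),
                         rng=rng)

# Every cyclic gap should remain non-negative (no overlaps created).
order = np.argsort(x)
gaps = np.diff(x[order], append=x[order][0] + L) - sigma
```

Because rods never pass each other, the chain visits particles in ring order; the generalized lifting probabilities in the paper decide the analogous hand-off when a move changes a three-particle energy.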
Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Dufek, Jan
2014-06-01
This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near-optimal.
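The scheme's two ingredients, a relaxation factor that decreases over iterations and a neutron-history count that grows, can be mimicked on a toy fixed-point problem. The "Monte Carlo feedback" below is a stand-in contraction toward a known power shape, with noise shrinking as 1/sqrt(histories); it is not a transport calculation.

```python
import numpy as np

rng = np.random.default_rng(5)
target = np.array([0.2, 0.3, 0.5])           # 'true' converged power shape

def mc_feedback(p, histories):
    """Stand-in for one Monte Carlo power iteration: a contraction toward
    the true shape plus statistical noise ~ 1/sqrt(histories)."""
    noise = rng.standard_normal(3) / np.sqrt(histories)
    return 0.5 * p + 0.5 * target + noise

p = np.full(3, 1.0 / 3.0)                    # flat initial guess
m = 1_000                                    # initial neutron histories
for n in range(1, 21):
    alpha = 1.0 / n                          # decreasing relaxation factor
    p = (1.0 - alpha) * p + alpha * mc_feedback(p, m)
    m += 1_000                               # growing histories per iteration
err = np.abs(p - target).max()
```

With alpha_n = 1/n the iterate is effectively an average of all past Monte Carlo results, so early, noisy iterations are cheap while later, heavily weighted ones are accurate; this is the intuition behind the stochastic iteration method the paper optimizes.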
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Degroh, Kim K.; Sechkar, Edward A.
1992-01-01
Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) will assist in understanding the mechanisms involved, and will lead to improved reliability in predicting in-space durability of materials based on ground laboratory testing. A computational simulation of atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of assumed mechanistic behavior of atomic oxygen and results of both ground laboratory and LDEF data, a predictive Monte Carlo model was developed which simulates the oxidation processes that occur on polymers with applied protective coatings that have defects. The use of high atomic oxygen fluence-directed ram LDEF results has enabled mechanistic implications to be made by adjusting Monte Carlo modeling assumptions to match observed results based on scanning electron microscopy. Modeling assumptions, implications, and predictions are presented, along with comparison of observed ground laboratory and LDEF results.
Monte Carlo-based Reconstruction in Water Cherenkov Detectors using Chroma
NASA Astrophysics Data System (ADS)
Seibert, Stanley; Latorre, Anthony
2012-03-01
We demonstrate the feasibility of event reconstruction---including position, direction, energy and particle identification---in water Cherenkov detectors with a purely Monte Carlo-based method. Using a fast optical Monte Carlo package we have written, called Chroma, in combination with several variance reduction techniques, we can estimate the value of a likelihood function for an arbitrary event hypothesis. The likelihood can then be maximized over the parameter space of interest using a form of gradient descent designed for stochastic functions. Although slower than more traditional reconstruction algorithms, this completely Monte Carlo-based technique is universal and can be applied to a detector of any size or shape, which is a major advantage during the design phase of an experiment. As a specific example, we focus on reconstruction results from a simulation of the 200 kiloton water Cherenkov far detector option for LBNE.
NASA Astrophysics Data System (ADS)
Gbedo, Yémalin Gabin; Mangin-Brinet, Mariane
2017-07-01
We present a new procedure to determine parton distribution functions (PDFs), based on Markov chain Monte Carlo (MCMC) methods. The aim of this paper is to show that we can replace the standard χ2 minimization by procedures grounded in statistical methods, and in Bayesian inference in particular, thus offering additional insight into the rich field of PDF determination. After a basic introduction to these techniques, we introduce the algorithm we have chosen to implement, namely Hybrid (or Hamiltonian) Monte Carlo. This algorithm, initially developed for lattice QCD, turns out to be very interesting when applied to PDF determination by global analyses; we show that it allows us to circumvent the difficulties due to the high dimensionality of the problem, in particular concerning the acceptance rate. A first feasibility study is performed and presented, which indicates that Markov chain Monte Carlo can successfully be applied to the extraction of PDFs and of their uncertainties.
Geometry and Dynamics for Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Barp, Alessandro; Briol, François-Xavier; Kennedy, Anthony D.; Girolami, Mark
2018-03-01
Markov chain Monte Carlo methods have revolutionised mathematical computation and enabled statistical inference within many previously intractable models. In this context, Hamiltonian dynamics has been proposed as an efficient way of building chains that explore probability densities efficiently. The method emerges from physics and geometry, and these links have been studied extensively by a series of authors over the last thirty years. However, there is currently a gap between the intuitions of users of the methodology and a deep understanding of these theoretical foundations. The aim of this review is to provide a comprehensive introduction to the geometric tools used in Hamiltonian Monte Carlo at a level accessible to statisticians, machine learners and other users of the methodology with only a basic understanding of Monte Carlo methods. This is complemented with some discussion of the most recent advances in the field, which we believe will become increasingly relevant to applied scientists.
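The Hamiltonian Monte Carlo method discussed in the two abstracts above can be sketched in a few lines for a one-dimensional standard normal target. The step size, trajectory length, and target are illustrative choices, not parameters from either paper:

```python
import math
import random

random.seed(1)

def grad_neg_log_p(q):
    # Target: standard normal, so -log p(q) = q**2 / 2 and the gradient is q.
    return q

def hmc_step(q, eps=0.2, n_leapfrog=15):
    """One Hamiltonian Monte Carlo update: resample momentum, integrate
    Hamilton's equations with leapfrog, then Metropolis accept/reject."""
    p = random.gauss(0.0, 1.0)                      # fresh momentum
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_neg_log_p(q_new)      # initial half step
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new
        p_new -= eps * grad_neg_log_p(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_neg_log_p(q_new)      # final half step
    h_old = 0.5 * p * p + 0.5 * q * q               # Hamiltonian before
    h_new = 0.5 * p_new * p_new + 0.5 * q_new * q_new
    if random.random() < math.exp(min(0.0, h_old - h_new)):
        return q_new                                # accept
    return q                                        # reject: stay put

samples = []
q = 3.0                       # deliberately start far from the mode
for i in range(3000):
    q = hmc_step(q)
    if i >= 500:              # discard burn-in
        samples.append(q)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Because leapfrog nearly conserves the Hamiltonian, most proposals are accepted even for long trajectories, which is what gives HMC its edge over random-walk Metropolis in high dimensions.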
Monte Carlo tests of the ELIPGRID-PC algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, J.R.
1995-04-01
The standard tool for calculating the probability of detecting pockets of contamination, called hot spots, has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
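The Monte Carlo side of a validation like the one above is straightforward to sketch: place an elliptical hot spot at a random position and orientation, and count how often a regular sampling grid hits it. This is a minimal illustration of the idea, not the ELIPGRID algorithm itself, and the cell-corner shortcut below is an assumption valid only for spots smaller than the grid spacing:

```python
import math
import random

random.seed(2)

def detection_probability(semi_major, semi_minor, spacing, n_trials=20000):
    """Monte Carlo estimate of the probability that a square sampling
    grid detects an elliptical hot spot with random centre and tilt.
    Only the four grid nodes around the centre's cell are checked, so
    the estimate is valid for spots smaller than the grid spacing."""
    hits = 0
    for _ in range(n_trials):
        cx = random.uniform(0.0, spacing)   # centre within one cell
        cy = random.uniform(0.0, spacing)   # (enough, by periodicity)
        theta = random.uniform(0.0, math.pi)
        ct, st = math.cos(theta), math.sin(theta)
        detected = False
        for gx in (0.0, spacing):
            for gy in (0.0, spacing):
                u = (gx - cx) * ct + (gy - cy) * st    # rotate node into
                v = -(gx - cx) * st + (gy - cy) * ct   # the ellipse frame
                if (u / semi_major) ** 2 + (v / semi_minor) ** 2 <= 1.0:
                    detected = True
        if detected:
            hits += 1
    return hits / n_trials
```

For a circular spot of radius 0.3 on a unit grid the estimate should land near the exact area ratio pi * 0.3**2 ≈ 0.283, while a spot of radius 0.8 can never miss all four surrounding nodes.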
Off-diagonal expansion quantum Monte Carlo.
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Paixão, Lucas; Oliveira, Bruno Beraldo; Viloria, Carolina; de Oliveira, Marcio Alves; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro
2015-01-01
Objective: To derive filtered tungsten X-ray spectra used in digital mammography systems by means of Monte Carlo simulations. Materials and Methods: Filtered spectra for a rhodium filter were obtained for tube potentials between 26 and 32 kV. The half-value layers (HVL) of the simulated filtered spectra were compared with those obtained experimentally with a solid-state detector (Unfors model 8202031-H Xi R/F & MAM Detector Platinum and 8201023-C Xi Base Unit Platinum Plus w mAs) in a Hologic Selenia Dimensions system using direct radiography mode. Results: Calculated HVL values showed good agreement with those obtained experimentally. The greatest relative difference between the Monte Carlo-calculated and experimental HVL values was 4%. Conclusion: The results show that the filtered tungsten anode X-ray spectra and the EGSnrc Monte Carlo code can be used for mean glandular dose determination in mammography. PMID:26811553
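The HVL comparison above rests on a simple computation: for a polychromatic spectrum, find the filter thickness that halves the transmitted beam. A minimal sketch follows, using a made-up two-bin spectrum and made-up attenuation coefficients rather than real tungsten/rhodium data:

```python
import math

def transmitted_fraction(spectrum, mu, thickness):
    """Fraction of the beam transmitted through `thickness` cm of
    filter, for a spectrum given as {energy_keV: weight} and linear
    attenuation coefficients mu = {energy_keV: cm^-1}."""
    total = sum(spectrum.values())
    out = sum(w * math.exp(-mu[e] * thickness) for e, w in spectrum.items())
    return out / total

def half_value_layer(spectrum, mu, lo=0.0, hi=10.0, iters=60):
    """Bisect for the thickness that halves the transmitted beam
    (transmission is monotonically decreasing in thickness)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if transmitted_fraction(spectrum, mu, mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative two-bin spectrum; weights and mu values are invented.
spectrum = {15: 0.4, 25: 0.6}
mu = {15: 1.5, 25: 0.5}
hvl = half_value_layer(spectrum, mu)
```

Note that the HVL of a polychromatic beam is not the HVL of either bin alone: the soft component is filtered out first, so the effective beam hardens with depth.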
Two proposed convergence criteria for Monte Carlo solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Pederson, S.P.; Booth, T.E.
1992-01-01
The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf).
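The first proposed criterion, the relative variance of the variance, is easy to compute from the history scores. This sketch uses the commonly quoted estimator (fourth central moment over the squared second moment, minus 1/N); the example tallies are invented to show the contrast between a well-behaved and a poorly sampled result:

```python
def relative_vov(scores):
    """Relative variance of the variance (VOV) of a Monte Carlo tally,
    using the common estimator
        VOV = sum((x_i - xbar)**4) / (sum((x_i - xbar)**2))**2 - 1/N.
    Large values signal that rare, large history scores are not yet
    sampled well enough for a CLT-based confidence interval."""
    n = len(scores)
    xbar = sum(scores) / n
    s2 = sum((x - xbar) ** 2 for x in scores)
    s4 = sum((x - xbar) ** 4 for x in scores)
    return s4 / (s2 * s2) - 1.0 / n

# Well-behaved tally: history scores of comparable magnitude.
steady = [float(i % 7) for i in range(10000)]
# Poorly converged tally: one rare, huge history score dominates.
spiky = [1.0] * 9999 + [1000.0]
```

For `steady` the VOV is tiny; for `spiky` it approaches 1, meaning the variance estimate itself is dominated by a single history and the CLT interval cannot be trusted yet.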
Surface Segregation in Ternary Alloys
NASA Technical Reports Server (NTRS)
Good, Brian; Bozzolo, Guillermo H.; Abel, Phillip B.
2000-01-01
Surface segregation profiles of binary (Cu-Ni, Au-Ni, Cu-Au) and ternary (Cu-Au-Ni) alloys are determined via Monte Carlo-Metropolis computer simulations using the BFS method for alloys for the calculation of the energetics. The behavior of Cu or Au in Ni is contrasted with their behavior when both are present. The interaction between Cu and Au and its effect on the segregation profiles for Cu-Au-Ni alloys is discussed.
NASA Astrophysics Data System (ADS)
Spezi, Emiliano
2010-08-01
Sixty years after the paper 'The Monte Carlo method' by N Metropolis and S Ulam in The Journal of the American Statistical Association (Metropolis and Ulam 1949), use of the most accurate algorithm for computer modelling of radiotherapy linear accelerators, radiation detectors and three dimensional patient dose was discussed in Wales (UK). The Second European Workshop on Monte Carlo Treatment Planning (MCTP2009) was held at the National Museum of Wales in Cardiff. The event, organized by Velindre NHS Trust, Cardiff University and Cancer Research Wales, lasted two and a half days, during which leading experts and contributing authors presented and discussed the latest advances in the field of Monte Carlo treatment planning (MCTP). MCTP2009 was highly successful, judging from the number of participants which was in excess of 140. Of the attendees, 24% came from the UK, 46% from the rest of Europe, 12% from North America and 18% from the rest of the World. Fifty-three oral presentations and 24 posters were delivered in a total of 12 scientific sessions. MCTP2009 follows the success of previous similar initiatives (Verhaegen and Seuntjens 2005, Reynaert 2007, Verhaegen and Seuntjens 2008), and confirms the high level of interest in Monte Carlo technology for radiotherapy treatment planning. The 13 articles selected for this special section (following Physics in Medicine and Biology's usual rigorous peer-review procedure) give a good picture of the high quality of the work presented at MCTP2009. The book of abstracts can be downloaded from http://www.mctp2009.org. I wish to thank the IOP Medical Physics and Computational Physics Groups for their financial support, Elekta Ltd and Dosisoft for sponsoring MCTP2009, and leading manufacturers such as BrainLab, Nucletron and Varian for showcasing their latest MC-based radiotherapy solutions during a dedicated technical session. 
I am also very grateful to the eight invited speakers who kindly agreed to give keynote presentations, which contributed significantly to raising the quality of the event and capturing the interest of the medical physics community. I also wish to thank all those who contributed to the success of MCTP2009: the members of the local Organizing Committee and the Workshop Management Team, who managed the event very efficiently; the members of the European Working Group in Monte Carlo Treatment Planning (EWG-MCTP), who acted as Guest Associate Editors for the MCTP2009 abstract reviewing process; and all the authors who generated new, high-quality work. Finally, I hope that you find the contents of this special section enjoyable and informative.
Emiliano Spezi, Chairman of the MCTP2009 Organizing Committee and Guest Editor
References
Metropolis N and Ulam S 1949 The Monte Carlo method J. Amer. Stat. Assoc. 44 335-41
Reynaert N 2007 First European Workshop on Monte Carlo Treatment Planning J. Phys.: Conf. Ser. 74 011001
Verhaegen F and Seuntjens J 2005 International Workshop on Current Topics in Monte Carlo Treatment Planning Phys. Med. Biol. 50
Verhaegen F and Seuntjens J 2008 International Workshop on Monte Carlo Techniques in Radiotherapy Delivery and Verification J. Phys.: Conf. Ser. 102 011001
Height of a faceted macrostep for sticky steps in a step-faceting zone
NASA Astrophysics Data System (ADS)
Akutsu, Noriko
2018-02-01
The driving force dependence of the surface velocity and the average height of faceted merged steps, the terrace-surface slope, and the elementary step velocity are studied using the Monte Carlo method in the nonequilibrium steady state. The Monte Carlo study is based on a lattice model, the restricted solid-on-solid model with point-contact-type step-step attraction (p-RSOS model). The main focus of this paper is a change of the "kink density" on the vicinal surface. The temperature is selected to be in the step-faceting zone [N. Akutsu, AIP Adv. 6, 035301 (2016), 10.1063/1.4943400], where the vicinal surface is surrounded by the (001) terrace and the (111) faceted step at equilibrium. Long-time simulations are performed at this temperature to obtain steady states for the different driving forces that influence the growth/recession of the surface. A Wulff figure of the p-RSOS model is produced through the anomalous surface tension calculated using the density-matrix renormalization group method. The characteristics of the faceted macrostep profile at equilibrium are classified with respect to the connectivity of the surface tension. This surface tension connectivity also leads to a faceting diagram, where the separated areas are, respectively, classified as a Gruber-Mullins-Pokrovsky-Talapov zone, step droplet zone, and step-faceting zone. Although the p-RSOS model is a simplified model, it shows a wide variety of dynamics in the step-faceting zone. There are four characteristic driving forces: Δμ_y, Δμ_f, Δμ_co, and Δμ_R. When the absolute value of the driving force |Δμ| is smaller than Max[Δμ_y, Δμ_f], step attachment-detachment is inhibited, and the vicinal surface consists of (001) terraces and the (111) side surfaces of the faceted macrosteps. For Max[Δμ_y, Δμ_f] < |Δμ| < Δμ_co, the surface grows/recedes intermittently through two-dimensional (2D) heterogeneous nucleation at the facet edge of the macrostep.
For Δμ_co < |Δμ| < Δμ_R, the surface grows/recedes through the successive attachment-detachment of steps to/from a macrostep. When |Δμ| exceeds Δμ_R, the macrostep vanishes and the surface roughens kinetically. Classical 2D heterogeneous multinucleation was determined to be valid with slight modifications, based on the Monte Carlo results for the step velocity and the change in the surface slope of the "terrace." Finite-size effects were also determined to be distinctive near equilibrium.
Delving Into Dissipative Quantum Dynamics: From Approximate to Numerically Exact Approaches
NASA Astrophysics Data System (ADS)
Chen, Hsing-Ta
In this thesis, I explore dissipative quantum dynamics of several prototypical model systems via various approaches, ranging from approximate to numerically exact schemes. In particular, in the realm of the approximate I explore the accuracy of Pade-resummed master equations and the fewest switches surface hopping (FSSH) algorithm for the spin-boson model, and non-crossing approximations (NCA) for the Anderson-Holstein model. Next, I develop new and exact Monte Carlo approaches and test them on the spin-boson model. I propose well-defined criteria for assessing the accuracy of Pade-resummed quantum master equations, which correctly demarcate the regions of parameter space where the Pade approximation is reliable. I continue the investigation of spin-boson dynamics by benchmark comparisons of the semiclassical FSSH algorithm to exact dynamics over a wide range of parameters. Despite small deviations from golden-rule scaling in the Marcus regime, standard surface hopping algorithm is found to be accurate over a large portion of parameter space. The inclusion of decoherence corrections via the augmented FSSH algorithm improves the accuracy of dynamical behavior compared to exact simulations, but the effects are generally not dramatic for the cases I consider. Next, I introduce new methods for numerically exact real-time simulation based on real-time diagrammatic Quantum Monte Carlo (dQMC) and the inchworm algorithm. These methods optimally recycle Monte Carlo information from earlier times to greatly suppress the dynamical sign problem. In the context of the spin-boson model, I formulate the inchworm expansion in two distinct ways: the first with respect to an expansion in the system-bath coupling and the second as an expansion in the diabatic coupling. In addition, a cumulant version of the inchworm Monte Carlo method is motivated by the latter expansion, which allows for further suppression of the growth of the sign error. 
I provide a comprehensive comparison of the performance of the inchworm Monte Carlo algorithms to other exact methodologies as well as a discussion of the relative advantages and disadvantages of each. Finally, I investigate the dynamical interplay between the electron-electron interaction and the electron-phonon coupling within the Anderson-Holstein model via two complementary NCAs: the first is constructed around the weak-coupling limit and the second around the polaron limit. The influence of phonons on spectral and transport properties is explored in equilibrium, for non-equilibrium steady state and for transient dynamics after a quench. I find the two NCAs disagree in nontrivial ways, indicating that more reliable approaches to the problem are needed. The complementary frameworks used here pave the way for numerically exact methods based on inchworm dQMC algorithms capable of treating open systems simultaneously coupled to multiple fermionic and bosonic baths.
Hybrid Monte Carlo-Diffusion Method For Light Propagation in Tissue With a Low-Scattering Region
NASA Astrophysics Data System (ADS)
Hayashi, Toshiyuki; Kashio, Yoshihiko; Okada, Eiji
2003-06-01
The heterogeneity of the tissues in a head, especially the low-scattering cerebrospinal fluid (CSF) layer surrounding the brain, has previously been shown to strongly affect light propagation in the brain. The radiosity-diffusion method, in which the light propagation in the CSF layer is assumed to obey the radiosity theory, has been employed to predict the light propagation in head models. Although the CSF layer is assumed to be a nonscattering region in the radiosity-diffusion method, fine arachnoid trabeculae cause faint scattering in the CSF layer in real heads. A novel approach, the hybrid Monte Carlo-diffusion method, is proposed to calculate the head models, including the low-scattering region in which the light propagation obeys neither the diffusion approximation nor the radiosity theory. The light propagation in the high-scattering region is calculated by means of the diffusion approximation solved by the finite-element method, and that in the low-scattering region is predicted by the Monte Carlo method. The intensity and mean time of flight of the detected light for the head model with a low-scattering CSF layer calculated by the hybrid method agreed well with those by the Monte Carlo method, whereas the results calculated by means of the diffusion approximation included considerable error caused by the effect of the CSF layer. In the hybrid method, the time-consuming Monte Carlo calculation is employed only for the thin CSF layer, and hence, the computation time of the hybrid method is dramatically shorter than that of the Monte Carlo method.
NASA Astrophysics Data System (ADS)
Guan, Fada
The Monte Carlo method has been successfully applied to simulating particle transport problems. Most Monte Carlo simulation tools are static, and can only perform static simulations for problems with fixed physics and geometry settings. Proton therapy, however, is a dynamic treatment technique in clinical application. In this research, we developed a method to perform dynamic Monte Carlo simulation of proton therapy using the Geant4 simulation toolkit. A passive-scattering treatment nozzle equipped with a rotating range modulation wheel was modeled. One important application of the Monte Carlo simulation is to predict the spatial dose distribution in the target geometry. For simplification, a mathematical model of a human body is usually used as the target, but only the average dose over a whole organ or tissue can then be obtained, rather than an accurate spatial dose distribution. In this research, we developed a method using MATLAB to convert the medical images of a patient from CT scanning into a patient voxel geometry. Hence, if the patient voxel geometry is used as the target in the Monte Carlo simulation, the accurate spatial dose distribution in the target can be obtained. The data analysis tool ROOT was used to score the simulation results during a Geant4 simulation and to analyze and plot the results afterwards. Finally, we successfully obtained the accurate spatial dose distribution in part of a human body for a prostate cancer patient treated with proton therapy.
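The CT-to-voxel-geometry conversion described above boils down to mapping each voxel's Hounsfield unit to a material and mass density. The sketch below illustrates that mapping with round-number thresholds and densities that are assumptions for illustration, not the author's MATLAB calibration:

```python
def hu_to_voxels(hu_values):
    """Map CT Hounsfield units to (material, density in g/cm^3) pairs
    for a Monte Carlo voxel geometry. Thresholds and densities here
    are illustrative round numbers, not a clinical calibration curve."""
    voxels = []
    for hu in hu_values:
        if hu < -400:
            voxels.append(("lung", 0.26))
        elif hu < 100:
            # Near water: density scales roughly linearly with HU.
            voxels.append(("soft_tissue", 1.0 + hu / 1000.0))
        else:
            voxels.append(("bone", 1.1 + hu / 1000.0))
    return voxels

# One row of an axial CT slice (made-up HU values).
row = hu_to_voxels([-700, -20, 40, 300, 900])
```

A production pipeline would use a scanner-specific HU-to-density calibration and many more material bins, but the per-voxel lookup structure is the same.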
Poster — Thur Eve — 14: Improving Tissue Segmentation for Monte Carlo Dose Calculation using DECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Salvio, A.; Bedwani, S.; Carrier, J-F.
2014-08-15
Purpose: To improve Monte Carlo dose calculation accuracy through a new tissue segmentation technique with dual energy CT (DECT). Methods: Electron density (ED) and effective atomic number (EAN) can be extracted directly from DECT data with a stoichiometric calibration method. Images are acquired with Monte Carlo CT projections using the user code egs-cbct and reconstructed using an FDK backprojection algorithm. Calibration is performed using projections of a numerical RMI phantom. A weighted parameter algorithm then uses both EAN and ED to assign materials to voxels from DECT simulated images. This new method is compared to a standard tissue characterization from single energy CT (SECT) data using a segmented calibrated Hounsfield unit (HU) to ED curve. Both methods are compared to the reference numerical head phantom. Monte Carlo simulations on uniform phantoms of different tissues using dosxyz-nrc show discrepancies in depth-dose distributions. Results: Both SECT and DECT segmentation methods show similar performance assigning soft tissues. Performance is however improved with DECT in regions of higher density, such as bone, where it assigns materials correctly 8% more often than segmentation with SECT, considering the same set of tissues and simulated clinical CT images, i.e. including noise and reconstruction artifacts. Furthermore, Monte Carlo results indicate that kV photon beam depth-dose distributions can double between two tissues of density higher than muscle. Conclusions: A direct acquisition of ED and the added information of EAN with DECT data improve tissue segmentation and increase the accuracy of Monte Carlo dose calculation in kV photon beams.
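The "weighted parameter algorithm" above assigns each voxel to the reference tissue closest in the (ED, EAN) plane. A minimal sketch of that idea follows; the weights, function name, and reference tissue values are assumptions for illustration, not the authors' calibration:

```python
def assign_material(ed, ean, tissues, w_ed=1.0, w_ean=0.1):
    """Choose the reference tissue minimizing a weighted squared
    distance in (electron density, effective atomic number) space,
    mimicking a two-parameter DECT segmentation."""
    def score(entry):
        _, ref_ed, ref_ean = entry
        return w_ed * (ed - ref_ed) ** 2 + w_ean * (ean - ref_ean) ** 2
    return min(tissues, key=score)[0]

# (name, relative electron density, effective atomic number);
# illustrative reference values, not measured data.
TISSUES = [
    ("lung",          0.26,  7.6),
    ("adipose",       0.95,  6.4),
    ("muscle",        1.04,  7.6),
    ("cortical_bone", 1.78, 13.6),
]
```

The advantage over a single HU-to-ED curve is visible for bone-like voxels: two tissues with similar ED but different EAN become separable once the second coordinate enters the distance.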
Validation of OSLD and a treatment planning system for surface dose determination in IMRT treatments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhuang, Audrey H., E-mail: hzhuang@usc.edu; Olch, Arthur J.
2014-08-15
Purpose: To evaluate the accuracy of skin dose determination for composite multibeam 3D conformal radiation therapy (3DCRT) and intensity modulated radiation therapy (IMRT) treatments using optically stimulated luminescent dosimeters (OSLDs) and the Eclipse treatment planning system. Methods: Surface doses measured by OSLDs in the buildup region for open field 6 MV beams, either perpendicular or oblique to the surface, were evaluated by comparing against doses measured by a Markus parallel plate (PP) chamber and surface diodes, and doses calculated by Monte Carlo simulations. The accuracy of percent depth dose (PDD) calculation in the buildup region from the authors' Eclipse system (Version 10), which was precisely commissioned in the buildup region and was used with a 1 mm calculation grid, was also evaluated by comparing to PP chamber measurements and Monte Carlo simulations. Finally, an anthropomorphic pelvic phantom was CT scanned with OSLDs in place at three locations. A planning target volume (PTV) was defined that extended close to the surface. Both an 8-beam 3DCRT plan and an IMRT plan were generated in Eclipse. OSLDs were placed at the CT-scanned reference locations to measure the skin doses and were compared to diode measurements and Eclipse calculations. Efforts were made to ensure that the dose comparison was done at the effective measurement points of each detector and corresponding locations in CT images. Results: The depth of the effective measurement point is 0.8 mm for OSLD when used in the buildup region in a 6 MV beam and is 0.7 mm for the authors' surface diode. OSLDs and the Eclipse system both agree well with Monte Carlo and/or the Markus PP ion chamber and/or diode in buildup regions in 6 MV beams with normal or oblique incidence and across different field sizes. For the multibeam 3DCRT and IMRT plans, the differences between OSLDs and Eclipse calculations on the surface of the anthropomorphic phantom were within 3%, with distance-to-agreement less than 0.3 mm.
Conclusions: The authors' experiment showed that OSLD is an accurate dosimeter for skin dose measurements in complex 3DCRT or IMRT plans. It also showed that an Eclipse system with accurate commissioning of the data in the buildup region and a 1 mm calculation grid can calculate surface doses with high accuracy and has the potential to replace in vivo measurements.
GE781: a Monte Carlo package for fixed target experiments
NASA Astrophysics Data System (ADS)
Davidenko, G.; Funk, M. A.; Kim, V.; Kuropatkin, N.; Kurshetsov, V.; Molchanov, V.; Rud, S.; Stutte, L.; Verebryusov, V.; Zukanovich Funchal, R.
The Monte Carlo package for the fixed target experiment B781 at Fermilab, a third generation charmed baryon experiment, is described. This package is based on GEANT 3.21, ADAMO database and DAFT input/output routines.
Improved Monte Carlo Renormalization Group Method
DOE R&D Accomplishments Database
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
We describe an extensive program, being undertaken at LANL and Cornell, to analyze critical systems using an Improved Monte Carlo Renormalization Group (IMCRG) method. Here we first briefly review the method and then list some of the topics being investigated.
Monte Carlo calculations of k_Q, the beam quality conversion factor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muir, B. R.; Rogers, D. W. O.
2010-11-15
Purpose: To use EGSnrc Monte Carlo simulations to directly calculate beam quality conversion factors, k_Q, for 32 cylindrical ionization chambers over a range of beam qualities, and to quantify the effect of systematic uncertainties on Monte Carlo calculations of k_Q. These factors are required to use the TG-51 or TRS-398 clinical dosimetry protocols for calibrating external radiotherapy beams. Methods: Ionization chambers are modeled either from blueprints or from manufacturers' user manuals. The dose to air in the chamber is calculated using the EGSnrc user code egs_chamber with 11 different tabulated clinical photon spectra for the incident beams. The dose to a small volume of water is also calculated in the absence of the chamber at the midpoint of the chamber on its central axis. Using a simple equation, k_Q is calculated from these quantities under the assumption that W/e is constant with energy, and compared to TG-51 protocol and measured values. Results: Polynomial fits to the Monte Carlo calculated k_Q factors as a function of beam quality, expressed as %dd(10)x and TPR20,10, are given for each ionization chamber. Differences are explained between Monte Carlo calculated values and values from the TG-51 protocol or calculated using the computer program used for TG-51 calculations. Systematic uncertainties in calculated k_Q values are analyzed and amount to a maximum one-standard-deviation uncertainty of 0.99% if one assumes that photon cross-section uncertainties are uncorrelated, and 0.63% if they are assumed correlated. The largest components of the uncertainty are the constancy of W/e and the uncertainty in the cross section for photons in water. Conclusions: It is now possible to calculate k_Q directly using Monte Carlo simulations. Monte Carlo calculations for most ionization chambers give results which are comparable to TG-51 values.
Discrepancies can be explained using individual Monte Carlo calculations of various correction factors which are more accurate than previously used values. For small ionization chambers with central electrodes composed of high-Z materials, the effect of the central electrode is much larger than that for the aluminum electrodes in Farmer chambers.
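The "simple equation" referred to above combines the four Monte Carlo dose calculations into a ratio of ratios. A sketch follows; the dose values plugged in are illustrative placeholders, not published chamber data:

```python
def beam_quality_factor(dw_q, dch_q, dw_q0, dch_q0):
    """k_Q from Monte Carlo doses, assuming W/e is constant with energy:
        k_Q = (D_water / D_chamber)|_Q / (D_water / D_chamber)|_Q0,
    where Q is the user beam quality and Q0 the Co-60 reference quality."""
    return (dw_q / dch_q) / (dw_q0 / dch_q0)

# Illustrative doses (arbitrary units): water and chamber cavity dose
# at quality Q, then at the Co-60 reference quality Q0.
k_q = beam_quality_factor(dw_q=0.950, dch_q=0.970,
                          dw_q0=1.000, dch_q0=1.010)
```

Because only ratios enter, overall normalization of the Monte Carlo source cancels; what matters is how the water-to-cavity dose ratio changes between the two beam qualities.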
Next-nearest-neighbor sites and the reactivity of the CO + NO surface reaction
NASA Astrophysics Data System (ADS)
Cortés, Joaquín; Valencia, Eliana
1998-04-01
Using Monte Carlo experiments on the reduction of NO by CO, a study is made of the effect on reactivity of the formation of N2O and of the increased coordination of the sites when next-nearest-neighbor (nnn) sites are considered in a square lattice of surface sites.
Teich-McGoldrick, Stephanie L.; Greathouse, Jeffery A.; Jove-Colon, Carlos F.; ...
2015-08-27
The swelling properties of smectite clay minerals are relevant to many engineering applications, including environmental remediation, repository design for nuclear waste disposal, borehole stability in drilling operations, and additives for numerous industrial processes and commercial products. We used molecular dynamics and grand canonical Monte Carlo simulations to study the effects of layer charge location, interlayer cation, and temperature on intracrystalline swelling of montmorillonite and beidellite clay minerals. For a beidellite model with layer charge exclusively in the tetrahedral sheet, strong ion-surface interactions shift the onset of the two-layer hydrate to higher water contents. In contrast, for a montmorillonite model with layer charge exclusively in the octahedral sheet, weaker ion-surface interactions result in the formation of fully hydrated ions (two-layer hydrate) at much lower water contents. Clay hydration enthalpies and interlayer atomic density profiles are consistent with the swelling results. Water adsorption isotherms from grand canonical Monte Carlo simulations are used to relate interlayer hydration states to relative humidity, in good agreement with experimental findings.
NASA Astrophysics Data System (ADS)
Wang, Xiaoyu; Schattner, Yoni; Berg, Erez; Fernandes, Rafael
The maximum transition temperature Tc observed in the phase diagrams of several unconventional superconductors takes place in the vicinity of a putative antiferromagnetic quantum critical point. This observation motivated the theoretical proposal that superconductivity in these systems may be driven by quantum critical fluctuations, which in turn can also promote non-Fermi liquid behavior. In this talk, we present a combined analytical and sign-problem-free Quantum Monte Carlo investigation of the spin-fermion model - a widely studied low-energy model for the interplay between superconductivity and magnetic fluctuations. By engineering a series of band dispersions that interpolate between near-nested and open Fermi surfaces, and by also varying the strength of the spin-fermion interaction, we find that the hot spots of the Fermi surface provide the dominant contribution to the pairing instability in this model. We show that the analytical expressions for Tc and for the pairing susceptibility, obtained within a large-N Eliashberg approximation to the spin-fermion model, agree well with the Quantum Monte Carlo data, even in the regime of interactions comparable to the electronic bandwidth. DE-SC0012336.
Hongo, Kenta; Cuong, Nguyen Thanh; Maezono, Ryo
2013-02-12
We report fixed-node diffusion Monte Carlo (DMC) calculations of stacking interaction energy between two adenine(A)-thymine(T) base pairs in B-DNA (AA:TT), for which reference data are available, obtained from a complete basis set estimate of CCSD(T) (coupled-cluster with singles, doubles, and perturbative triples). We consider four sets of nodal surfaces obtained from self-consistent field calculations and examine how the different nodal surfaces affect the DMC potential energy curves of the AA:TT molecule and the resulting stacking energies. We find that the DMC potential energy curves using the different nodes look similar to each other as a whole. We also benchmark the performance of various quantum chemistry methods, including Hartree-Fock (HF) theory, second-order Møller-Plesset perturbation theory (MP2), and density functional theory (DFT). The DMC and recently developed DFT results of the stacking energy reasonably agree with the reference, while the HF, MP2, and conventional DFT methods give unsatisfactory results.
"First-principles" kinetic Monte Carlo simulations revisited: CO oxidation over RuO2 (110).
Hess, Franziska; Farkas, Attila; Seitsonen, Ari P; Over, Herbert
2012-03-15
First principles-based kinetic Monte Carlo (kMC) simulations are performed for the CO oxidation on RuO(2) (110) under steady-state reaction conditions. The simulations include a set of elementary reaction steps with activation energies taken from three different ab initio density functional theory studies. Critical comparison of the simulation results reveals that already small variations in the activation energies lead to distinctly different reaction scenarios on the surface, even to the point where the dominating elementary reaction step is substituted by another one. For a critical assessment of the chosen energy parameters, it is not sufficient to compare kMC simulations only to the experimental turnover frequency (TOF) as a function of the reactant feed ratio. More appropriate benchmarks for kMC simulations are the actual distribution of reactants on the catalyst's surface during steady-state reaction, as determined by in situ infrared spectroscopy and in situ scanning tunneling microscopy, and the temperature dependence of the TOF in the form of Arrhenius plots. Copyright © 2012 Wiley Periodicals, Inc.
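The Arrhenius-plot benchmark mentioned above amounts to a linear fit of ln(TOF) against 1/T. A minimal sketch (with synthetic, hypothetical rate data, not values from the paper) extracting an apparent activation energy is:

```python
import math

def apparent_activation_energy(temps_k, tofs):
    """Least-squares fit of ln(TOF) vs. 1/T (Arrhenius plot).
    The slope equals -Ea/kB, so Ea = -slope * kB."""
    k_b = 8.617333262e-5  # Boltzmann constant in eV/K
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(v) for v in tofs]
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    return -slope * k_b  # apparent activation energy in eV

# Synthetic TOF data obeying TOF = A * exp(-Ea / kB T) with an assumed Ea = 0.8 eV
temps = [450.0, 500.0, 550.0, 600.0]
tofs = [1e13 * math.exp(-0.8 / (8.617333262e-5 * t)) for t in temps]
ea = apparent_activation_energy(temps, tofs)
```

For exactly Arrhenius data the fit recovers the assumed barrier; with simulated TOFs the fitted slope gives the apparent barrier to compare against experiment.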
A global reaction route mapping-based kinetic Monte Carlo algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Izaac; Page, Alister J., E-mail: alister.page@newcastle.edu.au; Irle, Stephan, E-mail: sirle@chem.nagoya-u.ac.jp
2016-07-14
We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.
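The rate-evaluation and accept/reject steps described above follow the standard rejection-free KMC recipe; a minimal sketch (with illustrative prefactors and barriers, not the GRRM-KMC implementation itself) is:

```python
import math
import random

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def htst_rate(prefactor_hz, barrier_ev, temp_k):
    """Harmonic transition-state theory rate: k = nu * exp(-Ea / kB T)."""
    return prefactor_hz * math.exp(-barrier_ev / (K_B * temp_k))

def kmc_step(rates, rng):
    """Select one pathway with probability k_i / sum(k), and draw the
    first-order (exponential) waiting time dt = -ln(u) / sum(k)."""
    total = sum(rates)
    r = rng.random() * total
    cum = 0.0
    chosen = len(rates) - 1
    for i, k in enumerate(rates):
        cum += k
        if r < cum:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt

rng = random.Random(42)
rates = [htst_rate(1e13, b, 300.0) for b in (0.4, 0.5, 0.7)]  # three candidate pathways
step, dt = kmc_step(rates, rng)
```

In an on-the-fly scheme like GRRM-KMC the `rates` list would be rebuilt at every equilibrium state from the transition states found by the surface search.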
Monte Carlo study of magnetic nanoparticles adsorbed on halloysite Al2Si2O5(OH)4 nanotubes
NASA Astrophysics Data System (ADS)
Sotnikov, O. M.; Mazurenko, V. V.; Katanin, A. A.
2017-12-01
We study the properties of magnetic nanoparticles adsorbed on the halloysite surface. To this end, a distinct magnetic Hamiltonian with a random distribution of spins on a cylindrical surface was solved using a nonequilibrium Monte Carlo method. The parameters for our simulations (the anisotropy constant, nanoparticle size distribution, saturation magnetization, and geometrical characteristics of the halloysite template) were taken from recent experiments. We calculate the hysteresis loops and the temperature dependence of the zero-field-cooling (ZFC) susceptibility, the maximum of which determines the blocking temperature. It is shown that the dipole-dipole interaction between nanoparticles moderately increases the blocking temperature and weakly increases the coercive force. The obtained hysteresis loops (e.g., the value of the coercive force) for Ni nanoparticles are in reasonable agreement with the experimental data. We also discuss the sensitivity of the hysteresis loops and ZFC susceptibilities to changes in the anisotropy and dipole-dipole interaction, as well as the 3d-shell occupation of the metallic nanoparticles; in particular, we predict a larger coercive force for Fe than for Ni nanoparticles.
Entropic Repulsion Between Fluctuating Surfaces
NASA Astrophysics Data System (ADS)
Janke, W.
The statistical mechanics of fluctuating surfaces plays an important role in a variety of physical systems, ranging from biological membranes to world sheets of strings in theories of fundamental interactions. In many applications it is a good approximation to assume that the surfaces possess no tension. Their statistical properties are then governed by curvature energies only, which allow for gigantic out-of-plane undulations. These fluctuations are the “entropic” origin of long-range repulsive forces in layered surface systems. Theoretical estimates of these forces for simple model surfaces are surveyed and compared with recent Monte Carlo simulations.
Monte Carlo Particle Lists: MCPL
NASA Astrophysics Data System (ADS)
Kittelmann, T.; Klinkby, E.; Knudsen, E. B.; Willendrup, P.; Cai, X. X.; Kanaki, K.
2017-09-01
A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
OBJECT KINETIC MONTE CARLO SIMULATIONS OF CASCADE ANNEALING IN TUNGSTEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.
2014-03-31
The objective of this work is to study the annealing of primary cascade damage created by primary knock-on atoms (PKAs) of various energies, at various temperatures in bulk tungsten using the object kinetic Monte Carlo (OKMC) method.
Quantum interference and Monte Carlo simulations of multiparticle production
NASA Astrophysics Data System (ADS)
Bialas, A.; Krzywicki, A.
1995-02-01
We show that the effects of quantum interference can be implemented in Monte Carlo generators by modelling the generalized Wigner functions. A specific prescription for an appropriate modification of the weights of events produced by standard generators is proposed.
Scalable Domain Decomposed Monte Carlo Particle Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Matthew Joseph
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
ERIC Educational Resources Information Center
Houser, Larry L.
1981-01-01
Monte Carlo methods are used to simulate activities in baseball such as a team's "hot streak" and a hitter's "batting slump." Student participation in such simulations is viewed as a useful method of giving pupils a better understanding of the probability concepts involved. (MP)
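A slump simulation of the kind described takes only a few lines; the sketch below (a Bernoulli model with an assumed .300 batting average, purely illustrative) counts the longest hitless streak over a season's worth of at-bats.

```python
import random

def longest_hitless_streak(batting_avg=0.300, at_bats=500, seed=1):
    """Model each at-bat as an independent Bernoulli trial and return
    the longest run of consecutive outs (a 'batting slump')."""
    rng = random.Random(seed)
    longest = current = 0
    for _ in range(at_bats):
        if rng.random() < batting_avg:  # a hit resets the streak
            current = 0
        else:
            current += 1
            longest = max(longest, current)
    return longest

streak = longest_hitless_streak()
```

Running the simulation many times with different seeds shows students that long hitless streaks arise routinely from pure chance, with no change in the hitter's underlying ability.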
Electrostatic correlations in inhomogeneous charged fluids beyond loop expansion
NASA Astrophysics Data System (ADS)
Buyukdagli, Sahin; Achim, C. V.; Ala-Nissila, T.
2012-09-01
Electrostatic correlation effects in inhomogeneous symmetric electrolytes are investigated within a previously developed electrostatic self-consistent theory [R. R. Netz and H. Orland, Eur. Phys. J. E 11, 301 (2003)], 10.1140/epje/i2002-10159-0. To this aim, we introduce two computational approaches that allow one to solve the self-consistent equations beyond the loop expansion. The first method is based on a perturbative Green's function technique, and the second one is an extension of a previously introduced semiclassical approximation for single dielectric interfaces to the case of slit nanopores. Both approaches can handle the case of dielectrically discontinuous boundaries where the one-loop theory is known to fail. By comparing the theoretical results obtained from these schemes with the results of the Monte Carlo simulations that we ran for ions at neutral single dielectric interfaces, we first show that the weak coupling Debye-Huckel theory remains quantitatively accurate up to the bulk ion density ρb ≃ 0.01 M, whereas the self-consistent theory exhibits a good quantitative accuracy up to ρb ≃ 0.2 M, thus improving the accuracy of the Debye-Huckel theory by one order of magnitude in ionic strength. Furthermore, we compare the predictions of the self-consistent theory with previous Monte Carlo simulation data for charged dielectric interfaces and show that the proposed approaches can also accurately handle the correlation effects induced by the surface charge in a parameter regime where the mean-field result significantly deviates from the Monte Carlo data. Then, we derive from the perturbative self-consistent scheme the one-loop theory of asymmetrically partitioned salt systems around a dielectrically homogeneous charged surface.
It is shown that correlation effects originate in these systems from a competition between the salt screening loss at the interface driving the ions to the bulk region, and the interfacial counterion screening excess attracting them towards the surface. This competition can be quantified in terms of the characteristic surface charge σ_s* = √(2ρ_b/(π ℓ_B)), where ℓ_B = 7 Å is the Bjerrum length. In the case of weak surface charges σ_s ≪ σ_s*, where counterions form a diffuse layer, the interfacial salt screening loss is the dominant effect. As a result, correlation effects decrease the mean-field density of both coions and counterions. With an increase of the surface charge towards σ_s*, the surface-attractive counterion screening excess starts to dominate, and correlation effects amplify in this regime the mean-field density of both types of ions. However, in the regime σ_s > σ_s*, the same counterion screening excess also results in a significant decrease of the electrostatic mean-field potential. This in turn reduces the mean-field counterion density far from the charged surface. We also show that for σ_s ≫ σ_s*, electrostatic correlations result in a charge inversion effect. However, the electrostatic coupling regime where this phenomenon takes place should be verified with Monte Carlo simulations, since this parameter regime is located beyond the validity range of the one-loop theory.
A variational Monte Carlo study of different spin configurations of electron-hole bilayer
NASA Astrophysics Data System (ADS)
Sharma, Rajesh O.; Saini, L. K.; Bahuguna, Bhagwati Prasad
2018-05-01
We report quantum Monte Carlo results for the mass-asymmetric electron-hole bilayer (EHBL) system with different spin configurations. In particular, we apply a variational Monte Carlo method to estimate the ground-state energy, condensate fraction and pair-correlation function at fixed density r_s = 5 and interlayer distance d = 1 a.u. We find that the spin configuration of the EHBL system which consists of only up-electrons in one layer and down-holes in the other, i.e., ferromagnetic arrangement within the layers and anti-ferromagnetic across the layers, is more stable than the other spin configurations considered in this study.
MC3: Multi-core Markov-chain Monte Carlo code
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan
2016-10-01
MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
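The posterior-distribution sampling that MC3 provides can be illustrated with a bare-bones random-walk Metropolis sampler (a generic sketch, not MC3's own code or API):

```python
import math
import random

def metropolis(log_post, x0, n_steps=20000, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step) and accept with
    probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        x_prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(x_prop)
        if math.log(1.0 - rng.random()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        chain.append(x)
    return chain

# Sanity check: sample a standard-normal "posterior"
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
```

Tools like MC3 wrap this basic loop with multiple parallel chains, convergence diagnostics such as Gelman-Rubin, and more efficient proposal schemes.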
92 Years of the Ising Model: A High Resolution Monte Carlo Study
NASA Astrophysics Data System (ADS)
Xu, Jiahao; Ferrenberg, Alan M.; Landau, David P.
2018-04-01
Using extensive Monte Carlo simulations that employ the Wolff cluster flipping and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model with lattice sizes ranging from 16^3 to 1024^3. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, we obtained the critical inverse temperature K_c = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86), with precision that improves upon previous Monte Carlo estimates.
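A Wolff cluster update of the kind employed in the study can be sketched on a small 2-D Ising lattice (the paper's simulations are 3-D and vastly larger; this toy version only shows the algorithm):

```python
import math
import random

def wolff_step(spins, size, coupling, rng):
    """Grow a Wolff cluster from a random seed site, adding aligned
    neighbours with probability 1 - exp(-2K), then flip the whole cluster."""
    p_add = 1.0 - math.exp(-2.0 * coupling)
    seed = rng.randrange(size * size)
    s = spins[seed]
    cluster = {seed}
    stack = [seed]
    while stack:
        i = stack.pop()
        x, y = i % size, i // size
        neighbours = ((x + 1) % size + y * size,
                      (x - 1) % size + y * size,
                      x + ((y + 1) % size) * size,
                      x + ((y - 1) % size) * size)
        for j in neighbours:
            if j not in cluster and spins[j] == s and rng.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:
        spins[i] = -s
    return len(cluster)

rng = random.Random(7)
L = 8
spins = [1] * (L * L)               # start fully ordered
sizes = [wolff_step(spins, L, 0.6, rng) for _ in range(20)]
```

Near criticality such cluster moves flip large correlated regions in one step, which is what suppresses the critical slowing down that plagues single-spin-flip Metropolis updates.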
NASA Astrophysics Data System (ADS)
Lai, Siyan; Xu, Ying; Shao, Bo; Guo, Menghan; Lin, Xiaola
2017-04-01
In this paper we study a Monte Carlo method for solving systems of linear algebraic equations (SLAE) based on shared memory. Earlier research demonstrated that GPUs can effectively speed up these computations. Our purpose is to optimize the Monte Carlo simulation specifically for the GPU memory architecture. Random numbers are organized to be stored in shared memory, which accelerates the parallel algorithm, and bank conflicts are avoided by our Collaborative Thread Array (CTA) scheme. Experiments show that the shared-memory strategy speeds up the computations by more than 3X in the best case.
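The underlying Monte Carlo linear solver can be sketched on the CPU (the paper's contribution is the GPU shared-memory organization, which is omitted here). For x = Hx + b with non-negative H and row sums below one, an absorbing random walk whose transition probability equals H[i][j] scores b at every visited state, estimating one component of the Neumann-series solution:

```python
import random

def mc_neumann_component(h, b, start, n_walks=20000, seed=0):
    """Estimate x[start] of x = H x + b via the Neumann series:
    walk i -> j with probability H[i][j], absorb with the leftover
    probability, and sum b over the visited states."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        i, score = start, 0.0
        while True:
            score += b[i]
            r, cum, nxt = rng.random(), 0.0, None
            for j, hij in enumerate(h[i]):
                cum += hij
                if r < cum:
                    nxt = j
                    break
            if nxt is None:  # absorbed
                break
            i = nxt
        total += score
    return total / n_walks

h = [[0.1, 0.2],
     [0.3, 0.1]]
b = [1.0, 2.0]
x0 = mc_neumann_component(h, b, 0)  # exact solution component: 26/15
```

Each walk is independent, which is why the method maps so naturally onto thousands of GPU threads; the shared-memory optimization in the paper concerns where those threads keep their random numbers.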
Monte Carlo Simulation of Nonlinear Radiation Induced Plasmas. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wang, B. S.
1972-01-01
A Monte Carlo simulation model for radiation-induced plasmas with nonlinear properties due to recombination was developed, employing a piecewise linearized predictor-corrector iterative technique. Several important variance reduction techniques were developed and incorporated into the model, including an antithetic variates technique. This approach is especially efficient for plasma systems with inhomogeneous media, multiple dimensions, and irregular boundaries. The Monte Carlo code developed has been applied to the determination of the electron energy distribution function and related parameters for a noble gas plasma created by alpha-particle irradiation. The characteristics of the radiation-induced plasma involved are given.
HepSim: A repository with predictions for high-energy physics experiments
Chekanov, S. V.
2015-02-03
A file repository for calculations of cross sections and kinematic distributions using Monte Carlo generators for high-energy collisions is discussed. The repository is used to facilitate effective preservation and archiving of data from theoretical calculations and for comparisons with experimental data. The HepSim data library is publicly accessible and includes a number of Monte Carlo event samples with Standard Model predictions for current and future experiments. The HepSim project includes a software package to automate the process of downloading and viewing online Monte Carlo event samples. Data streaming over a network for end-user analysis is discussed.
NASA Astrophysics Data System (ADS)
Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca
2014-03-01
The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed.
Catalogue identifier: AERO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License version 3
No. of lines in distributed program, including test data, etc.: 83617
No. of bytes in distributed program, including test data, etc.: 1038160
Distribution format: tar.gz
Programming language: C++
Computer: Tested on several PCs and on Mac
Operating system: Linux, Mac OS X, Windows (native and cygwin)
RAM: Dependent on the input data, but usually between 1 and 10 MB
Classification: 2.5, 21.1
External routines: XrayLib (https://github.com/tschoonj/xraylib/wiki)
Nature of problem: Simulation of a wide range of X-ray imaging and spectroscopy experiments using different types of sources and detectors.
Solution method: XRMC is a versatile program that is useful for the simulation of a wide range of X-ray imaging and spectroscopy experiments. It enables the simulation of monochromatic and polychromatic X-ray sources, with unpolarised or partially/completely polarised radiation. Single-element detectors as well as two-dimensional pixel detectors can be used in the simulations, with several acquisition options. In the current version of the program, the sample is modelled by combining convex three-dimensional objects demarcated by quadric surfaces, such as planes, ellipsoids and cylinders. The Monte Carlo approach makes XRMC able to accurately simulate X-ray photon transport and interactions with matter up to any order of interaction. The differential cross-sections and all other quantities related to the interaction processes (photoelectric absorption, fluorescence emission, elastic and inelastic scattering) are computed using the xraylib software library, which is currently the most complete and up-to-date software library for X-ray parameters. The use of variance reduction techniques makes XRMC able to reduce the simulation time by several orders of magnitude compared to other general-purpose Monte Carlo simulation programs.
Running time: Dependent on the complexity of the simulation. For the examples distributed with the code, it ranges from less than 1 s to a few minutes.
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...
2016-08-24
Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
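The telescoping identity E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}] behind MLMC can be illustrated on a toy quantised integrand (a sketch under assumed level definitions, not the PDE setting of the paper):

```python
import random

def level_approx(level, u):
    """Toy level-l approximation of f(x) = x^2 for x ~ U(0,1): midpoint
    quantisation on 2^level bins, so the bias shrinks as the level grows."""
    n = 2 ** level
    x = (int(u * n) + 0.5) / n
    return x * x

def mlmc(max_level, samples_per_level, seed=0):
    """Telescoping MLMC estimator: each correction E[P_l - P_{l-1}] uses the
    SAME uniform draw at both levels, which keeps the correction variance small
    and lets the sample counts shrink on the finer (costlier) levels."""
    rng = random.Random(seed)
    estimate = 0.0
    for level, n in zip(range(max_level + 1), samples_per_level):
        acc = 0.0
        for _ in range(n):
            u = rng.random()
            fine = level_approx(level, u)
            coarse = level_approx(level - 1, u) if level > 0 else 0.0
            acc += fine - coarse
        estimate += acc / n
    return estimate

est = mlmc(6, [40000, 20000, 10000, 5000, 2500, 1250, 625])  # E[x^2] = 1/3
```

Most of the samples are spent on the cheap coarse level; the few fine-level samples only correct the small level-to-level differences, which is the source of the MLMC cost reduction.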
Exact Dynamics via Poisson Process: a unifying Monte Carlo paradigm
NASA Astrophysics Data System (ADS)
Gubernatis, James
2014-03-01
A common computational task is solving a set of ordinary differential equations (o.d.e.'s). A little-known theorem says that the solution of any set of o.d.e.'s is given exactly by the expectation value, over a set of arbitrary Poisson processes, of a particular function of the elements of the matrix that defines the o.d.e.'s. The theorem thus provides a new starting point to develop real- and imaginary-time continuous-time solvers for quantum Monte Carlo algorithms, and several simple observations enable various quantum Monte Carlo techniques and variance reduction methods to transfer to a new context. I will state the theorem, note a transformation to a very simple computational scheme, and illustrate the use of some techniques from the directed-loop algorithm in the context of the wavefunction Monte Carlo method that is used to solve the Lindblad master equation for the dynamics of open quantum systems. I will end by noting that since the theorem does not depend on the o.d.e.'s coming from quantum mechanics, it also enables the transfer of continuous-time methods from quantum Monte Carlo to the simulation of various classical equations of motion heretofore only solved deterministically.
Validation of the Monte Carlo simulator GATE for indium-111 imaging.
Assié, K; Gardin, I; Véra, P; Buvat, I
2005-07-07
Monte Carlo simulations are useful for optimizing and assessing single photon emission computed tomography (SPECT) protocols, especially when aiming at measuring quantitative parameters from SPECT images. Before Monte Carlo simulated data can be trusted, the simulation model must be validated. The purpose of this work was to validate the use of GATE, a new Monte Carlo simulation platform based on GEANT4, for modelling indium-111 SPECT data, the quantification of which is of foremost importance for dosimetric studies. To that end, acquisitions of (111)In line sources in air and in water and of a cylindrical phantom were performed, together with the corresponding simulations. The simulation model included Monte Carlo modelling of the camera collimator and of a back-compartment accounting for photomultiplier tubes and associated electronics. Energy spectra, spatial resolution, sensitivity values, images and count profiles obtained for experimental and simulated data were compared. An excellent agreement was found between experimental and simulated energy spectra. For source-to-collimator distances varying from 0 to 20 cm, simulated and experimental spatial resolution differed by less than 2% in air, while the simulated sensitivity values were within 4% of the experimental values. The simulation of the cylindrical phantom closely reproduced the experimental data. These results suggest that GATE enables accurate simulation of (111)In SPECT acquisitions.
Harnessing graphical structure in Markov chain Monte Carlo learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stolorz, P.E.; Chew, P.C.
1996-12-31
The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many data mining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focuses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.
NASA Astrophysics Data System (ADS)
Orkoulas, Gerassimos; Panagiotopoulos, Athanassios Z.
1994-07-01
In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are T_c* = 0.053 and ρ_c* = 0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids.
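The biased insertion scheme is specific to the paper, but the baseline grand canonical insertion test it modifies is standard; a sketch (with the thermal de Broglie wavelength set to 1, an assumption for brevity) is:

```python
import math
import random

def gc_insertion_accepted(beta, mu, delta_u, volume, n_particles, rng):
    """Unbiased grand-canonical particle-insertion test:
    acc = min(1, V / (N + 1) * exp(beta * (mu - dU))),
    with the thermal de Broglie wavelength set to 1."""
    acc = min(1.0, volume / (n_particles + 1) * math.exp(beta * (mu - delta_u)))
    return rng.random() < acc

rng = random.Random(0)
# A strongly favourable insertion (large negative energy change) is always accepted ...
always = gc_insertion_accepted(1.0, 0.0, -50.0, 100.0, 10, rng)
# ... while a strongly repulsive one is essentially never accepted.
never = gc_insertion_accepted(1.0, 0.0, +50.0, 100.0, 10, rng)
```

For ionic fluids the difficulty is that favourable insertions require placing a counterion close to an ion of opposite charge, which uniform trial placements almost never do; the paper's bias steers trial positions toward those short interparticle distances and corrects the acceptance rule accordingly.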
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clovas, A.; Zanthos, S.; Antonopoulos-Domis, M.
2000-03-01
The dose rate conversion factors Ḋ_CF (absorbed dose rate in air per unit activity per unit of soil mass, nGy h⁻¹ per Bq kg⁻¹) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: (1) the MCNP code of Los Alamos; (2) the GEANT code of CERN; and (3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by comparing the unscattered flux obtained by the three Monte Carlo codes with an independent straightforward calculation. All codes, and particularly MCNP, accurately calculate the absorbed dose rate in air due to the unscattered radiation. For the total radiation (unscattered plus scattered), the Ḋ_CF values calculated from the three codes are in very good agreement with one another. The comparison between these results and results deduced previously by other authors indicates good agreement (less than 15% difference) for photon energies above 1,500 keV. In contrast, the agreement is poorer (differences of 20-30%) for low-energy photons.
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
Pratx, Guillem; Xing, Lei
2011-01-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
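The map/reduce decomposition described above (Map tasks compute photon histories in parallel, a Reduce task merges the tallies) can be sketched in a few lines of plain Python. The photon "physics" here (a constant per-step weight decay and escape probability) is an invented stand-in for MC321, and `map_task`/`reduce_task` are hypothetical names, not Hadoop API calls:

```python
import random
from functools import reduce

random.seed(0)

def map_task(n_photons, decay_step=0.9, p_escape=0.05):
    """Map task: simulate n_photons toy random-walk histories and emit a
    partial tally (absorbed, simulated). A photon whose weight decays
    below 0.1 before escaping counts as absorbed."""
    absorbed = 0
    for _ in range(n_photons):
        weight = 1.0
        while weight > 0.1:
            weight *= decay_step
            if random.random() < p_escape:   # photon leaves the medium
                break
        else:
            absorbed += 1
    return (absorbed, n_photons)

def reduce_task(a, b):
    """Reduce task: merge the partial tallies of two map outputs."""
    return (a[0] + b[0], a[1] + b[1])

# four "nodes" with 1000 histories each; Hadoop would run the maps in parallel
partials = [map_task(1000) for _ in range(4)]
absorbed, total = reduce(reduce_task, partials)
absorbed_fraction = absorbed / total
```

Because photon histories are independent and the tally merge is associative, the reduce step can be applied in any order over any subset of map outputs, which is what makes the scheme fault tolerant.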
Determination of Rolling-Element Fatigue Life From Computer Generated Bearing Tests
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Two types of rolling-element bearings, representing radially loaded and thrust-loaded bearings, were used for this study. Three hundred forty (340) virtual bearing sets totaling 31,400 bearings were randomly assembled and tested by Monte Carlo (random) number generation. The Monte Carlo results were compared with endurance data from 51 bearing sets comprising 5,321 bearings. A simple algebraic relation was established for the upper and lower L(sub 10) life limits as a function of the number of bearings failed for any bearing geometry. There is a fifty percent (50 percent) probability that the resultant bearing life will be less than that calculated. The maximum and minimum variation between the bearing resultant life and the calculated life correlate with the 90-percent confidence limits for a Weibull slope of 1.5. The calculated lives for bearings using a load-life exponent p of 4 for ball bearings and 5 for roller bearings correlated with the Monte Carlo generated bearing lives and the bearing data. STLE life factors for bearing steel and processing provide a reasonable accounting for differences between bearing life data and calculated life. Variations in Weibull slope from the Monte Carlo testing and the bearing data correlated. There was excellent agreement between the percentage of individual components failed in the Monte Carlo simulation and that predicted.
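A minimal sketch of the virtual-endurance-test idea: sample bearing lives from a two-parameter Weibull distribution (slope 1.5, as above) and read off the set-to-set scatter in the apparent L10 life. The characteristic life and set sizes are illustrative assumptions, not the paper's:

```python
import math
import random

random.seed(1)

def weibull_life(eta=100.0, beta=1.5):
    """Inverse-CDF sample of a two-parameter Weibull life. The Weibull
    slope beta = 1.5 matches the study; eta (characteristic life, in
    arbitrary hours) is invented for illustration."""
    u = random.random()
    return eta * (-math.log(1.0 - u)) ** (1.0 / beta)

def l10(lives):
    """L10 life: the life exceeded by 90% of the bearings (10th percentile)."""
    s = sorted(lives)
    return s[int(0.10 * len(s))]

# 30 virtual endurance sets of 100 bearings each, assembled at random
l10s = [l10([weibull_life() for _ in range(100)]) for _ in range(30)]
spread = max(l10s) / min(l10s)   # set-to-set scatter in apparent L10 life
```

Even with identical underlying life distributions, the sampled L10 values scatter noticeably from set to set, which is the effect the upper/lower L10 limits in the study quantify.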
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis
NASA Technical Reports Server (NTRS)
Hanson, J. M.; Beard, B. B.
2010-01-01
This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
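One of the questions raised above, how many runs are necessary for verification of requirements, has a standard exact-binomial answer that can be sketched as follows; the 0.997 success probability and 95% confidence used below are illustrative assumptions, not the TP's numbers:

```python
import math

def runs_required(p_success=0.997, confidence=0.95, failures_allowed=0):
    """Smallest n such that observing at most failures_allowed failures in
    n independent Monte Carlo runs demonstrates a success probability of
    at least p_success at the given confidence (exact binomial tail)."""
    alpha = 1.0 - confidence
    n = failures_allowed + 1
    while True:
        # P(<= failures_allowed failures) if the true success probability
        # were exactly p_success; small tail => requirement demonstrated
        tail = sum(math.comb(n, i) * (1.0 - p_success) ** i
                   * p_success ** (n - i)
                   for i in range(failures_allowed + 1))
        if tail <= alpha:
            return n
        n += 1

n0 = runs_required()                      # zero failures allowed
n1 = runs_required(failures_allowed=1)    # one failure tolerated
```

Tolerating even a single failed run pushes the required run count up substantially, which is why postprocessing and analysis of failed runs matters.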
Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo
NASA Astrophysics Data System (ADS)
Schön, Thomas B.; Svensson, Andreas; Murray, Lawrence; Lindsten, Fredrik
2018-05-01
Probabilistic modeling provides the capability to represent and manipulate uncertainty in data, models, predictions and decisions. We are concerned with the problem of learning probabilistic models of dynamical systems from measured data. Specifically, we consider learning of probabilistic nonlinear state-space models. There is no closed-form solution available for this problem, implying that we are forced to use approximations. In this tutorial we will provide a self-contained introduction to one of the state-of-the-art methods-the particle Metropolis-Hastings algorithm-which has proven to offer a practical approximation. This is a Monte Carlo based method, where the particle filter is used to guide a Markov chain Monte Carlo method through the parameter space. One of the key merits of the particle Metropolis-Hastings algorithm is that it is guaranteed to converge to the "true solution" under mild assumptions, despite being based on a particle filter with only a finite number of particles. We will also provide a motivating numerical example illustrating the method using a modeling language tailored for sequential Monte Carlo methods. The intention of modeling languages of this kind is to open up the power of sophisticated Monte Carlo methods-including particle Metropolis-Hastings-to a large group of users without requiring them to know all the underlying mathematical details.
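The core ingredient of particle Metropolis-Hastings, a particle-filter estimate of the likelihood that the outer Metropolis-Hastings chain accepts or rejects, can be sketched for a toy linear-Gaussian state-space model. All parameters and observations below are invented for illustration:

```python
import math
import random

random.seed(2)

def pf_loglik(ys, theta, n_particles=500):
    """Bootstrap particle filter estimate of log p(y_1:T | theta) for the
    toy model x_t = theta*x_{t-1} + v_t, y_t = x_t + e_t, v,e ~ N(0,1).
    Particle Metropolis-Hastings plugs this (unbiased) likelihood
    estimate into an ordinary Metropolis-Hastings accept/reject step."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    loglik = 0.0
    for y in ys:
        # propagate each particle through the state dynamics
        particles = [theta * x + random.gauss(0.0, 1.0) for x in particles]
        # weight by the observation density N(y; x, 1)
        w = [math.exp(-0.5 * (y - x) ** 2) for x in particles]
        loglik += math.log(sum(w) / n_particles / math.sqrt(2.0 * math.pi))
        # multinomial resampling
        particles = random.choices(particles, weights=w, k=n_particles)
    return loglik

ys = [0.5, 1.0, 0.3, -0.2, 0.8]   # invented observations
ll = pf_loglik(ys, theta=0.7)
```

The convergence guarantee mentioned in the abstract rests on this estimate being unbiased for any finite number of particles, so the outer chain still targets the exact posterior.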
Derenzo, Stephen E
2017-01-01
This paper demonstrates through Monte Carlo simulations that a practical positron emission tomograph with (1) deep scintillators for efficient detection, (2) double-ended readout for depth-of-interaction information, (3) fixed-level analog triggering, and (4) accurate calibration and timing data corrections can achieve a coincidence resolving time (CRT) that is not far above the statistical lower bound. One Monte Carlo algorithm simulates a calibration procedure that uses data from a positron point source. Annihilation events with an interaction near the entrance surface of one scintillator are selected, and data from the two photodetectors on the other scintillator provide depth-dependent timing corrections. Another Monte Carlo algorithm simulates normal operation using these corrections and determines the CRT. A third Monte Carlo algorithm determines the CRT statistical lower bound by generating a series of random interaction depths, and for each interaction a set of random photoelectron times for each of the two photodetectors. The most likely interaction times are determined by shifting the depth-dependent probability density function to maximize the joint likelihood for all the photoelectron times in each set. Example calculations are tabulated for different numbers of photoelectrons and photodetector time jitters for three 3 × 3 × 30 mm³ scintillators: Lu₂SiO₅:Ce,Ca (LSO), LaBr₃:Ce, and a hypothetical ultra-fast scintillator. To isolate the factors that depend on the scintillator length and the ability to estimate the DOI, CRT values are tabulated for perfect scintillator-photodetectors. For LSO with 4000 photoelectrons and single photoelectron time jitter of the photodetector J = 0.2 ns (FWHM), the CRT value using the statistically weighted average of corrected trigger times is 0.098 ns FWHM and the statistical lower bound is 0.091 ns FWHM. For LaBr₃:Ce with 8000 photoelectrons and J = 0.2 ns FWHM, the CRT values are 0.070 and 0.063 ns FWHM, respectively. 
For the ultra-fast scintillator with 1 ns decay time, 4000 photoelectrons, and J = 0.2 ns FWHM, the CRT values are 0.021 and 0.017 ns FWHM, respectively. The examples also show that calibration and correction for depth-dependent variations in pulse height and in annihilation and optical photon transit times are necessary to achieve these CRT values. PMID:28327464
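As a rough illustration of how such CRT Monte Carlo estimates are built, the sketch below uses a simple first-photoelectron leading-edge trigger rather than the paper's depth-corrected weighted-average estimator, and the photoelectron count, decay time, and jitter are illustrative only:

```python
import math
import random
import statistics

random.seed(8)

def trigger_time(n_pe, decay_ns, jitter_fwhm_ns):
    """First-photoelectron trigger time for one detector: each
    photoelectron time is a scintillation-decay exponential plus Gaussian
    single-photoelectron jitter; the leading-edge trigger fires on the
    earliest photoelectron."""
    sigma = jitter_fwhm_ns / 2.355
    return min(random.expovariate(1.0 / decay_ns) + random.gauss(0.0, sigma)
               for _ in range(n_pe))

# coincidence time differences between the two opposing detectors
diffs = [trigger_time(1000, 40.0, 0.2) - trigger_time(1000, 40.0, 0.2)
         for _ in range(500)]
crt_fwhm_ns = 2.355 * statistics.stdev(diffs)  # Gaussian-equivalent FWHM
```

A first-photoelectron trigger discards most of the timing information in the pulse, which is why the paper's weighted-average-of-corrected-trigger-times estimator approaches the statistical lower bound much more closely.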
Umari, P; Marzari, Nicola
2009-09-07
We calculate the linear and nonlinear susceptibilities of periodic longitudinal chains of hydrogen dimers with different bond-length alternations using a diffusion quantum Monte Carlo approach. These quantities are derived from the changes in electronic polarization as a function of applied finite electric field, an approach we recently introduced and made possible by the use of a Berry-phase, many-body electric-enthalpy functional. Calculated susceptibilities and hypersusceptibilities are found to be in excellent agreement with the best estimates available from quantum chemistry, usually extrapolations to the infinite-chain limit of calculations for chains of finite length. It is found that while exchange effects dominate the proper description of the susceptibilities, second hypersusceptibilities are greatly affected by electronic correlations. We also assess how different approximations to the nodal surface of the many-body wave function affect the accuracy of the calculated susceptibilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallory, Joel D.; Mandelshtam, Vladimir A.
2015-10-14
The diffusion Monte Carlo (DMC) method is applied to compute the ground state energies of the water monomer and dimer and their D₂O isotopomers using MB-pol, the most recent and most accurate ab initio-based potential energy surface (PES). MB-pol has already demonstrated excellent agreement with high-level electronic structure data, as well as with some experimental spectroscopic and thermodynamic data. Here, the DMC binding energies of (H₂O)₂ and (D₂O)₂ agree with the corresponding values obtained from velocity map imaging to within 0.01 and 0.02 kcal/mol, respectively. This work adds two more valuable data points that highlight the accuracy of the MB-pol PES.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Tianxing; Lin, Hai-Qing; Gubernatis, James E.
2015-09-01
By using the constrained-phase quantum Monte Carlo method, we performed a systematic study of the pairing correlations in the ground state of the doped Kane-Mele-Hubbard model on a honeycomb lattice. We find that pairing correlations with d + id symmetry dominate close to half filling, but pairing correlations with p + ip symmetry dominate as hole doping moves the system below three-quarters filling. We correlate these behaviors of the pairing correlations with the topology of the Fermi surfaces of the non-interacting problem. We also find that the effective pairing correlation is greatly enhanced as the interaction increases, and that these superconducting correlations are robust against varying the spin-orbit coupling strength. Finally, our numerical results suggest a possible way to realize spin-triplet superconductivity in doped honeycomb-like materials or ultracold atoms in optical traps.
PENTrack - a versatile Monte Carlo tool for ultracold neutron sources and experiments
NASA Astrophysics Data System (ADS)
Picker, Ruediger; Chahal, Sanmeet; Christopher, Nicolas; Losekamm, Martin; Marcellin, James; Paul, Stephan; Schreyer, Wolfgang; Yapa, Pramodh
2016-09-01
Ultracold neutrons have energies in the hundred-nanoelectronvolt (neV) range and can be stored in traps for hundreds of seconds. This makes them the ideal tool to study the neutron itself. Measurements of neutron decay correlations, the neutron lifetime, or the electric dipole moment are ideally suited for ultracold neutrons, as are experiments probing the neutron's gravitational levels in the Earth's field. We have developed a Monte Carlo simulation tool that can be used to design and optimize these experiments, and possibly to correct their results: PENTrack is a C++ based simulation code that tracks neutrons, protons, electrons, and atoms, as well as their spins, in gravitational and electromagnetic fields. In addition, wall interactions of neutrons due to the strong interaction are modeled with a Fermi-potential formalism that takes surface roughness into account. The presentation will introduce the physics behind the simulation and provide examples of its application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelß, Patrick, E-mail: p.gelss@fu-berlin.de; Matera, Sebastian, E-mail: matera@math.fu-berlin.de; Schütte, Christof, E-mail: schuette@mi.fu-berlin.de
2016-06-01
In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this equation is too high-dimensional to be solved with standard numerical techniques, and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study, we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the tensor train format for this purpose. The performance of the approach is demonstrated on a first-principles-based, reduced model for CO oxidation on the RuO₂(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.
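The kinetic Monte Carlo sampling that the tensor-train solution is compared against is, at its core, Gillespie's stochastic simulation algorithm. A toy single-site sketch follows; the states, events, and rate constants are invented and are not the paper's RuO₂ reaction network:

```python
import math
import random

random.seed(3)

def gillespie(rates, t_end=50.0):
    """Gillespie-style kinetic Monte Carlo for a toy one-site master
    equation with states 'empty'/'CO'/'O'. Returns time-averaged
    occupancies over [0, t_end]."""
    state, t = "empty", 0.0
    occupancy = {"empty": 0.0, "CO": 0.0, "O": 0.0}
    while t < t_end:
        events = {"empty": [("CO", rates["ads_CO"]), ("O", rates["ads_O"])],
                  "CO": [("empty", rates["des_CO"]),
                         ("empty", rates["react"])],
                  "O": [("empty", rates["react"])]}[state]
        k_tot = sum(k for _, k in events)
        dt = -math.log(1.0 - random.random()) / k_tot  # exponential wait
        occupancy[state] += min(dt, t_end - t)         # clip at horizon
        t += dt
        r, acc = random.random() * k_tot, 0.0
        for new_state, k in events:                    # event ~ its rate
            acc += k
            if r <= acc:
                state = new_state
                break
    return {s: v / t_end for s, v in occupancy.items()}

theta = gillespie({"ads_CO": 1.0, "ads_O": 0.5, "des_CO": 0.2, "react": 2.0})
```

For stiff networks (rates spanning many orders of magnitude), this sampling becomes expensive because the time step is set by the fastest event, which is exactly the regime where a direct master-equation solver gains its advantage.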
Wang, Tongyu; Reuter, Karsten
2015-11-24
We present a density-functional theory based kinetic Monte Carlo study of CO oxidation at the (111) facet of RuO₂. We compare the detailed insight into elementary processes, steady-state surface coverages, and catalytic activity to equivalent published simulation data for the frequently studied RuO₂(110) facet. Qualitative differences are identified in virtually every aspect, ranging from binding energetics over lateral interactions to the interplay of elementary processes at the different active sites. Nevertheless, particularly at technologically relevant elevated temperatures, near-ambient pressures, and near-stoichiometric feeds, both facets exhibit almost identical catalytic activity. As a result, these findings challenge the traditional definition of structure sensitivity based on macroscopically observable turnover frequencies and prompt scrutiny of the applicability of structure sensitivity classifications developed for metals to oxide catalysis.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM in combination with Monte Carlo simulation (MCS) reduces the computational cost and makes rapid random sampling possible. The inverse uncertainty propagation is formulated as the equally weighted sum of mean and covariance-matrix objective functions. The means and covariances of the parameters are estimated simultaneously by minimizing the weighted objective function with a hybrid particle-swarm and Nelder-Mead simplex optimization method, achieving better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
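The RSM-plus-MCS step can be sketched as follows: once a cheap polynomial surrogate is fitted, Monte Carlo propagation of input uncertainty reduces to sampling the surrogate. The polynomial coefficients and input distributions below are invented stand-ins for the paper's fourth-order RSM:

```python
import random

random.seed(4)

def response_surface(k1, k2):
    """Hypothetical fitted polynomial surrogate standing in for the
    paper's incomplete fourth-order RSM (e.g. a natural frequency as a
    function of two normalized stiffness parameters)."""
    return 2.0 + 0.8 * k1 + 0.5 * k2 + 0.1 * k1 * k2 - 0.05 * k1 ** 2

# Monte Carlo propagation: sample the uncertain inputs and evaluate the
# cheap surrogate instead of the full structural model
samples = [response_surface(random.gauss(1.0, 0.1), random.gauss(1.0, 0.1))
           for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
```

Each surrogate evaluation costs a handful of multiplications instead of a finite-element solve, which is what makes tens of thousands of samples per optimizer iteration affordable.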
Efficiency of Moderated Neutron Lithium Glass Detectors Using Monte Carlo Techniques
NASA Astrophysics Data System (ADS)
James, Brian
2011-10-01
Due to national security concerns over the smuggling of special nuclear materials and the small supply of He-3 for use in neutron detectors, there is a great need for a new kind of neutron detector. Using Monte Carlo techniques, I have been studying the use of lithium glass in varying configurations for neutron detectors. My research has included the effects of using a detector with two thin sheets of lithium glass at varying separations. I have also studied the effects of shielding a californium source with varying amounts of water; this is important since shielding would likely be used to make nuclear material more difficult to detect. The addition of one sheet of lithium-6 glass on the front surface of the detector significantly improves the efficiency for the detection of neutrons from a moderated fission source.
NASA Astrophysics Data System (ADS)
Motta, Mario; Zhang, Shiwei
2018-05-01
We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.
Wu, Yunzhao; Tang, Zesheng
2014-01-01
In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. Existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from the Interference Imaging Spectrometer (IIM) on an orbiter are affected not only by the composition of minerals but also by environmental factors, which cannot be well addressed by a single model alone. Our method implements Monte Carlo ray tracing to simulate large-scale effects, such as reflection from the topography of the lunar surface, and Hapke's model to calculate the reflection intensity from the internal scattering of particles in the lunar soil. Both the large-scale and microscale effects are therefore considered, providing a more accurate model of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and the Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data to remove the influence of lunar topography on the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface. PMID:24526892
Dynamic response analysis of structure under time-variant interval process model
NASA Astrophysics Data System (ADS)
Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao
2016-10-01
Due to aggressive environmental factors, variation of the dynamic load, degradation of material properties, and wear of machine surfaces, parameters related to a structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model that can effectively handle time-variant uncertainties given limited information. Two methods are then presented for the dynamic response analysis of structures under the time-variant interval process model. The first is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second is the Monte Carlo method based on Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials, which can be evaluated efficiently, and the variational range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To address the dependency phenomenon of interval arithmetic, affine arithmetic is integrated into the Chebyshev polynomial expansion. The effectiveness and efficiency of MCM-CPE are verified by two numerical examples: a spring-mass-damper system and a shell structure.
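A bare-bones sketch of the MCM-CPE idea, a Chebyshev surrogate plus Monte Carlo sampling to bound a response, omitting the affine arithmetic and the structural dynamics; the response function is an invented stand-in:

```python
import math
import random

random.seed(5)

def chebyshev_coeffs(f, n):
    """Chebyshev interpolation coefficients of f on [-1, 1] using the
    Chebyshev-Gauss nodes x_k = cos(pi*(k+1/2)/n)."""
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fv = [f(x) for x in nodes]
    coeffs = [2.0 / n * sum(fv[k] * math.cos(math.pi * j * (k + 0.5) / n)
                            for k in range(n)) for j in range(n)]
    coeffs[0] /= 2.0
    return coeffs

def chebyshev_eval(coeffs, x):
    """Evaluate sum_j c_j * T_j(x) with T_j(x) = cos(j*acos(x))."""
    return sum(c * math.cos(j * math.acos(x)) for j, c in enumerate(coeffs))

# stand-in for an expensive dynamic-response evaluation
response = lambda x: math.exp(x) * math.sin(3.0 * x)
coeffs = chebyshev_coeffs(response, 20)

# Monte Carlo over the normalized interval parameter using the surrogate,
# then take the enclosing bounds of the sampled responses
xs = [random.uniform(-1.0, 1.0) for _ in range(5000)]
ys = [chebyshev_eval(coeffs, x) for x in xs]
lower, upper = min(ys), max(ys)
```

The polynomial is fitted once from a handful of response evaluations; thousands of Monte Carlo samples then touch only the surrogate, which is the source of the method's efficiency.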
Monte Carlo Simulations of Microchannel Plate Based, Fast-Gated X-Ray Imagers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, M.; Kruschwitz, C.
2011-02-01
This is a chapter in the book Applications of Monte Carlo Method in Science and Engineering, edited by Shaul Mordechai (InTech, February 2011; ISBN 978-953-307-691-1; hardcover, 950 pages).
Modeling Leaching of Viruses by the Monte Carlo Method
A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...
Structural Reliability and Monte Carlo Simulation.
ERIC Educational Resources Information Center
Laumakis, P. J.; Harlow, G.
2002-01-01
Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)
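The kind of simulation the article describes can be written in a few lines: draw a random load and a random strength, count failures, and compare the estimated reliability with the closed-form result. The normal distributions and their parameters are invented for illustration:

```python
import random

random.seed(6)

# Boom-member reliability: failure occurs when the applied load exceeds
# the member strength. Load ~ N(50, 10), strength ~ N(90, 10) are
# hypothetical values in consistent force units.
n = 100000
failures = sum(1 for _ in range(n)
               if random.gauss(50.0, 10.0) > random.gauss(90.0, 10.0))
p_fail = failures / n
reliability = 1.0 - p_fail
# closed-form check: P(load > strength) = Phi(-40 / sqrt(200)) ~ 0.0023
```

The Monte Carlo estimate converges to the same failure probability the limit-state calculation gives analytically, which is the comparison the article uses to motivate simulation.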
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidenko, V. D., E-mail: Davidenko-VD@nrcki.ru; Zinchenko, A. S., E-mail: zin-sn@mail.ru; Harchenko, I. K.
2016-12-15
Integral equations for the shape functions in the adiabatic, quasi-static, and improved quasi-static approximations are presented. The approach to solving these equations by the Monte Carlo method is described.
Monte Carlo calculation of "skyshine" neutron dose from the ALS (Advanced Light Source)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moin-Vasiri, M.
1990-06-01
This report discusses the following topics on "skyshine" neutron dose from the ALS: sources of radiation; ALS modeling for skyshine calculations; the MORSE Monte Carlo code; implementation of MORSE; results of skyshine calculations from the storage ring; and comparison of MORSE shielding calculations.
NASA Astrophysics Data System (ADS)
Samin, Adib J.; Zhang, Jinsuo
2017-05-01
An accurate characterization of lanthanide adsorption and mobility on tungsten surfaces is important for pyroprocessing. In the present study, the adsorption and diffusion of gadolinium on the (100) surface of tungsten were investigated. The hollow sites were found to be the most energetically favorable for adsorption. It was further observed that a magnetic moment was induced following the adsorption of gadolinium on the tungsten surface, and that the system with gadolinium adsorbed at the hollow sites had the largest magnetization. Surface diffusion of gadolinium was determined to occur by hopping between nearest-neighbor hollow sites via the bridge site, with a calculated activation energy of 0.75 eV for the hop. The surface diffusion process was further assessed using two distinct kinetic Monte Carlo models: one that accounted for lateral adsorbate interactions up to the second nearest neighbor, and one that did not account for such interatomic interactions in the adlayer. When the lateral interactions were included in the simulations, the diffusivity showed a strong dependence on coverage over the range of coverages studied. The effects of lateral interactions were further observed in a one-dimensional simulation of the diffusion equation, where the asymmetry in the surface coverage profile as it approached a steady-state distribution was clear in comparison with simulations that did not account for those interactions.
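Ignoring lateral interactions, the hop mechanism described above reduces to a single-adatom kinetic Monte Carlo walk whose diffusivity follows the 2D Einstein relation D = ⟨r²⟩/4t. A sketch with an illustrative lattice constant and attempt rate (not the DFT-derived values):

```python
import math
import random

random.seed(7)

def hop_walk(n_steps, a=1.0, nu=1.0):
    """Single-adatom KMC walk on a square lattice of hollow sites: each of
    the 4 nearest-neighbor hops (via a bridge site) has rate nu, so the
    residence time is exponential with mean 1/(4*nu). Lateral adsorbate
    interactions are deliberately ignored in this sketch."""
    x = y = t = 0.0
    for _ in range(n_steps):
        t += -math.log(1.0 - random.random()) / (4.0 * nu)
        dx, dy = random.choice([(a, 0.0), (-a, 0.0), (0.0, a), (0.0, -a)])
        x += dx
        y += dy
    return x * x + y * y, t

# 2D Einstein relation: D = <r^2> / (4 t); for this model D = nu * a^2 = 1
trials = [hop_walk(2000) for _ in range(300)]
msd = sum(r2 for r2, _ in trials) / len(trials)
t_avg = sum(t for _, t in trials) / len(trials)
D = msd / (4.0 * t_avg)
```

With lateral interactions switched on, each hop rate would instead depend on the local occupation of neighboring sites, which is what produces the coverage dependence of the diffusivity reported above.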
Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.
1981-01-01
A molecular dynamics technique based upon Lennard-Jones type pair interactions is used to investigate time-dependent as well as equilibrium properties. The case study deals with systems containing Si and O atoms. In this case a more involved potential energy function (PEF) is employed and the system is simulated via a Monte-Carlo procedure. This furnishes the equilibrium properties of the system at its interfaces and surfaces as well as in the bulk.
NASA Astrophysics Data System (ADS)
Gardner, Robin P.; Xu, Libai
2009-10-01
The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
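The linear library least-squares (LLS) step at the heart of MCLLS amounts to solving the normal equations for the elemental amounts that best reproduce an unknown spectrum as a combination of library spectra. A small self-contained sketch with invented 5-channel libraries (real libraries would be Monte Carlo generated, with many channels):

```python
# Hypothetical 5-channel library spectra for two "elements" plus background
libs = [[1.0, 4.0, 2.0, 0.5, 0.1],   # element A
        [0.2, 1.0, 3.0, 2.0, 0.5],   # element B
        [1.0, 1.0, 1.0, 1.0, 1.0]]   # flat background
true_amounts = [2.0, 1.5, 0.3]

# "unknown" spectrum built as an exact mixture so recovery can be checked
unknown = [sum(a * lib[ch] for a, lib in zip(true_amounts, libs))
           for ch in range(5)]

# linear library least squares: solve the normal equations (L L^T) a = L y
m = len(libs)
A = [[sum(libs[i][c] * libs[j][c] for c in range(5)) for j in range(m)]
     for i in range(m)]
b = [sum(libs[i][c] * unknown[c] for c in range(5)) for i in range(m)]
for i in range(m):                         # naive Gaussian elimination
    for j in range(i + 1, m):
        f = A[j][i] / A[i][i]
        for k in range(m):
            A[j][k] -= f * A[i][k]
        b[j] -= f * b[i]
amounts = [0.0] * m
for i in range(m - 1, -1, -1):             # back substitution
    amounts[i] = (b[i] - sum(A[i][j] * amounts[j]
                             for j in range(i + 1, m))) / A[i][i]
```

In the full MCLLS loop, the recovered amounts would be fed back to regenerate the libraries and the fit repeated until the two agree, which is how the non-linearity is handled.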
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent.
Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
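The TEW scatter correction mentioned in (2) estimates the scatter under the photopeak window from two narrow flanking windows via a trapezoidal approximation. A minimal sketch with hypothetical counts and window widths:

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Trapezoidal estimate of scatter counts under the photopeak window."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# Hypothetical window counts and widths (keV) around an I-131 photopeak.
c_peak = 10000.0
scatter = tew_scatter(c_lower=800.0, c_upper=400.0,
                      w_lower=8.0, w_upper=8.0, w_peak=60.0)
primary = c_peak - scatter
print(scatter, primary)  # 4500.0 5500.0
```

The paper's point is that this simple estimate needs an extra (possibly patient-dependent) weighting factor before it matches the Monte Carlo scatter model.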
NASA Astrophysics Data System (ADS)
Usta, Metin; Tufan, Mustafa Çağatay; Aydın, Güral; Bozkurt, Ahmet
2018-07-01
In this study, we performed calculations of stopping power, depth dose, and range verification for proton beams using the dielectric and Bethe-Bloch theories and the FLUKA, Geant4 and MCNPX Monte Carlo codes. In the analytical studies, the Drude model was applied in the dielectric theory, and the effective charge approach with Roothaan-Hartree-Fock charge densities was used in the Bethe theory. In the simulations, different setup parameters were selected to evaluate the performance of the three distinct Monte Carlo codes. Lung and breast tissues were investigated, as they are associated with the most common types of cancer throughout the world. The results were compared with each other and with the available data in the literature. In addition, the obtained results were verified with prompt gamma range data. In both stopping power values and depth-dose distributions, the Monte Carlo values were found to give better results than the analytical ones, while the results that agree best with ICRU data in terms of stopping power are those of the effective charge approach among the analytical methods and of the FLUKA code among the MC packages. In the depth-dose distributions of the examined tissues, although the Bragg curves for Monte Carlo almost overlap, the analytical ones show significant deviations that become more pronounced with increasing energy. Verification against the results of prompt gamma photons was attempted for 100-200 MeV protons, which are regarded as important for proton therapy. The analytical results are within 2%-5% and the Monte Carlo values within 0%-2% of those of the prompt gammas.
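As a rough illustration of the analytical side, the uncorrected Bethe formula (no shell, Barkas or density-effect corrections, which the paper's effective-charge treatment would refine) can be evaluated directly; for a 100 MeV proton in water it lands close to the ICRU 49 tabulated value of about 7.29 MeV cm²/g:

```python
import math

K = 0.307075         # 4*pi*N_A*r_e^2*m_e*c^2, MeV cm^2 / mol
ME_C2 = 0.510999     # electron rest energy, MeV

def bethe_mass_stopping_power(T, Z_over_A, I_eV, z=1, M=938.272):
    """Uncorrected Bethe mass stopping power (MeV cm^2/g) for a heavy
    charged particle of kinetic energy T (MeV) and charge number z."""
    gamma = 1.0 + T / M
    beta2 = 1.0 - 1.0 / gamma**2
    I = I_eV * 1e-6                         # mean excitation energy, MeV
    tmax = 2.0 * ME_C2 * beta2 * gamma**2   # max energy transfer, heavy-projectile limit
    log_arg = 2.0 * ME_C2 * beta2 * gamma**2 * tmax / I**2
    return K * z**2 * Z_over_A / beta2 * (0.5 * math.log(log_arg) - beta2)

# 100 MeV proton in water: Z/A = 0.5551, I = 75 eV.
s = bethe_mass_stopping_power(100.0, 0.5551, 75.0)
print(round(s, 2))  # 7.29, close to the ICRU 49 tabulation
```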
Finite element model updating using the shadow hybrid Monte Carlo technique
NASA Astrophysics Data System (ADS)
Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.
2015-02-01
Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques to deal with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). Hybrid Monte Carlo (HMC) offers an important MCMC approach to dealing with higher-dimensional complex problems. HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm, a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
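The HMC move that SHMC modifies alternates leapfrog molecular-dynamics steps with a Metropolis accept/reject on the Hamiltonian. A toy sketch on a one-dimensional standard-normal "posterior" (illustrative only; SHMC would instead accept/reject against a shadow Hamiltonian):

```python
import numpy as np

rng = np.random.default_rng(0)

def u(q):        # negative log-density of a standard-normal toy posterior
    return 0.5 * q * q

def grad_u(q):
    return q

def hmc_step(q, eps=0.2, n_leap=10):
    p = rng.standard_normal()                # resample momentum
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_u(q_new)       # leapfrog integration
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new -= eps * grad_u(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_u(q_new)
    # Metropolis accept/reject on the change in the Hamiltonian.
    dh = (u(q_new) + 0.5 * p_new**2) - (u(q) + 0.5 * p**2)
    return q_new if np.log(rng.random()) < -dh else q

q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print(np.mean(samples), np.var(samples))  # mean ~ 0, variance ~ 1
```

The large-time-step regime where the leapfrog error makes `dh` grow, and acceptance collapse, is exactly what SHMC targets.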
Souris, Kevin; Lee, John Aldo; Sterpin, Edmond
2016-04-01
Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with gate/geant4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
Dosimetric verification of IMRT treatment planning using Monte Carlo simulations for prostate cancer
NASA Astrophysics Data System (ADS)
Yang, J.; Li, J.; Chen, L.; Price, R.; McNeeley, S.; Qin, L.; Wang, L.; Xiong, W.; Ma, C.-M.
2005-03-01
The purpose of this work is to investigate the accuracy of dose calculation of a commercial treatment planning system (Corvus, NOMOS Corp., Sewickley, PA). In this study, 30 prostate intensity-modulated radiotherapy (IMRT) treatment plans from the commercial treatment planning system were recalculated using the Monte Carlo method. Dose-volume histograms and isodose distributions were compared. Other quantities such as the minimum dose to the target (Dmin), the dose received by 98% of the target volume (D98), the dose at the isocentre (Diso), the mean target dose (Dmean) and the maximum critical structure dose (Dmax) were also evaluated based on our clinical criteria. For coplanar plans, the dose differences between Monte Carlo and the commercial treatment planning system with and without heterogeneity correction were not significant. The differences in the isocentre dose between the commercial treatment planning system and Monte Carlo simulations were less than 3% for all coplanar cases. The differences in D98 were less than 2% on average. The differences in the mean dose to the target between the commercial system and Monte Carlo results were within 3%. The differences in the maximum bladder dose were within 3% for most cases. The maximum dose differences for the rectum were less than 4% for all the cases. For non-coplanar plans, the difference in the minimum target dose between the treatment planning system and Monte Carlo calculations was up to 9% if the heterogeneity correction was not applied in Corvus. This was caused by the excessive attenuation of the non-coplanar beams by the femurs. When the heterogeneity correction was applied in Corvus, the differences were reduced significantly. These results suggest that heterogeneity correction should be used in dose calculation for prostate cancer with non-coplanar beam arrangements.
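Dose metrics such as D98, Dmean and Dmax are simple order statistics of the per-voxel dose distribution. A minimal sketch on a hypothetical target dose array (the study, of course, used real plan doses):

```python
import numpy as np

def dvh_metrics(dose):
    """D98 (minimum dose to the hottest 98% of voxels), mean and max."""
    d = np.sort(dose)[::-1]                  # descending
    d98 = d[int(round(0.98 * d.size)) - 1]   # dose covering 98% of volume
    return float(d98), float(d.mean()), float(d.max())

# Hypothetical 1000-voxel target: 76 Gy prescription with a 3% cold spot.
dose = np.full(1000, 76.0)
dose[:30] = 70.0
d98, dmean, dmax = dvh_metrics(dose)
print(d98, dmean, dmax)  # 70.0 75.82 76.0
```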
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T; Lin, H; Xu, X
Purpose: To develop a nuclear medicine dosimetry module for the GPU-based Monte Carlo code ARCHER. Methods: We have developed a nuclear medicine dosimetry module for the fast Monte Carlo code ARCHER. The coupled electron-photon Monte Carlo transport kernel included in ARCHER is built upon the Dose Planning Method code (DPM). The developed module manages the radioactive decay simulation by consecutively tracking several types of radiation on a per-disintegration basis using the statistical sampling method. Optimization techniques such as persistent threads and prefetching are studied and implemented. The developed module is verified against the VIDA code, which is based on the Geant4 toolkit and has previously been verified against OLINDA/EXM. A voxelized geometry is used in the preliminary test: a sphere made of ICRP soft tissue is surrounded by a box filled with water. A uniform activity distribution of I-131 is assumed in the sphere. Results: The self-absorption dose factors (mGy/MBqs) of the sphere with varying diameters are calculated by ARCHER and VIDA respectively. ARCHER's results are in agreement with VIDA's, which were obtained from a previous publication. VIDA takes hours of CPU time to finish the computation, while ARCHER takes 4.31 seconds for the 12.4-cm uniform activity sphere case. For a fairer CPU-GPU comparison, more effort will be made to eliminate the algorithmic differences. Conclusion: The coupled electron-photon Monte Carlo code ARCHER has been extended to radioactive decay simulation for nuclear medicine dosimetry. The developed code exhibits good performance in our preliminary test. The GPU-based Monte Carlo code is developed with grant support from the National Institute of Biomedical Imaging and Bioengineering through an R01 grant (R01EB015478).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy-dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems, from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calderon, E; Siergiej, D
2014-06-01
Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes are decreased. No set standard to resolve this difference in measurement exists. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source to surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3% to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small-field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the deviation between the two detectors from 14.8% to 3.4%.
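The daisy-chaining and correction arithmetic can be sketched as follows; all readings and the correction factor below are illustrative, not the study's measured values:

```python
def daisy_chained_of(m_cone, m_int_small, m_int_ref, m_ref):
    """Chain the small-field detector through an intermediate field so
    each detector spans only field sizes where it behaves well."""
    return (m_cone / m_int_small) * (m_int_ref / m_ref)

# Illustrative readings: cone and intermediate field with the small-field
# detector, intermediate and 10.4 cm x 10.4 cm reference with the chamber.
of_raw = daisy_chained_of(m_cone=0.52, m_int_small=0.80,
                          m_int_ref=0.79, m_ref=1.00)
k_mc = 0.95   # published MC correction factor for this cone (illustrative)
of_corrected = of_raw * k_mc
print(round(of_raw, 4), round(of_corrected, 4))  # 0.5135 0.4878
```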
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, J; Culberson, W; DeWerd, L
Purpose: To test the validity of a windowless extrapolation chamber used to measure surface dose rate from planar ophthalmic applicators and to compare different Monte Carlo based codes for deriving correction factors. Methods: Dose rate measurements were performed using a windowless, planar extrapolation chamber with a {sup 90}Sr/{sup 90}Y Tracerlab RA-1 ophthalmic applicator previously calibrated at the National Institute of Standards and Technology (NIST). Capacitance measurements were performed to estimate the initial air gap width between the source face and collecting electrode. Current was measured as a function of air gap, and Bragg-Gray cavity theory was used to calculate the absorbed dose rate to water. To determine correction factors for backscatter, divergence, and attenuation from the Mylar entrance window found in the NIST extrapolation chamber, both the EGSnrc Monte Carlo user code and the Monte Carlo N-Particle Transport Code (MCNP) were utilized. Simulation results were compared with experimental current readings from the windowless extrapolation chamber as a function of air gap. Additionally, measured dose rate values were compared with the expected result from the NIST source calibration to test the validity of the windowless chamber design. Results: Better agreement was seen between EGSnrc simulated dose results and experimental current readings at very small air gaps (<100 µm) for the windowless extrapolation chamber, while MCNP results demonstrated divergence at these small gap widths. Three separate dose rate measurements were performed with the RA-1 applicator. The average observed difference from the expected result based on the NIST calibration was -1.88% with a statistical standard deviation of 0.39% (k=1). Conclusion: The EGSnrc user code will be used during future work to derive correction factors for extrapolation chamber measurements. Additionally, experimental results suggest that an entrance window is not needed for an extrapolation chamber to provide accurate dose rate measurements for a planar ophthalmic applicator.
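In an extrapolation chamber, Bragg-Gray cavity theory converts the slope of ionization current versus air-gap width into absorbed dose rate to water, D_w = (W/e) * s_w,air * (dI/dl) / (rho_air * A). A sketch with fabricated current readings and an illustrative electrode area:

```python
import numpy as np

W_OVER_E = 33.97      # J/C, mean energy per ion pair in dry air
S_WATER_AIR = 1.13    # water/air stopping-power ratio (illustrative value)
RHO_AIR = 1.196       # kg/m^3
AREA = 0.2e-4         # collecting-electrode area, m^2 (illustrative)

# Fabricated current readings (A) versus air-gap width (m); a real
# measurement would extrapolate the slope to zero gap width.
gaps = np.array([0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3])
currents = np.array([1.0e-11, 2.0e-11, 3.0e-11, 4.0e-11])

slope, intercept = np.polyfit(gaps, currents, 1)               # dI/dl, A/m
dose_rate = W_OVER_E * S_WATER_AIR * slope / (RHO_AIR * AREA)  # Gy/s
print(dose_rate * 1000)  # ~ 32 mGy/s for these made-up numbers
```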
An image-guided precision proton radiation platform for preclinical in vivo research
NASA Astrophysics Data System (ADS)
Ford, E.; Emery, R.; Huff, D.; Narayanan, M.; Schwartz, J.; Cao, N.; Meyer, J.; Rengan, R.; Zeng, J.; Sandison, G.; Laramore, G.; Mayr, N.
2017-01-01
There are many unknowns in the radiobiology of proton beams and other particle beams. We describe the development and testing of an image-guided low-energy proton system optimized for radiobiological research applications. A 50 MeV proton beam from an existing cyclotron was modified to produce collimated beams (as small as 2 mm in diameter). Ionization chamber and radiochromic film measurements were performed and benchmarked with Monte Carlo simulations (TOPAS). The proton beam was aligned with a commercially-available CT image-guided x-ray irradiator device (SARRP, Xstrahl Inc.). To examine the alternative possibility of adapting a clinical proton therapy system, we performed Monte Carlo simulations of a range-shifted 100 MeV clinical beam. The proton beam exhibits a pristine Bragg peak at a depth of 21 mm in water with a dose rate of 8.4 Gy min⁻¹ (3 mm depth). The energy of the incident beam can be modulated to lower energies while preserving the Bragg peak. The LET was 2.0 keV µm⁻¹ (water surface), 16 keV µm⁻¹ (Bragg peak), and 27 keV µm⁻¹ (10% peak dose). Alignment of the proton beam with the SARRP system isocenter agreed to within 0.24 mm. The width of the beam changes very little with depth. Monte Carlo-based calculations of dose using the CT image data set as input demonstrate in vivo use. Monte Carlo simulations of the modulated 100 MeV clinical proton beam show a significantly reduced Bragg peak. We demonstrate the feasibility of a proton beam integrated with a commercial x-ray image-guidance system for preclinical in vivo studies. To our knowledge this is the first description of an experimental image-guided proton beam for preclinical radiobiology research. It will enable in vivo investigations of radiobiological effects in proton beams.
A Variational Monte Carlo Approach to Atomic Structure
ERIC Educational Resources Information Center
Davis, Stephen L.
2007-01-01
The practicality and usefulness of applying variational Monte Carlo calculations to atomic structure are demonstrated. The method is found to succeed in quantitatively illustrating electron shielding, effective nuclear charge, the l-dependence of the orbital energies, singlet-triplet energy splitting, and ionization energy trends in atomic structure theory.
Does standard Monte Carlo give justice to instantons?
NASA Astrophysics Data System (ADS)
Fucito, F.; Solomon, S.
1984-01-01
The results of the standard local Monte Carlo are changed by offering instantons as candidates in the Metropolis procedure. We also define an O(3) topological charge with no contribution from planar dislocations. The RG behavior is still not recovered.
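The idea of offering large, topologically motivated configurations as Metropolis candidates alongside small local moves can be caricatured in one dimension, with a hop between the wells of a double-well action standing in for an instanton proposal (a toy, not the O(3) lattice model):

```python
import numpy as np

rng = np.random.default_rng(1)

def action(x):
    return (x * x - 1.0) ** 2 / 0.1      # double well, minima at x = +/-1

def metropolis(n_steps, p_instanton=0.1):
    x = 0.9
    wells = set()
    for _ in range(n_steps):
        if rng.random() < p_instanton:
            x_new = -x                                # "instanton" candidate
        else:
            x_new = x + 0.1 * rng.standard_normal()   # local candidate
        if np.log(rng.random()) < action(x) - action(x_new):
            x = x_new
        wells.add(1 if x > 0 else -1)
    return wells

# Local moves alone essentially never cross the barrier at this coupling;
# the instanton candidates restore tunneling between the two sectors.
print(metropolis(2000))   # both wells visited
```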
NASA Astrophysics Data System (ADS)
Cerutti, F.
2017-09-01
The role of Monte Carlo calculations in addressing machine protection and radiation protection challenges in accelerator design and operation is discussed, through an overview of different applications and validation examples, especially referring to recent LHC measurements.
Using Stan for Item Response Theory Models
ERIC Educational Resources Information Center
Ames, Allison J.; Au, Chi Hang
2018-01-01
Stan is a flexible probabilistic programming language providing full Bayesian inference through Hamiltonian Monte Carlo algorithms. The benefits of Hamiltonian Monte Carlo include improved efficiency and faster inference, when compared to other MCMC software implementations. Users can interface with Stan through a variety of computing…
NASA Astrophysics Data System (ADS)
Qian, Lin-Feng; Shi, Guo-Dong; Huang, Yong; Xing, Yu-Ming
2017-10-01
In vector radiative transfer, backward ray tracing is seldom used. We present a backward and forward Monte Carlo method to simulate vector radiative transfer in a two-dimensional graded index medium, which is new and different from the conventional Monte Carlo method. The backward and forward Monte Carlo method involves dividing the ray tracing into two processes: backward tracing and forward tracing. In multidimensional graded index media, the trajectory of a ray is usually a three-dimensional curve. During the transport of a polarization ellipse, the curved ray trajectory will induce geometrical effects and cause the Stokes parameters to change continuously. The solution processes for a non-scattering medium and an anisotropic scattering medium are analysed. We also analyse some parameters that influence the Stokes vector in two-dimensional graded index media. The research shows that the Q component of the Stokes vector cannot be ignored. However, the U and V components of the Stokes vector are very small.
Applying Quantum Monte Carlo to the Electronic Structure Problem
NASA Astrophysics Data System (ADS)
Powell, Andrew D.; Dawes, Richard
2016-06-01
Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC) were tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou Yu, E-mail: yzou@Princeton.ED; Kavousanakis, Michail E., E-mail: mkavousa@Princeton.ED; Kevrekidis, Ioannis G., E-mail: yannis@Princeton.ED
2010-07-20
The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.
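The equation-free idea (estimate the time derivative of a coarse moment from a short fine-scale burst, leap the moment forward, then re-initialize the fine description) can be sketched on a toy ensemble whose mean relaxes exponentially, a stand-in for the constant-number coagulation Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(2)

def micro_burst(particles, dt, n_steps):
    """Fine-scale stochastic simulation (stand-in for a Monte-Carlo
    coagulation step): each particle relaxes toward zero with noise."""
    for _ in range(n_steps):
        particles += -particles * dt \
            + 0.01 * np.sqrt(dt) * rng.standard_normal(particles.size)
    return particles

# Coarse projective integration of the (unavailable) moment equation
# d<m>/dt = f(<m>): short bursts estimate the derivative, then we leap.
particles = np.full(20000, 1.0)
dt, burst, leap = 0.01, 10, 0.4
means = [particles.mean()]
for _ in range(3):
    m0 = particles.mean()
    particles = micro_burst(particles, dt, burst)
    m1 = particles.mean()
    dm_dt = (m1 - m0) / (burst * dt)   # coarse time derivative estimate
    m_proj = m1 + leap * dm_dt         # projective leap of the moment
    particles *= m_proj / m1           # "lift": rescale ensemble to new mean
    means.append(particles.mean())
print([round(m, 2) for m in means])    # [1.0, 0.52, 0.27, 0.14]
```

Each cycle spends only 10 fine steps but advances the coarse variable by the equivalent of 50, which is the source of the advertised acceleration.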
ME(SSY)**2: Monte Carlo Code for Star Cluster Simulations
NASA Astrophysics Data System (ADS)
Freitag, Marc Dewi
2013-02-01
ME(SSY)**2 stands for "Monte-carlo Experiments with Spherically SYmmetric Stellar SYstems." This code simulates the long-term evolution of spherical clusters of stars; it was devised specifically to treat dense galactic nuclei. It is based on the pioneering Monte Carlo scheme proposed by Hénon in the 1970s and includes all relevant physical ingredients (2-body relaxation, stellar mass spectrum, collisions, tidal disruption, etc.). It is basically a Monte Carlo resolution of the Fokker-Planck equation. It can cope with any stellar mass spectrum or velocity distribution. Being a particle-based method, it also allows one to take stellar collisions into account in a very realistic way. This unique code, featuring most important physical processes, allows million-particle simulations spanning a Hubble time in a few CPU days on standard personal computers, and provides a wealth of data rivaled only by N-body simulations. The current version of the software requires the use of routines from "Numerical Recipes in Fortran 77" (http://www.nrbook.com/a/bookfpdf.php).
Data decomposition of Monte Carlo particle transport simulations via tally servers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
Jasra, Ajay; Law, Kody J. H.; Zhou, Yan
2016-01-01
Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work required to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
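The MLMC estimator combines many cheap coarse-level samples with few expensive corrections via the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]. A minimal sketch with an artificial level hierarchy whose bias halves per level (not the nonlocal-equation solver):

```python
import numpy as np

rng = np.random.default_rng(3)

def p_level(level, x):
    """Hypothetical level-l approximation of the quantity of interest;
    its bias term halves with each refinement level."""
    return np.sin(x) + 2.0 ** -(level + 1) * np.cos(x) ** 2

def mlmc(levels, n_samples):
    """Telescoping MLMC estimator, coupling the two solves at each level
    by evaluating them on the same random inputs."""
    est = 0.0
    for l, n in zip(range(levels + 1), n_samples):
        x = rng.uniform(0.0, np.pi, n)
        if l == 0:
            est += float(np.mean(p_level(0, x)))
        else:
            est += float(np.mean(p_level(l, x) - p_level(l - 1, x)))
    return est

# Many cheap samples on the coarse level, few on the expensive fine levels.
est = mlmc(levels=3, n_samples=[100000, 20000, 5000, 1000])
print(est)  # ~ 2/pi + 2**-4 * 0.5 ~ 0.67 (mean of p_level(3, .) over U(0, pi))
```

The coupling makes the correction terms have small variance, which is why so few fine-level samples suffice.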
Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A
2015-06-01
Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity Germanium detector for ⁶⁰Co gamma-ray spectroscopy problems was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity Germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.
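The key point of the multi-primary algorithm, tallying per disintegration so that cascade gammas can sum in one detector pulse, can be illustrated with a toy ⁶⁰Co model using a made-up full-absorption probability (a real simulation would transport both photons through the detector geometry):

```python
import numpy as np

rng = np.random.default_rng(4)

E1, E2 = 1173.2, 1332.5   # keV, the two cascade gammas of Co-60
P_FULL = 0.05             # made-up full-absorption probability per gamma

def simulate(n_decays):
    """Tally pulse heights per disintegration, so both cascade gammas of
    one decay can deposit in the same pulse (a coincident count)."""
    spectrum = {}
    for _ in range(n_decays):
        deposit = 0.0
        for e in (E1, E2):
            if rng.random() < P_FULL:
                deposit += e
        if deposit:
            spectrum[deposit] = spectrum.get(deposit, 0) + 1
    return spectrum

s = simulate(100000)
print(len(s))              # 3: the two photopeaks plus the sum peak
print(s.get(E1 + E2, 0))   # coincident counts, roughly n * P_FULL**2 = 250
```

A per-particle tally would miss the sum peak entirely and so overestimate the photopeak efficiency, which is the bias the paper corrects.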
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasra, Ajay; Law, Kody J. H.; Zhou, Yan
Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work needed to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
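The multilevel idea can be illustrated with a minimal sketch. Assuming a toy forward model (a scalar geometric Brownian motion discretized by Euler steps, standing in for the discretized nonlocal equation; all parameter values are illustrative), the MLMC estimator telescopes corrections between coupled coarse and fine levels:

```python
import math
import random

def gbm_payoff_pair(l, rng, s0=1.0, mu=0.05, sig=0.2, T=1.0):
    """One coupled sample of (P_l, P_{l-1}): Euler paths of a geometric
    Brownian motion with 2**l fine steps, the coarse path reusing the
    summed Brownian increments of the fine path."""
    nf = 2 ** l
    dt = T / nf
    sf, sc, dw_sum = s0, s0, 0.0
    for step in range(nf):
        dw = rng.gauss(0.0, math.sqrt(dt))
        sf += mu * sf * dt + sig * sf * dw
        dw_sum += dw
        if l > 0 and step % 2 == 1:    # one coarse step per two fine steps
            sc += mu * sc * 2 * dt + sig * sc * dw_sum
            dw_sum = 0.0
    return sf, sc

def mlmc_estimate(L=4, n0=40_000, seed=2):
    """Telescoping MLMC sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    with sample counts shrinking on the expensive fine levels."""
    rng = random.Random(seed)
    total = 0.0
    for l in range(L + 1):
        n = max(n0 // 4 ** l, 100)
        corr = 0.0
        for _ in range(n):
            pf, pc = gbm_payoff_pair(l, rng)
            corr += pf - (pc if l > 0 else 0.0)
        total += corr / n
    return total

est = mlmc_estimate()
print(round(est, 3))   # close to E[S_T] = exp(0.05), about 1.051
```

Because the coupled levels share Brownian increments, the correction terms P_l - P_{l-1} have small variance, so most samples can be spent on the cheap coarse level; this is the work reduction the abstract refers to.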
Semi-stochastic full configuration interaction quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Holmes, Adam; Petruzielo, Frank; Khadilkar, Mihir; Changlani, Hitesh; Nightingale, M. P.; Umrigar, C. J.
2012-02-01
In the recently proposed full configuration interaction quantum Monte Carlo (FCIQMC) [1,2], the ground state is projected out stochastically, using a population of walkers each of which represents a basis state in the Hilbert space spanned by Slater determinants. The infamous fermion sign problem manifests itself in the fact that walkers of either sign can be spawned on a given determinant. We propose an improvement on this method in the form of a hybrid stochastic/deterministic technique, which we expect will improve the efficiency of the algorithm by ameliorating the sign problem. We test the method on atoms and molecules, e.g., carbon, the carbon dimer, the N2 molecule, and stretched N2. [1] Fermion Monte Carlo without fixed nodes: a game of life, death and annihilation in Slater determinant space. George Booth, Alex Thom, Ali Alavi. J. Chem. Phys. 131, 054106 (2009). [2] Survival of the fittest: accelerating convergence in full configuration-interaction quantum Monte Carlo. Deidre Cleland, George Booth, and Ali Alavi. J. Chem. Phys. 132, 041103 (2010).
Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams
NASA Astrophysics Data System (ADS)
Willow, Soohaeng Yoo; Hirata, So
2014-01-01
A new, alternative set of interpretation rules for Feynman-Goldstone diagrams in many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integration. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, the 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has a low-order polynomial dependence of the operation cost on system size, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules to within a few mEh after 10⁶ Monte Carlo steps.
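The core sampling recipe, Metropolis sampling of a positive weight function followed by averaging an integrand-to-weight ratio, can be sketched in one dimension (a hedged toy, not the 20-dimensional MP3 integrand; redundant walkers are omitted):

```python
import math
import random

def metropolis_ratio_estimate(n=200_000, step=1.0, seed=4):
    """Estimate <x^2> under the weight w(x) = exp(-x^2): sample x from
    w with a random-walk Metropolis chain, then average the
    integrand-to-weight ratio (here simply x^2).  Exact value: 1/2."""
    rng = random.Random(seed)
    x, acc = 0.0, 0.0
    for _ in range(n):
        y = x + rng.uniform(-step, step)
        if rng.random() < math.exp(x * x - y * y):   # w(y) / w(x)
            x = y
        acc += x * x
    return acc / n

val = metropolis_ratio_estimate()
print(round(val, 3))
```

The weight function concentrates samples where the integrand is large, which is what makes the high-dimensional MP3 integrals tractable in the paper's scheme.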
NASA Astrophysics Data System (ADS)
Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki
2011-08-01
A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurements by polarization-sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator yields erroneous estimates of phase retardation, which degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise properties of phase retardation are investigated in detail by Monte Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte Carlo simulation. This distribution transformation is followed by a mean estimator, a process that provides a significantly better estimate of phase retardation than a standard mean estimator alone. The method is validated both by numerical simulations and by experiments, and its application to in vitro and in vivo biological samples is also demonstrated.
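A minimal sketch of the distribution-transform idea, under an invented folded-Gaussian noise model rather than the actual PS-OCT noise statistics: Monte Carlo builds a lookup table from true retardation to mean measurement, and inverting that (monotone) table debiases the estimate:

```python
import random

def mean_measured(delta, sigma=10.0, n=20_000, rng=None):
    """Mean of a toy folded-noise retardation measurement in [0, 90] deg.
    The folding at the domain boundaries biases the plain mean, loosely
    mimicking the asymmetric PS-OCT noise described in the abstract."""
    rng = rng or random.Random(5)
    tot = 0.0
    for _ in range(n):
        m = abs(delta + rng.gauss(0.0, sigma))
        if m > 90.0:
            m = 180.0 - m
        tot += m
    return tot / n

def build_transform(sigma=10.0):
    """Monte Carlo lookup table: true retardation -> mean measurement."""
    grid = [i * 2.0 for i in range(46)]          # 0, 2, ..., 90 degrees
    return grid, [mean_measured(d, sigma) for d in grid]

def corrected(m_bar, grid, table):
    """Invert the transform by linear interpolation between table entries."""
    for i in range(len(table) - 1):
        if table[i] <= m_bar <= table[i + 1]:
            t = (m_bar - table[i]) / (table[i + 1] - table[i])
            return grid[i] + t * (grid[i + 1] - grid[i])
    return grid[0] if m_bar < table[0] else grid[-1]

grid, table = build_transform()
raw = mean_measured(5.0, rng=random.Random(99))  # biased well above 5 deg
fix = corrected(raw, grid, table)
print(round(raw, 1), round(fix, 1))              # raw is biased; fix lands near 5
```

The real estimator works on measured distributions rather than a single mean, but the structure is the same: simulate the forward noise process, then apply the inverse mapping before averaging.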
Neutrality and evolvability of designed protein sequences
NASA Astrophysics Data System (ADS)
Bhattacherjee, Arnab; Biswas, Parbati
2010-07-01
The effect of foldability on a protein's evolvability is analyzed by a two-pronged approach consisting of a self-consistent mean-field theory and Monte Carlo simulations. Theory and simulation models representing protein sequences with binary patterning of amino acid residues compatible with a particular foldability criterion are used. This generalized foldability criterion is derived using the high-temperature cumulant expansion approximating the free energy of folding. The effect of cumulative point mutations on these designed proteins is studied under neutral conditions. Robustness, the protein's ability to tolerate random point mutations, is determined with a selective pressure on stability (ΔΔG) for the theory-designed sequences, which are found to be more robust than Monte Carlo and mean-field-biased Monte Carlo generated sequences. The results show that this foldability criterion selects viable protein sequences more effectively than the Monte Carlo method, which has a marked effect on how selective pressure shapes the evolutionary sequence space. These observations may impact de novo sequence design and its applications in protein engineering.
Using hybrid implicit Monte Carlo diffusion to simulate gray radiation hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Gentile, Nick
This work describes how to couple a hybrid Implicit Monte Carlo Diffusion (HIMCD) method with a Lagrangian hydrodynamics code to evaluate the coupled radiation hydrodynamics equations. This HIMCD method dynamically applies Implicit Monte Carlo Diffusion (IMD) [1] to regions of a problem that are opaque and diffusive while applying standard Implicit Monte Carlo (IMC) [2] to regions where the diffusion approximation is invalid. We show that this method significantly improves the computational efficiency as compared to a standard IMC/Hydrodynamics solver, when optically thick diffusive material is present, while maintaining accuracy. Two test cases are used to demonstrate the accuracy and performance of HIMCD as compared to IMC and IMD. The first is the Lowrie semi-analytic diffusive shock [3]. The second is a simple test case where the source radiation streams through optically thin material and heats a thick diffusive region of material, causing it to rapidly expand. We found that HIMCD proves to be accurate, robust, and computationally efficient for these test problems.
Entanglement and the fermion sign problem in auxiliary field quantum Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Broecker, Peter; Trebst, Simon
2016-08-01
Quantum Monte Carlo simulations of fermions are hampered by the notorious sign problem whose most striking manifestation is an exponential growth of sampling errors with the number of particles. With the sign problem known to be an NP-hard problem and any generic solution thus highly elusive, the Monte Carlo sampling of interacting many-fermion systems is commonly thought to be restricted to a small class of model systems for which a sign-free basis has been identified. Here we demonstrate that entanglement measures, in particular the so-called Rényi entropies, can intrinsically exhibit a certain robustness against the sign problem in auxiliary-field quantum Monte Carlo approaches and possibly allow for the identification of global ground-state properties via their scaling behavior even in the presence of a strong sign problem. We corroborate these findings via numerical simulations of fermionic quantum phase transitions of spinless fermions on the honeycomb lattice at and below half filling.
VARIAN CLINAC 6 MeV Photon Spectra Unfolding using a Monte Carlo Meshed Model
NASA Astrophysics Data System (ADS)
Morató, S.; Juste, B.; Miró, R.; Verdú, G.
2017-09-01
The energy spectrum is the best descriptive function for determining the photon beam quality of a Medical Linear Accelerator (LinAc). The use of realistic photon spectra in Monte Carlo simulations is of great importance for obtaining precise dose calculations in Radiotherapy Treatment Planning (RTP). Reconstruction of the photon spectra emitted by medical accelerators from measured depth dose distributions in a water cube is an important tool for commissioning a Monte Carlo treatment planning system. In this regard, the reconstruction problem is an inverse radiation transport problem, which is ill-conditioned, so its solution may become unstable under small perturbations in the input data. This paper presents a more stable spectral reconstruction method which can be used to provide an independent confirmation of source models for a given machine without any prior knowledge of the spectral distribution. The Monte Carlo models used in this work are built with unstructured meshes to simulate the linear accelerator head geometry realistically.
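Ill-conditioned reconstructions of this kind are typically stabilized by regularization. The sketch below (a toy 3-bin response matrix and a Tikhonov penalty, not the paper's meshed-model method; all numbers are illustrative) shows the normal-equations form of such a stabilized unfolding:

```python
def solve3(a, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(3):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [m[i][3] / m[i][i] for i in range(3)]

def unfold(resp, meas, lam):
    """Tikhonov-regularised unfolding: minimise |R s - d|^2 + lam |s|^2
    by solving the normal equations (R^T R + lam I) s = R^T d."""
    ata = [[sum(resp[k][i] * resp[k][j] for k in range(3))
            + (lam if i == j else 0.0) for j in range(3)] for i in range(3)]
    atd = [sum(resp[k][i] * meas[k] for k in range(3)) for i in range(3)]
    return solve3(ata, atd)

# Toy response: column j is the detector response to true-energy bin j;
# partial deposits feed counts from high bins into lower measured bins.
R = [[0.70, 0.20, 0.10],
     [0.00, 0.65, 0.25],
     [0.00, 0.00, 0.60]]
true_s = [100.0, 50.0, 25.0]
d = [sum(R[i][j] * true_s[j] for j in range(3)) for i in range(3)]
noisy = [d[0] + 3.0, d[1] - 3.0, d[2] + 3.0]   # perturbed "measurement"

s_reg = unfold(R, noisy, lam=0.01)
print([round(x, 1) for x in s_reg])   # close to the true spectrum [100, 50, 25]
```

The penalty term lam keeps the solution from amplifying measurement noise, which is the instability the abstract attributes to the unregularized inverse problem.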
SU-F-T-657: In-Room Neutron Dose From High Energy Photon Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christ, D; Ding, G
Purpose: To estimate neutron dose inside the treatment room from photodisintegration events in high energy photon beams using Monte Carlo simulations and experimental measurements. Methods: The Monte Carlo code MCNP6 was used for the simulations. An Eberline ESP-1 Smart Portable Neutron Detector was used to measure neutron dose. A water phantom was centered at isocenter on the treatment couch, and the detector was placed near the phantom. A Varian 2100EX linear accelerator delivered an 18MV open field photon beam to the phantom at 400MU/min, and a camera captured the detector readings. The experimental setup was modeled in the Monte Carlo simulation. The source was modeled for two extreme cases: a) hemispherical photon source emitting from the target and b) cone source with an angle of the primary collimator cone. The model includes the target, primary collimator, flattening filter, secondary collimators, water phantom, detector and concrete walls. Energy deposition tallies were measured for neutrons in the detector and for photons at the center of the phantom. Results: For an 18MV beam with an open 10cm by 10cm field and the gantry at 180°, the Monte Carlo simulations predict the neutron dose in the detector to be 0.11% of the photon dose in the water phantom for case a) and 0.01% for case b). The measured neutron dose is 0.04% of the photon dose. Considering the range of neutron dose predicted by Monte Carlo simulations, the calculated results are in good agreement with measurements. Conclusion: We calculated in-room neutron dose by using Monte Carlo techniques, and the predicted neutron dose is confirmed by experimental measurements. If we remodel the source as an electron beam hitting the target for a more accurate representation of the bremsstrahlung fluence, it is feasible that the Monte Carlo simulations can be used to help in shielding designs.
2006-05-31
dynamics (MD) and kinetic Monte Carlo (KMC) procedures. In 2D surface modeling our calculations project speedups of 9 orders of magnitude at 300 degrees...programming is used to perform customized statistical mechanics by bridging the different time scales of MD and KMC quickly and well. Speedups in
Fast quantum Monte Carlo on a GPU
NASA Astrophysics Data System (ADS)
Lutsyshyn, Y.
2015-02-01
We present a scheme for the parallelization of the quantum Monte Carlo method on graphical processing units, focusing on variational Monte Carlo simulation of bosonic systems. We use asynchronous execution schemes with shared memory persistence and obtain excellent utilization of the accelerator. The CUDA code is provided along with a package that simulates liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including the Fermi GTX560 and M2090, and the Kepler-architecture K20. Special optimizations were developed for the Kepler cards, including placement of data structures in the register space of the Kepler GPUs; these Kepler-specific optimizations are discussed.
Heterogeneous Hardware Parallelism Review of the IN2P3 2016 Computing School
NASA Astrophysics Data System (ADS)
Lafage, Vincent
2017-11-01
Parallel and hybrid Monte Carlo computation. The Monte Carlo method is the main workhorse for the computation of particle physics observables. This paper provides an overview of various HPC technologies that can be used today: multicore (OpenMP, HPX), manycore (OpenCL). The rewrite of a twenty-year-old Fortran 77 Monte Carlo code will illustrate the various programming paradigms in use beyond the language implementation. The problem of parallel random number generation will be addressed. We give a short report of the one-week school dedicated to these recent approaches, which took place at École Polytechnique in May 2016.
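One common answer to the parallel random number problem is to give each worker an independently derived seed rather than consecutive integers. A minimal sketch (the hash-based seeding scheme here is an illustrative choice, not the school's prescribed solution):

```python
import hashlib
import random

def stream_rng(master_seed, stream_id):
    """One generator per parallel stream, seeded by hashing
    (master_seed, stream_id).  This avoids the classic pitfall of
    seeding workers with seed, seed+1, ... which can yield correlated
    sequences for some generators."""
    h = hashlib.sha256(f"{master_seed}:{stream_id}".encode()).digest()
    return random.Random(int.from_bytes(h[:8], "big"))

def worker_hits(rng, n):
    """Count darts landing inside the unit quarter-circle."""
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)

# Monte Carlo estimate of pi, split across four independent "workers".
n_per = 100_000
hits = sum(worker_hits(stream_rng(2016, w), n_per) for w in range(4))
pi_est = 4.0 * hits / (4 * n_per)
print(round(pi_est, 2))   # close to pi
```

Because each stream is reproducible from (master_seed, stream_id) alone, the same partial results are obtained no matter how the workers are scheduled, which matters for debugging parallel Monte Carlo codes.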
Discrete ordinates-Monte Carlo coupling: A comparison of techniques in NERVA radiation analysis
NASA Technical Reports Server (NTRS)
Lindstrom, D. G.; Normand, E.; Wilcox, A. D.
1972-01-01
In the radiation analysis of the NERVA nuclear rocket system, two-dimensional discrete ordinates calculations are sufficient to provide detail in the pressure vessel and reactor assembly. Other parts of the system, however, require three-dimensional Monte Carlo analyses. To use these two methods in a single analysis, a means of coupling was developed whereby the results of a discrete ordinates calculation can be used to produce source data for a Monte Carlo calculation. Several techniques for producing source detail were investigated. Results of calculations on the NERVA system are compared, and the limitations and advantages of the coupling techniques are discussed.
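The coupling step can be sketched as follows: the discrete ordinates solution supplies per-direction weights at a coupling surface, from which the Monte Carlo source samples its starting directions. A toy sketch with an invented four-direction quadrature (not the NERVA source treatment itself):

```python
import bisect
import random

def make_source_sampler(directions, weights):
    """Turn discrete-ordinates angular fluxes at a coupling surface into
    a Monte Carlo source: sample a quadrature direction with probability
    proportional to its outgoing weight, via the cumulative distribution."""
    total = sum(weights)
    cdf, run = [], 0.0
    for w in weights:
        run += w / total
        cdf.append(run)
    def sample(rng):
        return directions[bisect.bisect_left(cdf, rng.random())]
    return sample

# Illustrative half-range quadrature: direction cosines and their fluxes.
dirs = [0.2, 0.5, 0.8, 0.95]
flux = [1.0, 2.0, 4.0, 3.0]
sample = make_source_sampler(dirs, flux)

rng = random.Random(7)
counts = {d: 0 for d in dirs}
for _ in range(100_000):
    counts[sample(rng)] += 1
# sampled frequencies approach the normalized weights 0.1/0.2/0.4/0.3
print({d: round(c / 100_000, 2) for d, c in counts.items()})
```

In practice the coupled source also carries energy and spatial detail per surface cell, but the sampling machinery is the same cumulative-distribution lookup.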
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-power wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing (FWM) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability density function (PDF) of the decision variable of a receiver limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
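The underlying idea, bias the sampler toward the rare region and correct with weights, can be shown with a simpler exponential-tilting example (plain importance sampling rather than the iterative multicanonical scheme; the threshold and sample counts are illustrative):

```python
import math
import random

def tail_prob_tilted(a=4.0, n=100_000, seed=11):
    """Estimate P(X > a) for X ~ N(0,1) by sampling from the shifted
    density N(a,1) and reweighting by the density ratio.  Plain Monte
    Carlo would almost never see a sample beyond 4 standard deviations."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        y = rng.gauss(a, 1.0)
        if y > a:
            acc += math.exp(-a * y + 0.5 * a * a)  # phi(y) / phi(y - a)
    return acc / n

est = tail_prob_tilted()
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))      # about 3.2e-5
print(f"{est:.2e} vs exact {exact:.2e}")
```

Multicanonical Monte Carlo automates the construction of such a bias iteratively, flattening the sampled histogram so that error probabilities far below the reach of direct simulation become estimable, which is the speedup the abstract reports.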
Bold Diagrammatic Monte Carlo for Fermionic and Fermionized Systems
NASA Astrophysics Data System (ADS)
Svistunov, Boris
2013-03-01
In three different fermionic cases (the repulsive Hubbard model, resonant fermions, and fermionized spin-1/2 systems on a triangular lattice) we observe the phenomenon of sign blessing: the Feynman diagrammatic series features a finite convergence radius despite the factorial growth of the number of diagrams with diagram order. The bold diagrammatic Monte Carlo technique allows us to sample millions of skeleton Feynman diagrams. With the universal fermionization trick we can fermionize essentially any (bosonic, spin, mixed, etc.) lattice system. The combination of fermionization and bold diagrammatic Monte Carlo yields a universal first-principles approach to strongly correlated lattice systems, provided sign blessing is a generic fermionic phenomenon. Supported by NSF and DARPA.
The Impact of Monte Carlo Dose Calculations on Intensity-Modulated Radiation Therapy
NASA Astrophysics Data System (ADS)
Siebers, J. V.; Keall, P. J.; Mohan, R.
The effect of dose calculation accuracy on IMRT was studied by comparing different dose calculation algorithms. A head-and-neck IMRT plan was optimized using a superposition dose calculation algorithm. Dose was recomputed for the optimized plan using both Monte Carlo and pencil beam dose calculation algorithms to generate patient and phantom dose distributions. Tumor control probabilities (TCP) and normal tissue complication probabilities (NTCP) were computed to estimate the plan outcome. For the treatment plan studied, Monte Carlo best reproduced the phantom dose measurements, the Monte Carlo TCP was slightly lower than the superposition and pencil beam results, and the NTCP values differed little.
Mayers, Matthew Z.; Berkelbach, Timothy C.; Hybertsen, Mark S.; ...
2015-10-09
Ground-state diffusion Monte Carlo is used to investigate the binding energies and intercarrier radial probability distributions of excitons, trions, and biexcitons in a variety of two-dimensional transition-metal dichalcogenide materials. We compare these results to approximate variational calculations, as well as to analogous Monte Carlo calculations performed with simplified carrier interaction potentials. Our results highlight the successes and failures of approximate approaches as well as the physical features that determine the stability of small carrier complexes in monolayer transition-metal dichalcogenide materials. In conclusion, we discuss points of agreement and disagreement with recent experiments.
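The basic machinery of ground-state diffusion Monte Carlo, diffusion plus branching against a reference energy, can be sketched on a one-dimensional harmonic oscillator (a toy without importance sampling, unlike the production calculations in the abstract; all parameters are illustrative):

```python
import math
import random

def dmc_harmonic(n_target=400, dt=0.05, steps=800, burn=300, seed=3):
    """Toy diffusion Monte Carlo for V(x) = x^2/2 (exact E0 = 0.5).

    Walkers diffuse by sqrt(dt) Gaussian steps and branch with weight
    exp(-(V - E_ref) * dt); E_ref is nudged to hold the population near
    n_target.  The energy is estimated as the time-averaged potential,
    which equals E0 for the harmonic oscillator by the virial theorem.
    """
    rng = random.Random(seed)
    walkers = [0.0] * n_target
    e_ref, acc, n_acc = 0.0, 0.0, 0
    for step in range(steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))
            w = math.exp(-(0.5 * x * x - e_ref) * dt)
            for _ in range(min(int(w + rng.random()), 3)):  # stochastic branching
                new.append(x)
        walkers = new or [0.0]
        e_ref -= 0.1 * math.log(len(walkers) / n_target) / dt  # population control
        if step >= burn:
            acc += sum(0.5 * x * x for x in walkers) / len(walkers)
            n_acc += 1
    return acc / n_acc

e0 = dmc_harmonic()
print(round(e0, 2))   # close to the exact ground-state energy 0.5
```

The excitonic calculations in the abstract use the same projector machinery, with the few-particle Coulomb Hamiltonian in place of this toy potential and importance sampling to control the variance.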
Geodesic Monte Carlo on Embedded Manifolds
Byrne, Simon; Girolami, Mark
2013-01-01
Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
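A stripped-down version of the idea, geodesic proposals on the sphere inside a plain Metropolis accept/reject step (a simplification of the paper's Hamiltonian-style geodesic integrators; the von Mises-Fisher target is a standard directional-statistics example):

```python
import math
import random

def geodesic_step(x, theta, rng):
    """Propose a move along a great circle: draw a random unit tangent
    vector at x and follow the geodesic through x for arc length theta."""
    v = [rng.gauss(0.0, 1.0) for _ in range(3)]
    dot = sum(a * b for a, b in zip(v, x))
    v = [a - dot * b for a, b in zip(v, x)]   # project onto the tangent plane
    nrm = math.sqrt(sum(a * a for a in v))
    return [math.cos(theta) * a + math.sin(theta) * b / nrm
            for a, b in zip(x, v)]

def vmf_mean_resultant(kappa=2.0, n=200_000, theta=0.5, seed=8):
    """Metropolis chain on the unit sphere with geodesic proposals,
    targeting the von Mises-Fisher density ~ exp(kappa * mu . x) with
    mu the north pole.  Returns the estimated mean of mu . x."""
    rng = random.Random(seed)
    x = [0.0, 0.0, 1.0]
    acc = 0.0
    for _ in range(n):
        y = geodesic_step(x, theta, rng)
        if rng.random() < math.exp(kappa * (y[2] - x[2])):
            x = y
        acc += x[2]
    return acc / n

m = vmf_mean_resultant()
exact = 1.0 / math.tanh(2.0) - 0.5   # coth(kappa) - 1/kappa for kappa = 2
print(round(m, 2), round(exact, 2))
```

Moving along geodesics keeps every proposal exactly on the manifold, so no projection or rejection for leaving the support is needed; the fixed-arc-length proposal is symmetric by the rotational invariance of the sphere, which is what makes the plain Metropolis ratio valid.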