Science.gov

Sample records for quasi-monte carlo integration

  1. Neutron transport calculations using Quasi-Monte Carlo methods

    SciTech Connect

    Moskowitz, B.S.

    1997-07-01

    This paper examines the use of quasirandom sequences of points in place of pseudorandom points in Monte Carlo neutron transport calculations. For two simple demonstration problems, the root mean square error, computed over a set of repeated runs, is found to be significantly less when quasirandom sequences are used ("Quasi-Monte Carlo Method") than when a standard Monte Carlo calculation is performed using only pseudorandom points.
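
    The contrast the abstract describes is easy to reproduce outside of transport codes. Below is a minimal sketch (Python with NumPy assumed; the smooth 2-D integrand, sample size, and run count are illustrative choices, not the paper's demonstration problems) comparing the repeated-run RMS error of pseudorandom Monte Carlo against a randomly shifted Halton rule:

```python
# Illustrative sketch (not the paper's transport code): compare the RMS error,
# over repeated runs, of plain pseudorandom Monte Carlo against a quasirandom
# (Halton) rule on a toy 2-D integral.
import numpy as np

def halton(n, dims=2, bases=(2, 3)):
    """First n points of a Halton sequence via the radical inverse."""
    pts = np.empty((n, dims))
    for d, b in enumerate(bases[:dims]):
        for i in range(n):
            f, x, k = 1.0, 0.0, i + 1
            while k > 0:
                f /= b
                x += f * (k % b)
                k //= b
            pts[i, d] = x
    return pts

def f(u):  # smooth test integrand on [0,1]^2; exact integral is (1 - cos(1))**2
    return np.sin(u[:, 0]) * np.sin(u[:, 1])

exact = (1 - np.cos(1.0)) ** 2
n, runs, rng = 1024, 50, np.random.default_rng(0)

mc_err = [f(rng.random((n, 2))).mean() - exact for _ in range(runs)]
# A fixed Halton rule is deterministic; random-shift it (Cranley-Patterson)
# so that the RMS over repeated runs is meaningful.
qmc_err = [f((halton(n) + rng.random(2)) % 1.0).mean() - exact for _ in range(runs)]

print("RMS error, pseudorandom MC :", np.sqrt(np.mean(np.square(mc_err))))
print("RMS error, quasirandom QMC :", np.sqrt(np.mean(np.square(qmc_err))))
```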

  2. Precision measurement of the top quark mass in the lepton + jets channel using a matrix element method with Quasi-Monte Carlo integration

    SciTech Connect

    Lujan, Paul Joseph

    2009-12-01

    This thesis presents a measurement of the top quark mass obtained from p$\bar{p}$ collisions at √s = 1.96 TeV at the Fermilab Tevatron using the CDF II detector. The measurement uses a matrix element integration method to calculate a t$\bar{t}$ likelihood, employing quasi-Monte Carlo integration, which enables us to take into account effects due to finite detector angular resolution and quark mass effects. We calculate a t$\bar{t}$ likelihood as a 2-D function of the top pole mass mt and ΔJES, where ΔJES parameterizes the uncertainty in our knowledge of the jet energy scale; it is a shift applied to all jet energies in units of the jet-dependent systematic error. By introducing ΔJES into the likelihood, we can use the information contained in W boson decays to constrain ΔJES and reduce the error due to this uncertainty. We use a neural network discriminant to identify events likely to be background, and apply a cut on the peak value of individual event likelihoods to reduce the effect of badly reconstructed events. This measurement uses a total of 4.3 fb⁻¹ of integrated luminosity, requiring events with a lepton, large missing transverse energy, and exactly four high-energy jets in the pseudorapidity range |η| < 2.0, of which at least one must be tagged as coming from a b quark. In total, we observe 738 events before and 630 events after applying the likelihood cut, and measure mt = 172.6 ± 0.9 (stat.) ± 0.7 (JES) ± 1.1 (syst.) GeV/c², or mt = 172.6 ± 1.6 (tot.) GeV/c².

  3. A quasi-Monte Carlo approach to efficient 3-D migration: Field data test

    SciTech Connect

    Zhou, C.; Chen, J.; Schuster, G.T.; Smith, B.A.

    1999-10-01

    The quasi-Monte Carlo migration algorithm is applied to a 3-D seismic data set from West Texas. The field data were finely sampled at approximately 220-ft intervals in the in-line direction but were sampled coarsely at approximately 1,320-ft intervals in the cross-line direction. The traces at the quasi-Monte Carlo points were obtained by an interpolation of the regularly sampled traces. The subsampled traces at the quasi-Monte Carlo points were migrated, and the resulting images were compared to those obtained by migrating both regular and uniform grids of traces. Results show that, consistent with theory, the quasi-Monte Carlo migration images contain fewer migration aliasing artifacts than the regular or uniform grid images. For these data, quasi-Monte Carlo migration apparently requires fewer than half the number of the traces needed by regular-grid or uniform-grid migration to give images of comparable quality. These results agree with related migration tests on synthetic data computed for point scatterer models. Results suggest that better migration images might result from data recorded on a coarse quasi-random grid compared to regular or uniform coarse grids.

  4. Quasi-Monte Carlo methods for lattice systems: A first look

    NASA Astrophysics Data System (ADS)

    Jansen, K.; Leovey, H.; Ammon, A.; Griewank, A.; Müller-Preussker, M.

    2014-03-01

    We investigate the applicability of quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^{-1/2}, where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this behavior for certain problems to N^{-1}, or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
    Catalogue identifier: AERJ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERJ_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: GNU General Public Licence version 3
    No. of lines in distributed program, including test data, etc.: 67759
    No. of bytes in distributed program, including test data, etc.: 2165365
    Distribution format: tar.gz
    Programming language: C and C++
    Computer: PC
    Operating system: Tested on GNU/Linux; should be portable to other operating systems with minimal effort
    Has the code been vectorized or parallelized?: No
    RAM: Memory usage scales directly with the number of samples and dimensions: bytes used = "number of samples" × "number of dimensions" × 8 bytes (double precision)
    Classification: 4.13, 11.5, 23
    External routines: FFTW 3 library (http://www.fftw.org)
    Nature of problem: Certain physical models formulated as a quantum field theory through the Feynman path integral, such as quantum chromodynamics, require a non-perturbative treatment of the path integral. The only known approach that achieves this is the lattice regularization. In this formulation the path integral is discretized to a finite, but very high dimensional integral. So far only Monte
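
    A quick way to see the N^{-1/2} versus N^{-1} contrast is to fit the slope of log(error) against log(N). The sketch below does this for a smooth 1-D integrand, an illustrative stand-in for the oscillator observables (Python with NumPy assumed):

```python
# Sketch of the error-scaling contrast the abstract describes: fit the slope of
# log(error) vs log(N) for pseudorandom MC and a randomly shifted van der
# Corput rule on a smooth 1-D integral. Illustrative only.
import numpy as np

def vdc(n, base=2):
    """First n points of the van der Corput sequence (radical inverse in `base`)."""
    out = np.empty(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        out[i] = x
    return out

f = lambda x: np.exp(-x * x)            # smooth integrand on [0, 1]
exact = 0.7468241328124271              # = sqrt(pi)/2 * erf(1)

rng = np.random.default_rng(1)
Ns = 2 ** np.arange(6, 15)
mc, qmc = [], []
for n in Ns:
    # average absolute error over a few independent replicas
    mc.append(np.mean([abs(f(rng.random(n)).mean() - exact) for _ in range(20)]))
    pts = vdc(n)
    qmc.append(np.mean([abs(f((pts + rng.random()) % 1.0).mean() - exact)
                        for _ in range(20)]))

slope = lambda e: np.polyfit(np.log(Ns), np.log(e), 1)[0]
print("MC  error slope ~", slope(np.array(mc)))   # expect about -0.5
print("QMC error slope ~", slope(np.array(qmc)))  # expect about -1
```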

  5. [Study of Determination of Oil Mixture Components Content Based on Quasi-Monte Carlo Method].

    PubMed

    Wang, Yu-tian; Xu, Jing; Liu, Xiao-fei; Chen, Meng-han; Wang, Shi-tao

    2015-05-01

    Gasoline, kerosene, and diesel are produced from crude oil over different distillation ranges. The boiling range of gasoline is 35~205 °C, that of kerosene is 140~250 °C, and that of diesel is 180~370 °C. At the same time, the carbon chain lengths of the different mineral oils differ: gasoline lies within the range C7 to C11, kerosene within C12 to C15, and diesel within C15 to C18. The recognition and quantitative measurement of the three kinds of mineral oil is based on the different fluorescence spectra formed by their different carbon number distribution characteristics. Mineral oil pollution occurs frequently, so monitoring mineral oil content in the ocean is very important. A new method for determining the component content of a mineral oil mixture with overlapping spectra is proposed, based on calculating the characteristic peak power integration of the three-dimensional fluorescence spectrum using the quasi-Monte Carlo method, combined with an optimization algorithm to select the optimal number of characteristic peaks and the range of the integration region, and solving the resulting nonlinear equations with the BFGS (Broyden-Fletcher-Goldfarb-Shanno rank-two update) method. The peak power accumulated over the chosen points in the selected area is sensitive to small changes of the fluorescence spectral line, so the measurement of small changes of component content is sensitive. At the same time, compared with single-point measurement, the sensitivity is improved by the reduced influence of random error due to the selection of many points. Three-dimensional fluorescence spectra and fluorescence contour spectra of the single mineral oils and of the mixture are measured, taking kerosene, diesel and gasoline as research objects, with each single mineral oil regarded as a whole rather than as its individual components. Six characteristic peaks are

  6. Efficient computation of transient solutions of the chemical master equation based on uniformization and quasi-Monte Carlo.

    PubMed

    Hellander, Andreas

    2008-04-21

    A quasi-Monte Carlo method for the simulation of discrete time Markov chains is applied to the simulation of biochemical reaction networks. The continuous process is formulated as a discrete chain subordinate to a Poisson process using the method of uniformization. It is shown that a substantial reduction of the number of trajectories that is required for an accurate estimation of the probability density functions (PDFs) can be achieved with this technique. The method is applied to the simulation of two model problems. Although the technique employed here does not address the typical stiffness of biochemical reaction networks, it is useful when computing the PDF by replication. The method can also be used in conjunction with hybrid methods that reduce the stiffness. PMID:18433192
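
    As a concrete illustration of uniformization, the sketch below simulates a birth-death master equation (production at rate k, degradation at rate g·n) as a discrete chain subordinate to a Poisson process. The rate constants and state cap are invented for the example, and plain pseudorandom numbers stand in for the quasirandom sequences of the paper:

```python
# Minimal sketch of uniformization for a birth-death chemical master equation:
# the continuous-time chain is made subordinate to a Poisson process of rate
# LAM >= max total propensity, and the embedded discrete chain (with self-loop
# "padding" steps) is then sampled.
import numpy as np

k, g, nmax, T = 5.0, 0.5, 60, 4.0       # illustrative rates, state cap, horizon
LAM = k + g * nmax                      # uniformization rate, bounds all propensities

def sample_state(rng):
    n = 0                               # initial copy number
    steps = rng.poisson(LAM * T)        # jumps of the subordinating Poisson process
    for _ in range(steps):
        u = rng.random() * LAM
        if u < k:                       # production event
            if n < nmax:                # cap is an artificial truncation
                n += 1
        elif u < k + g * n:             # degradation event
            n -= 1
        # otherwise: self-loop (no reaction), the uniformization padding step
    return n

rng = np.random.default_rng(2)
samples = np.array([sample_state(rng) for _ in range(20000)])
hist = np.bincount(samples, minlength=nmax + 1) / samples.size
print("estimated P(n) near the mean k/g = 10:", hist[8:13])
```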

  7. Quasi Monte Carlo-based Isotropic Distribution of Gradient Directions for Improved Reconstruction Quality of 3D EPR Imaging

    PubMed Central

    Ahmad, Rizwan; Deng, Yuanmu; Vikram, Deepti S.; Clymer, Bradley; Srinivasan, Parthasarathy; Zweier, Jay L.; Kuppusamy, Periannan

    2007-01-01

    In continuous wave (CW) electron paramagnetic resonance imaging (EPRI), a high-quality reconstructed image along with fast and reliable data acquisition is highly desirable for many biological applications. An accurate representation of a uniform distribution of projection data is necessary to ensure high reconstruction quality. Current techniques for data acquisition suffer from nonuniformities or local anisotropies in the distribution of projection data and provide a poor approximation of a truly uniform and isotropic distribution. In this work, we have implemented a technique based on the quasi-Monte Carlo method to acquire projections with a more uniform and isotropic distribution of data over a 3D acquisition space. The proposed technique exhibits improvements in reconstruction quality in terms of both mean-square error and visual judgment. The effectiveness of the suggested technique is demonstrated using computer simulations and 3D EPRI experiments. The technique is robust and exhibits consistent performance for different object configurations and orientations. PMID:17095271
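
    One standard low-discrepancy construction for such direction sets pushes a 2-D Halton sequence through the equal-area map cos θ = 1 − 2u, φ = 2πv. The sketch below (Python with NumPy assumed) shows the idea; it is a generic construction and not necessarily the exact scheme of the paper:

```python
# Sketch: quasi-uniformly distributed directions on the sphere from a 2-D
# Halton sequence and the area-preserving map (u, v) -> (theta, phi).
import numpy as np

def radical_inverse(i, base):
    f, x = 1.0, 0.0
    while i > 0:
        f /= base
        x += f * (i % base)
        i //= base
    return x

def qmc_directions(n):
    dirs = np.empty((n, 3))
    for i in range(n):
        u = radical_inverse(i + 1, 2)       # Halton component, base 2
        v = radical_inverse(i + 1, 3)       # Halton component, base 3
        ct = 1.0 - 2.0 * u                  # cos(theta), uniform in [-1, 1]
        st = np.sqrt(max(0.0, 1.0 - ct * ct))
        phi = 2.0 * np.pi * v
        dirs[i] = (st * np.cos(phi), st * np.sin(phi), ct)
    return dirs

g = qmc_directions(1000)
print("mean direction (should be near 0):", g.mean(axis=0))
```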

  8. Quasi-Monte Carlo based global uncertainty and sensitivity analysis in modeling free product migration and recovery from petroleum-contaminated aquifers.

    PubMed

    He, Li; Huang, Gordon; Lu, Hongwei; Wang, Shuo; Xu, Yi

    2012-06-15

    This paper presents a global uncertainty and sensitivity analysis (GUSA) framework based on global sensitivity analysis (GSA) and generalized likelihood uncertainty estimation (GLUE) methods. Quasi-Monte Carlo (QMC) sampling is employed by GUSA to obtain realizations of uncertain parameters, which are then input to the simulation model for analysis. Compared to GLUE, GUSA can not only evaluate the global sensitivity and uncertainty of modeling parameter sets, but also quantify the uncertainty in modeling prediction sets. Moreover, another advantage of GUSA lies in its reduced computational effort, since globally insensitive parameters can be identified and removed from the uncertain-parameter set. GUSA is applied to a practical petroleum-contaminated site in Canada to investigate free product migration and recovery processes under aquifer remediation operations. Results from the global sensitivity analysis show that (1) initial free product thickness has the most significant impact on total recovery volume but the least impact on residual free product thickness and recovery rate; and (2) total recovery volume and recovery rate are sensitive to residual LNAPL phase saturations and soil porosity. Results from the uncertainty predictions reveal that the residual thickness would remain high and almost unchanged after about half a year of the skimmer-well scheme; the rather high residual thickness (0.73-1.56 m after 20 years) indicates that natural attenuation would not be suitable for remediation. The largest total recovery volume would come from water pumping, followed by vacuum pumping, and then the skimmer; the recovery rates of all three schemes would rapidly decrease after 2 years (to less than 0.05 m³/day), so short-term remediation is not suggested.

  9. Monte Carlo methods for multidimensional integration for European option pricing

    NASA Astrophysics Data System (ADS)

    Todorov, V.; Dimov, I. T.

    2016-10-01

    In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European-style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; the average of independent samples of this random variable is then used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk-neutral valuation formula. Then, with an appropriate change of variables, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule against one of the best low-discrepancy sequences, the Sobol sequence, for numerical integration. The quasi-Monte Carlo methods are also compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is a question of interest which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
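
    The unit-hypercube formulation can be sketched for a European call under Black-Scholes dynamics: d Gaussian increments, each obtained from a uniform via the inverse normal CDF, make the price a d-dimensional integral over [0,1]^d. The parameters below, and the use of a Halton rule in place of the paper's Sobol and lattice rules, are illustrative:

```python
# Sketch: European call price as a d-dimensional integral over [0,1]^d,
# estimated by crude MC and by a Halton QMC rule. Parameters are illustrative.
import numpy as np
from statistics import NormalDist

S0, K, r, sigma, T, d = 100.0, 100.0, 0.05, 0.2, 1.0, 8
dt = T / d
inv = np.vectorize(NormalDist().inv_cdf)

def payoff_from_uniforms(U):
    """Map a batch of points in [0,1]^d to discounted call payoffs."""
    Z = inv(np.clip(U, 1e-12, 1 - 1e-12))            # normal increments
    logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt) * Z, axis=1)
    ST = np.exp(logS[:, -1])
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

def halton(n, dims):
    bases = [2, 3, 5, 7, 11, 13, 17, 19][:dims]      # first 8 primes, enough for d = 8
    pts = np.empty((n, dims))
    for j, b in enumerate(bases):
        for i in range(n):
            f, x, k = 1.0, 0.0, i + 1
            while k > 0:
                f /= b
                x += f * (k % b)
                k //= b
            pts[i, j] = x
    return pts

n, rng = 4096, np.random.default_rng(3)
print("crude MC  :", payoff_from_uniforms(rng.random((n, d))).mean())
print("Halton QMC:", payoff_from_uniforms(halton(n, d)).mean())
print("Black-Scholes closed form is ~10.45 for these parameters")
```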

  10. Path Integral Monte Carlo Methods for Fermions

    NASA Astrophysics Data System (ADS)

    Brown, Ethan; DuBois, Jonathan; Ceperley, David

    2014-03-01

    In general, quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not known a priori unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First, we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with a discussion concerning the extension to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10-ERD-058, and the Lawrence Scholar program.

  11. A Primer in Monte Carlo Integration Using Mathcad

    ERIC Educational Resources Information Center

    Hoyer, Chad E.; Kegerreis, Jeb S.

    2013-01-01

    The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…
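
    A Python analogue of the kind of exercise such a primer contains (the Mathcad document itself lives in the paper's Supporting Information; the integrand here is an arbitrary choice):

```python
# Mean-value Monte Carlo estimation of a 1-D definite integral with its
# one-sigma statistical error -- the basic exercise of an MC-integration primer.
import numpy as np

rng = np.random.default_rng(4)
a, b, n = 0.0, np.pi, 10000
x = rng.uniform(a, b, n)
fx = np.sin(x)                               # integrand samples

estimate = (b - a) * fx.mean()               # estimate of the integral of sin on [0, pi]
stderr = (b - a) * fx.std(ddof=1) / np.sqrt(n)
print(f"integral ~ {estimate:.4f} +/- {stderr:.4f} (exact: 2)")
```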

  12. Path integral hybrid Monte Carlo algorithm for correlated Bose fluids.

    PubMed

    Miura, Shinichi; Tanaka, Junji

    2004-02-01

    A path integral hybrid Monte Carlo (PIHMC) algorithm for strongly correlated Bose fluids has been developed. This is an extended version of our previous method [S. Miura and S. Okazaki, Chem. Phys. Lett. 308, 115 (1999)] applied to a model system consisting of noninteracting bosons. Our PIHMC method for correlated Bose fluids consists of two trial moves: sampling the path variables describing the system coordinates along imaginary time, and permuting particle labels, which sets a boundary condition with respect to imaginary time. The path variables for a given permutation are generated by a hybrid Monte Carlo method based on path integral molecular dynamics techniques. Equations of motion for the path variables are formulated on the basis of a collective coordinate representation of the path, the staging variables, to enhance the sampling efficiency. The permutation sampling needed to satisfy Bose-Einstein statistics is performed using the multilevel Metropolis method developed by Ceperley and Pollock [Phys. Rev. Lett. 56, 351 (1986)]. Our PIHMC method has been successfully applied to liquid helium-4 at a state point where the system is in the superfluid phase. Parameters determining the sampling efficiency are optimized in such a way that correlation among successive PIHMC steps is minimized. PMID:15268354

  13. Path integral Monte Carlo on a lattice. II. Bound states

    NASA Astrophysics Data System (ADS)

    O'Callaghan, Mark; Miller, Bruce N.

    2016-07-01

    The equilibrium properties of a single quantum particle (qp) interacting with a classical gas, over a wide range of temperatures that explore the system's behavior in the classical as well as the quantum regime, are investigated. Both the qp and the atoms are restricted to sites on a one-dimensional lattice. A path integral formalism developed within the context of the canonical ensemble is utilized, in which the qp is represented by a closed, variable-step random walk on the lattice. Monte Carlo methods are employed to determine the system's properties. To test the usefulness of the path integral formalism, the Metropolis algorithm is employed to determine the equilibrium properties of the qp in the context of a square well potential, forcing the qp to occupy bound states. We consider a one-dimensional square well potential where every site on the lattice is occupied by an atom with an on-site potential, except for a contiguous set of sites of various lengths centered at the middle of the lattice. Comparisons of the potential energy, the energy fluctuations, and the correlation function are made between the results of the Monte Carlo simulations and the numerical calculations.

  14. Path integral Monte Carlo on a lattice. II. Bound states.

    PubMed

    O'Callaghan, Mark; Miller, Bruce N

    2016-07-01

    The equilibrium properties of a single quantum particle (qp) interacting with a classical gas, over a wide range of temperatures that explore the system's behavior in the classical as well as the quantum regime, are investigated. Both the qp and the atoms are restricted to sites on a one-dimensional lattice. A path integral formalism developed within the context of the canonical ensemble is utilized, in which the qp is represented by a closed, variable-step random walk on the lattice. Monte Carlo methods are employed to determine the system's properties. To test the usefulness of the path integral formalism, the Metropolis algorithm is employed to determine the equilibrium properties of the qp in the context of a square well potential, forcing the qp to occupy bound states. We consider a one-dimensional square well potential where every site on the lattice is occupied by an atom with an on-site potential, except for a contiguous set of sites of various lengths centered at the middle of the lattice. Comparisons of the potential energy, the energy fluctuations, and the correlation function are made between the results of the Monte Carlo simulations and the numerical calculations. PMID:27575090

  15. Monte Carlo Simulations of Background Spectra in Integral Imager Detectors

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.; Dietz, K. L.; Ramsey, B. D.; Weisskopf, M. C.

    1998-01-01

    Predictions of the expected gamma-ray backgrounds in the ISGRI (CdTe) and PICsIT (CsI) detectors on INTEGRAL due to cosmic-ray interactions and the diffuse gamma-ray background have been made using a coupled set of Monte Carlo radiation transport codes (HETC, FLUKA, EGS4, and MORSE) and a detailed, 3-D mass model of the spacecraft and detector assemblies. The simulations include both the prompt background component from induced hadronic and electromagnetic cascades and the delayed component due to emissions from induced radioactivity. Background spectra have been obtained with and without the use of active (BGO) shielding and charged particle rejection to evaluate the effectiveness of anticoincidence counting on background rejection.

  16. A Preliminary Study of In-House Monte Carlo Simulations: An Integrated Monte Carlo Verification System

    SciTech Connect

    Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hidek; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki

    2009-10-01

    Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. Methods and Materials: The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.

  17. Streamlining resummed QCD calculations using Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Farhi, David; Feige, Ilya; Freytsis, Marat; Schwartz, Matthew D.

    2016-08-01

    Some of the most arduous and error-prone aspects of precision resummed calculations are related to the partonic hard process, having nothing to do with the resummation. In particular, interfacing to parton-distribution functions, combining various channels, and performing the phase space integration can be limiting factors in completing calculations. Conveniently, however, most of these tasks are already automated in many Monte Carlo programs, such as MadGraph [1], Alpgen [2] or Sherpa [3]. In this paper, we show how such programs can be used to produce distributions of partonic kinematics with associated color structures representing the hard factor in a resummed distribution. These distributions can then be used to weight convolutions of jet, soft and beam functions producing a complete resummed calculation. In fact, only around 1000 unweighted events are necessary to produce precise distributions. A number of examples and checks are provided, including e+e− two- and four-jet event shapes, n-jettiness and jet-mass related observables at hadron colliders at next-to-leading-log (NLL) matched to leading order (LO). Attached code can be used to modify MadGraph to export the relevant LO hard functions and color structures for arbitrary processes.

  18. Initial evaluation of Centroidal Voronoi Tessellation method for statistical sampling and function integration.

    SciTech Connect

    Romero, Vicente Jose; Peterson, Janet S.; Burkhardt, John V.; Gunzburger, Max Donald

    2003-09-01

    A recently developed Centroidal Voronoi Tessellation (CVT) unstructured sampling method is investigated here to assess its suitability for use in statistical sampling and function integration. CVT efficiently generates a highly uniform distribution of sample points over arbitrarily shaped M-dimensional parameter spaces. It has recently been shown on several 2-D test problems to provide superior point distributions for generating locally conforming response surfaces. In this paper, its performance as a statistical sampling and function integration method is compared to that of Latin Hypercube Sampling (LHS) and Simple Random Sampling (SRS) Monte Carlo methods, and the Halton and Hammersley quasi-Monte Carlo sequence methods. Specifically, sampling efficiencies are compared for function integration and for resolving various statistics of response in a 2-D test problem. It is found that, on balance, CVT performs best of all these sampling methods on our test problems.
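
    CVT point sets of the kind compared here can be generated by probabilistic Lloyd iteration: repeatedly assign a dense random cloud to the nearest generator, then move each generator to the centroid of its cell. A minimal sketch with illustrative dimensions and iteration counts (not the authors' implementation):

```python
# Probabilistic Lloyd iteration toward a Centroidal Voronoi Tessellation in 2-D.
import numpy as np

rng = np.random.default_rng(5)
m, dim, n_cloud, iters = 16, 2, 20000, 50
gen = rng.random((m, dim))                    # initial generators in [0,1]^2

for _ in range(iters):
    cloud = rng.random((n_cloud, dim))
    # nearest generator for every cloud point
    d2 = ((cloud[:, None, :] - gen[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)
    for j in range(m):
        pts = cloud[owner == j]
        if len(pts):
            gen[j] = pts.mean(axis=0)         # move to centroid of its Voronoi cell

print(np.round(gen, 3))                       # near-uniform, honeycomb-like layout
```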

  19. Path-integral Monte Carlo study of asymmetric quantum quadrupolar rotors with fourth-order propagators

    NASA Astrophysics Data System (ADS)

    Park, Sungjin; Shin, Hyeondeok; Kwon, Yongkyung

    2012-08-01

    The recently proposed fourth-order propagator based on the multi-product expansion has been applied to path-integral Monte Carlo calculations for asymmetric quantum quadrupolar rotors fixed at face-centered cubic lattice sites. The rotors are observed to undergo an orientational order-disorder phase transition at a low temperature when the electric quadrupole-quadrupole interaction is strong enough. At intermediate interaction strength, a further decrease of temperature after the first transition to the ordered phase results in a reentrant transition back to the disordered phase. The theoretical phase diagram of these asymmetric rotors determined by using fourth-order path-integral Monte Carlo calculations is found to be in good quantitative agreement with the experimental one for solid hydrogen deuteride. This leads us to conclude that the fourth-order propagator can be effectively implemented for accurate path-integral Monte Carlo calculations of quantum many-body systems with rotational degrees of freedom.

  1. Solution of the Bartels-Kwiecinski-Praszalowicz equation via Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Chachamis, Grigorios; Sabio Vera, Agustín

    2016-08-01

    We present a method of solution of the Bartels-Kwiecinski-Praszalowicz (BKP) equation based on the numerical integration of iterated integrals in transverse momentum and rapidity space. As an application, our procedure, which makes use of Monte Carlo integration techniques, is applied to obtain the gluon Green function in the Odderon case at leading order. The same approach can be used for more complicated scenarios.

  2. First- and second-order error estimates in Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Bakx, R.; Kleiss, R. H. P.; Versteegen, F.

    2016-11-01

    In Monte Carlo integration an accurate and reliable determination of the numerical integration error is essential. We point out the need for an independent estimate of the error on this error, for which we present an unbiased estimator. In contrast to the usual (first-order) error estimator, this second-order estimator is not guaranteed to be positive in an actual Monte Carlo computation. We propose an alternative and indicate how it can be computed in linear time without risk of large rounding errors. In addition, we comment on the relatively slow convergence of the second-order error estimate.
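
    The flavor of such an "error on the error" can be sketched with the standard fourth-moment expression for the variance of the sample variance. Note that this textbook formula is a stand-in for, not a reproduction of, the specific unbiased estimator constructed in the paper:

```python
# First-order error estimate and a simple second-order "error on the error"
# for a Monte Carlo integral of e^x on [0,1].
import numpy as np

rng = np.random.default_rng(6)
n = 100000
w = np.exp(rng.random(n))                 # MC samples for the integral (exact: e - 1)

mean = w.mean()
m2 = ((w - mean) ** 2).mean()             # second central moment
m4 = ((w - mean) ** 4).mean()             # fourth central moment

err1 = np.sqrt(m2 / n)                    # first-order error on the mean
# variance of the sample variance, to leading order in 1/n:
var_of_var = (m4 - m2 ** 2 * (n - 3) / (n - 1)) / n
# propagate from the variance to the error bar err1 = sqrt(m2/n):
err_on_err = np.sqrt(var_of_var) / (2 * np.sqrt(n * m2))

print(f"I ~ {mean:.5f} +/- {err1:.5f} (exact e-1 = {np.e - 1:.5f})")
print(f"estimated uncertainty on the error bar itself: {err_on_err:.2e}")
```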

  3. The integration of improved Monte Carlo Compton scattering algorithms into the Integrated TIGER Series.

    SciTech Connect

    Quirk, Thomas J., IV

    2004-08-01

    The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening that the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.

  4. Testing and tuning symplectic integrators for the hybrid Monte Carlo algorithm in lattice QCD

    SciTech Connect

    Takaishi, Tetsuya; Forcrand, Philippe de

    2006-03-15

    We examine a new second-order integrator recently found by Omelyan et al. The integration error of the new integrator, measured by the root mean square of the energy difference, ⟨ΔH²⟩^{1/2}, is about 10 times smaller than that of the standard second-order leapfrog (2LF) integrator. As a result, the step size of the new integrator can be made about three times larger. Taking into account a factor-2 increase in cost, the new integrator is about 50% more efficient than the 2LF integrator. Integrating over positions first, then momenta, is slightly more advantageous than the reverse. Further parameter tuning is possible. We find that the optimal parameter for the new integrator is slightly different from the value obtained by Omelyan et al., and depends on the simulation parameters. This integrator could also be advantageous for the Trotter-Suzuki decomposition in quantum Monte Carlo.
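
    The comparison can be mimicked on a toy Hamiltonian. The sketch below implements the standard leapfrog and a second-order Omelyan-style QPQPQ splitting with λ ≈ 0.1932, measuring ⟨ΔH²⟩^{1/2} along a trajectory of a harmonic oscillator (an illustrative model, not a lattice QCD molecular dynamics force):

```python
# Leapfrog vs. Omelyan-style second-order splitting on H = p^2/2 + x^2/2,
# comparing the RMS energy violation <dH^2>^(1/2) at several step sizes.
import numpy as np

F = lambda x: -x                                  # force for the harmonic oscillator
H = lambda x, p: 0.5 * (p * p + x * x)

def leapfrog(x, p, eps):                          # kick-drift-kick leapfrog
    p += 0.5 * eps * F(x)
    x += eps * p
    p += 0.5 * eps * F(x)
    return x, p

def omelyan(x, p, eps, lam=0.1931833275037836):   # position-first QPQPQ splitting
    x += lam * eps * p
    p += 0.5 * eps * F(x)
    x += (1.0 - 2.0 * lam) * eps * p
    p += 0.5 * eps * F(x)
    x += lam * eps * p
    return x, p

def rms_dH(step, eps, nsteps=2000):
    x, p = 1.0, 0.3
    e0, acc = H(x, p), []
    for _ in range(nsteps):
        x, p = step(x, p, eps)
        acc.append((H(x, p) - e0) ** 2)
    return np.sqrt(np.mean(acc))

for eps in (0.05, 0.1, 0.2):
    print(f"eps={eps}: leapfrog {rms_dH(leapfrog, eps):.2e}, "
          f"Omelyan {rms_dH(omelyan, eps):.2e}")
```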

  5. On the ground state calculation of a many-body system using a self-consistent basis and quasi-Monte Carlo: An application to water hexamer

    NASA Astrophysics Data System (ADS)

    Georgescu, Ionuţ; Jitomirskaya, Svetlana; Mandelshtam, Vladimir A.

    2013-11-01

    Given a quantum many-body system, the Self-Consistent Phonons (SCP) method provides an optimal harmonic approximation by minimizing the free energy. In particular, the SCP estimate for the vibrational ground state (zero temperature) appears to be surprisingly accurate. We explore the possibility of going beyond the SCP approximation by considering the system Hamiltonian evaluated in the harmonic eigenbasis of the SCP Hamiltonian. It appears that the SCP ground state is already uncoupled from all singly- and doubly-excited basis functions, so, in order to improve the SCP result, at least triply-excited states must be included; this then reduces the error in the ground state estimate substantially. For a multidimensional system two numerical challenges arise, namely, evaluation of the potential energy matrix elements in the harmonic basis, and handling and diagonalizing the resulting Hamiltonian matrix, whose size grows rapidly with the dimensionality of the system. Using the example of water hexamer we demonstrate that such a calculation is feasible, i.e., constructing and diagonalizing the Hamiltonian matrix in a triply-excited SCP basis, without any additional assumptions or approximations. In particular, our results indicate that the ground state energy differences between different isomers (e.g., cage and prism) of water hexamer are already quite accurate within the SCP approximation.

  6. On the ground state calculation of a many-body system using a self-consistent basis and quasi-Monte Carlo: An application to water hexamer

    SciTech Connect

    Georgescu, Ionuţ; Mandelshtam, Vladimir A.; Jitomirskaya, Svetlana

    2013-11-28

    Given a quantum many-body system, the Self-Consistent Phonons (SCP) method provides an optimal harmonic approximation by minimizing the free energy. In particular, the SCP estimate for the vibrational ground state (zero temperature) appears to be surprisingly accurate. We explore the possibility of going beyond the SCP approximation by considering the system Hamiltonian evaluated in the harmonic eigenbasis of the SCP Hamiltonian. It appears that the SCP ground state is already uncoupled from all singly- and doubly-excited basis functions, so, in order to improve the SCP result, at least triply-excited states must be included; this then reduces the error in the ground state estimate substantially. For a multidimensional system two numerical challenges arise, namely, evaluation of the potential energy matrix elements in the harmonic basis, and handling and diagonalizing the resulting Hamiltonian matrix, whose size grows rapidly with the dimensionality of the system. Using the example of water hexamer we demonstrate that such a calculation is feasible, i.e., constructing and diagonalizing the Hamiltonian matrix in a triply-excited SCP basis, without any additional assumptions or approximations. In particular, our results indicate that the ground state energy differences between different isomers (e.g., cage and prism) of water hexamer are already quite accurate within the SCP approximation.

  7. First Results From GLAST-LAT Integrated Towers Cosmic Ray Data Taking And Monte Carlo Comparison

    SciTech Connect

    Brigida, M.; Caliandro, A.; Favuzzi, C.; Fusco, P.; Gargano, F.; Giordano, F.; Giglietto, N.; Loparco, F.; Marangelli, B.; Mazziotta, M.N.; Mirizzi, N.; Raino, S.; Spinelli, P.

    2007-02-15

    The GLAST Large Area Telescope (LAT) is a gamma-ray telescope instrumented with silicon-strip detector planes and sheets of converter, followed by a calorimeter (CAL) and surrounded by an anticoincidence system (ACD). The instrument is sensitive to gamma rays in the energy range between 20 MeV and 300 GeV. At present, the first towers have been integrated and pre-launch data taking with cosmic-ray muons is being performed. The results from the data analysis carried out during LAT integration will be discussed, and a comparison with the predictions from the Monte Carlo simulation will be shown.

  8. Quantum Mechanical Single Molecule Partition Function from Path Integral Monte Carlo Simulations

    SciTech Connect

    Chempath, Shaji; Bell, Alexis T.; Predescu, Cristian

    2006-10-01

    An algorithm for calculating the partition function of a molecule with the path integral Monte Carlo method is presented. Staged thermodynamic perturbation with respect to a reference harmonic potential is utilized to evaluate the ratio of partition functions. Parallel tempering and a new Monte Carlo estimator for the ratio of partition functions are implemented here to achieve well converged simulations that give an accuracy of 0.04 kcal/mol in the reported free energies. The method is applied to various test systems, including a catalytic system composed of 18 atoms. Absolute free energies calculated by this method lead to corrections as large as 2.6 kcal/mol at 300 K for some of the examples presented.

  9. Path-integral Monte Carlo method for Rényi entanglement entropies.

    PubMed

    Herdman, C M; Inglis, Stephen; Roy, P-N; Melko, R G; Del Maestro, A

    2014-07-01

    We introduce a quantum Monte Carlo algorithm to measure the Rényi entanglement entropies in systems of interacting bosons in the continuum. This approach is based on a path-integral ground state method that can be applied to interacting itinerant bosons in any spatial dimension with direct relevance to experimental systems of quantum fluids. We demonstrate how it may be used to compute spatial mode entanglement, particle partitioned entanglement, and the entanglement of particles, providing insights into quantum correlations generated by fluctuations, indistinguishability, and interactions. We present proof-of-principle calculations and benchmark against an exactly soluble model of interacting bosons in one spatial dimension. As this algorithm retains the fundamental polynomial scaling of quantum Monte Carlo when applied to sign-problem-free models, future applications should allow for the study of entanglement entropy in large-scale many-body systems of interacting bosons.

  10. Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali

    2007-01-01

    We discuss here the relative merits of these numbers as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive in using a random sequence is to solve real-world problems, it is more desirable to compare the quality of the sequences based on their performance for these problems in terms of the quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz., the Matlab rand function, and the quasi-random generator halton, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than the golden ratio where the accuracy of the integration is concerned.
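
    The "consecutive blocks of digits" construction is straightforward to sketch. The snippet below assumes mpmath is available to generate digits of pi; the block length and test integrand are illustrative:

```python
# Use consecutive 8-digit blocks of pi as uniforms in [0, 1) and feed them to a
# simple Monte Carlo integration.
import numpy as np
from mpmath import mp

block, n = 8, 2000                       # 8 digits -> one uniform in [0, 1)
mp.dps = block * n + 10                  # enough decimal digits of pi
digits = mp.nstr(mp.pi, block * n + 5).replace("3.", "")[: block * n]

u = np.array([int(digits[i * block:(i + 1) * block]) / 10**block
              for i in range(n)])        # consecutive blocks -> uniforms

estimate = np.sin(u).mean()              # MC estimate of the integral of sin on [0, 1]
print(f"pi-digit MC: {estimate:.5f}   exact: {1 - np.cos(1.0):.5f}")
```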

  11. Imaginary time path integral Monte Carlo route to rate coefficients for nonadiabatic barrier crossing

    NASA Technical Reports Server (NTRS)

    Wolynes, Peter G.

    1987-01-01

    Nonadiabatic transitions are central to many areas of chemical and condensed matter physics, ranging from biological electron transfer to the optical properties of one-dimensional conductors. Here, a path integral Monte Carlo method is used to simulate such transitions, based on the observation that nonadiabatic rate coefficients are often dominated by saddle point trajectories that correspond to an imaginary time. Simple analytic theories can be used to continue these imaginary time correlation functions to determine rate coefficients. The advantages and drawbacks of this approach are discussed.

  12. Data assimilation using a GPU accelerated path integral Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Quinn, John C.; Abarbanel, Henry D. I.

    2011-09-01

    The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov Chain Monte Carlo method designed to run in parallel on a graphics processing unit (GPU). We demonstrate the application of the method to an example with a transmembrane voltage time series of a simulated neuron as an input, and using a Hodgkin-Huxley neuron model. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300, compared to an equivalent serial computation on a CPU, with performance increasing as the length of the observation time used for data assimilation increases.

  13. CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC

    NASA Astrophysics Data System (ADS)

    Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin

    2014-06-01

    Monte Carlo (MC) methods have distinct advantages in simulating complicated nuclear systems and are envisioned as a routine method for nuclear design and analysis in the future. High-fidelity simulation with the MC method, coupled with multi-physics simulation, has a significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges with current MC methods and codes prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for the integrated simulation of nuclear systems developed by the FDS Team, China, making use of hybrid MC-deterministic methods and advanced computer technologies. The design aims, architecture and main methodology of SuperMC are presented in this paper. SuperMC2.1, the latest version for neutron, photon and coupled neutron-photon transport calculation, has been developed and validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still evolving toward a general and routine tool for nuclear systems.

  14. Path-integral Monte Carlo simulation of the second layer of 4He adsorbed on graphite

    NASA Astrophysics Data System (ADS)

    Pierce, Marlon; Manousakis, Efstratios

    1999-02-01

    We have developed a path-integral Monte Carlo method for simulating helium films and apply it to the second layer of helium adsorbed on graphite. We use helium-helium and helium-graphite interactions that are found from potentials which realistically describe the interatomic interactions. The Monte Carlo sampling is over both particle positions and permutations of particle labels. From the particle configurations and static structure factor calculations, we find that this layer possesses, in order of increasing density, a superfluid liquid phase, a √7×√7 commensurate solid phase that is registered with respect to the first layer, and an incommensurate solid phase. By applying the Maxwell construction to the dependence of the low-temperature total energy on the coverage, we are able to identify coexistence regions between the phases. From these, we deduce an effectively zero-temperature phase diagram. Our phase boundaries are in agreement with heat capacity and torsional oscillator measurements, and demonstrate that the experimentally observed disruption of the superfluid phase is caused by the growth of the commensurate phase. We further observe that the superfluid phase has a transition temperature consistent with the two-dimensional value. Promotion to the third layer occurs for densities above 0.212 atom/Å², in good agreement with experiment. Finally, we calculate the specific heat for each phase and obtain peaks at temperatures in general agreement with experiment.

  15. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    SciTech Connect

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
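
    The first technique mentioned, replacing linear searches with binary versions, is the classic speedup for table lookups such as sampling from a cumulative distribution. A minimal sketch, with Python's bisect standing in for a hand-written binary search:

```python
# Replace an O(n) linear scan over a sorted cumulative table with an O(log n)
# binary search; both return the same bin for a given uniform deviate u.
import bisect
import numpy as np

cdf = np.cumsum(np.random.default_rng(7).random(10000))
cdf /= cdf[-1]                               # sorted cumulative table ending at 1.0

def find_linear(u):
    for i, c in enumerate(cdf):              # O(n) scan, as in the unaccelerated code
        if u <= c:
            return i
    return len(cdf) - 1

def find_binary(u):
    return bisect.bisect_left(cdf, u)        # O(log n) replacement

u = 0.42
assert find_linear(u) == find_binary(u)      # identical results, faster lookup
print("sampled bin:", find_binary(u))
```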

  16. Excitonic effects in two-dimensional semiconductors: Path integral Monte Carlo approach

    SciTech Connect

    Velizhanin, Kirill A.; Saxena, Avadh

    2015-11-01

    One of the most striking features of novel two-dimensional semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is the strong Coulomb interaction between charge carriers, resulting in large excitonic effects. In particular, this leads to the formation of multicarrier bound states upon photoexcitation (e.g., excitons, trions, and biexcitons), which can remain stable at near-room temperatures and contribute significantly to the optical properties of such materials. In this work we have used the path integral Monte Carlo methodology to numerically study the properties of multicarrier bound states in two-dimensional semiconductors. Specifically, we have accurately investigated and tabulated the dependence of the single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The results of this work are potentially useful in the analysis of experimental data and benchmarking of theoretical and computational models.

  17. Integrated layout based Monte-Carlo simulation for design arc optimization

    NASA Astrophysics Data System (ADS)

    Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James

    2016-03-01

    Design rules are created by considering a wafer fail mechanism with the relevant design levels under various design cases, with values set to cover the worst-case scenario. Because of this simplification and generalization, design rules hinder, rather than help, dense device scaling. As an example, SRAM designs always need extensive ground rule waivers. Furthermore, dense design often involves a "design arc", a collection of design rules the sum of which equals the critical pitch defined by the technology. In a design arc, a single rule change can lead to a chain reaction of other rule violations. In this talk we present a methodology using Layout Based Monte-Carlo Simulation (LBMCS) with multiple integrated ground rule checks. We apply this methodology to an SRAM word line contact, and the result is a layout that has balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div., Semiconductor Research and Development Center, Hopewell Junction, NY 12533.

  18. Torsional path integral Monte Carlo method for the quantum simulation of large molecules

    NASA Astrophysics Data System (ADS)

    Miller, Thomas F.; Clary, David C.

    2002-05-01

    A molecular application is introduced for calculating quantum statistical mechanical expectation values of large molecules at nonzero temperatures. The Torsional Path Integral Monte Carlo (TPIMC) technique applies an uncoupled winding number formalism to the torsional degrees of freedom in molecular systems. The internal energy of the molecules ethane, n-butane, n-octane, and enkephalin are calculated at standard temperature using the TPIMC technique and compared to the expectation values obtained using the harmonic oscillator approximation and a variational technique. All studied molecules exhibited significant quantum mechanical contributions to their internal energy expectation values according to the TPIMC technique. The harmonic oscillator approximation approach to calculating the internal energy performs well for the molecules presented in this study but is limited by its neglect of both anharmonicity effects and the potential coupling of intramolecular torsions.

  19. Excitonic effects in two-dimensional semiconductors: Path integral Monte Carlo approach

    DOE PAGESBeta

    Velizhanin, Kirill A.; Saxena, Avadh

    2015-11-01

    One of the most striking features of novel two-dimensional semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is the strong Coulomb interaction between charge carriers, resulting in large excitonic effects. In particular, this leads to the formation of multicarrier bound states upon photoexcitation (e.g., excitons, trions, and biexcitons), which can remain stable at near-room temperatures and contribute significantly to the optical properties of such materials. In this work we have used the path integral Monte Carlo methodology to numerically study the properties of multicarrier bound states in two-dimensional semiconductors. Specifically, we have accurately investigated and tabulated the dependence of the single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The results of this work are potentially useful in the analysis of experimental data and benchmarking of theoretical and computational models.

  20. Worm Algorithm Path Integral Monte Carlo Applied to the 3He-4He II Sandwich System

    NASA Astrophysics Data System (ADS)

    Al-Oqali, Amer; Sakhel, Asaad R.; Ghassib, Humam B.; Sakhel, Roger R.

    2012-12-01

    We present a numerical investigation of the thermal and structural properties of the 3He-4He sandwich system adsorbed on a graphite substrate using the worm algorithm path integral Monte Carlo (WAPIMC) method [M. Boninsegni, N. Prokof'ev and B. Svistunov, Phys. Rev. E 74, 036701 (2006)]. For this purpose, we have modified a previously written WAPIMC code originally adapted for 4He on graphite, by including the second 3He component. To describe the fermions, a temperature-dependent statistical potential has been used. This has proven very effective. The WAPIMC calculations have been conducted in the millikelvin temperature regime. However, because of the heavy computations involved, only 30, 40 and 50 mK have been considered for the time being. The pair correlations, Matsubara Green's function, structure factor, and density profiles have been explored at these temperatures.

  1. Excitonic effects in two-dimensional semiconductors: Path integral Monte Carlo approach

    SciTech Connect

    Velizhanin, Kirill A.; Saxena, Avadh

    2015-11-11

    One of the most striking features of novel two-dimensional semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is the strong Coulomb interaction between charge carriers, resulting in large excitonic effects. In particular, this leads to the formation of multicarrier bound states upon photoexcitation (e.g., excitons, trions, and biexcitons), which can remain stable at near-room temperatures and contribute significantly to the optical properties of such materials. In this work we have used the path integral Monte Carlo methodology to numerically study the properties of multicarrier bound states in two-dimensional semiconductors. Specifically, we have accurately investigated and tabulated the dependence of the single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The results of this work are potentially useful in the analysis of experimental data and benchmarking of theoretical and computational models.

  2. Efficient numerical evaluation of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran

    2016-03-01

    Feynman loop integrals are a key ingredient in the calculation of higher-order radiative effects, and are responsible for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both the Euclidean and physical kinematic regions, in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated accurately in less than half a minute, which makes the direct numerical approach viable for precise investigation of higher-order effects in multi-loop processes, e.g. the next-to-leading-order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), and the Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098).

  3. Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV

    SciTech Connect

    Miller, S.G.

    1988-08-01

    Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend its ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.

  4. Fractional volume integration in two-dimensional NMR spectra: CAKE, a Monte Carlo approach.

    PubMed

    Romano, Rocco; Paris, Debora; Acernese, Fausto; Barone, Fabrizio; Motta, Andrea

    2008-06-01

    Quantitative information from multi-dimensional NMR experiments can be obtained by peak volume integration. The standard procedure (selection of a region around the chosen peak and addition of all values) is often biased by poor peak definition because of peak overlap. Here we describe a simple method, called CAKE, for volume integration of (partially) overlapping peaks. Assuming the axial symmetry of two-dimensional NMR peaks, as occurs in NOESY and TOCSY when a Lorentz-Gauss transformation of the signals is carried out, CAKE estimates the peak volume by multiplying a volume fraction by a factor R, which represents the proportionality ratio between the total and the fractional volume; the fractional volume is identified as a slice in an exposed region of the overlapping peaks. The volume fraction is obtained via the Monte Carlo hit-or-miss technique, which proved to be the most efficient because of the small region and the limited number of points within the selected area. Tests on simulated and experimental peaks, with different degrees of overlap and signal-to-noise ratios, show that CAKE results in improved volume estimates. A main advantage of CAKE is that the volume fraction can be flexibly chosen so as to minimize the effect of overlap, frequently observed in two-dimensional spectra. PMID:18396078
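
    The core CAKE step can be sketched on a synthetic axially symmetric peak: estimate the volume of an exposed slice by hit-or-miss sampling, then scale by the symmetry factor R. The Gaussian peak shape, wedge-shaped slice, and R = 8 below are illustrative choices, not the paper's data:

```python
# Hit-or-miss estimate of a slice volume of an axially symmetric peak, scaled
# by the symmetry factor R relating the slice to the full peak volume.
import numpy as np

rng = np.random.default_rng(8)
peak = lambda x, y: np.exp(-(x * x + y * y) / 2.0)   # unit-height Gaussian peak

# Exposed region: the angular wedge 0 <= atan2(y, x) < pi/4 within radius 4.
# By axial symmetry this wedge holds 1/8 of the total volume, so R = 8.
R, rad, zmax, n = 8.0, 4.0, 1.0, 200000
x = rng.uniform(-rad, rad, n)
y = rng.uniform(-rad, rad, n)
z = rng.uniform(0.0, zmax, n)

ang = np.arctan2(y, x)
in_wedge = (x * x + y * y <= rad * rad) & (ang >= 0) & (ang < np.pi / 4)
hits = in_wedge & (z <= peak(x, y))                  # points under the peak surface

box_vol = (2 * rad) ** 2 * zmax
slice_vol = hits.mean() * box_vol                    # hit-or-miss estimate of the slice
print(f"CAKE-style volume: {R * slice_vol:.3f}  (exact 2*pi = {2 * np.pi:.3f})")
```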

  5. Kinetic isotope effect in malonaldehyde determined from path integral Monte Carlo simulations.

    PubMed

    Huang, Jing; Buchowiecki, Marcin; Nagy, Tibor; Vaníček, Jiří; Meuwly, Markus

    2014-01-01

    The primary H/D kinetic isotope effect (KIE) on the intramolecular proton transfer in malonaldehyde is determined from quantum instanton path integral Monte Carlo simulations on a full-dimensional, validated potential energy surface for temperatures between 250 and 1500 K. Our calculations, based on thermodynamic integration with respect to the mass of the transferring particle, are significantly accelerated by the direct evaluation of the kinetic isotope effect instead of computing it as a ratio of two rate constants. At room temperature, the KIE from the present simulations is 5.2 ± 0.4. The KIE is found to vary considerably as a function of temperature, and the low-temperature behaviour is dominated by the fact that the free energy derivative in the reactant state increases more rapidly than in the transition state. Detailed analysis of the various contributions to the quantum rate constant, together with estimates for rates from conventional transition state theory and from periodic orbit theory, suggests that the KIE in malonaldehyde is dominated by zero-point energy effects and that tunneling plays a minor role at room temperature.

  6. Fractional volume integration in two-dimensional NMR spectra: CAKE, a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Romano, Rocco; Paris, Debora; Acernese, Fausto; Barone, Fabrizio; Motta, Andrea

    2008-06-01

    Quantitative information from multi-dimensional NMR experiments can be obtained by peak volume integration. The standard procedure (selection of a region around the chosen peak and addition of all values) is often biased by poor peak definition because of peak overlap. Here we describe a simple method, called CAKE, for volume integration of (partially) overlapping peaks. Assuming axial symmetry of two-dimensional NMR peaks, as occurs in NOESY and TOCSY when a Lorentz-Gauss transformation of the signals is carried out, CAKE estimates the peak volume by multiplying a volume fraction by a factor R, the proportionality ratio between the total volume and the fractional volume, the latter identified as a slice in an exposed region of the overlapping peaks. The volume fraction is obtained via the Monte Carlo hit-or-miss technique, which proved to be the most efficient choice because of the small region and the limited number of points within the selected area. Tests on simulated and experimental peaks, with different degrees of overlap and signal-to-noise ratios, show that CAKE results in improved volume estimates. A main advantage of CAKE is that the volume fraction can be flexibly chosen so as to minimize the effect of overlap, frequently observed in two-dimensional spectra.

  7. Fractional volume integration in two-dimensional NMR spectra: CAKE, a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Romano, Rocco; Acernese, Fausto; Paris, Debora; Motta, Andrea; Barone, Fabrizio

    2009-03-01

    Quantitative information from multidimensional NMR experiments can be obtained by peak volume integration. The standard procedure (selection of a region around the chosen peak and addition of all values) is often biased by poor peak definition because of peak overlap. Here we describe a simple method, called CAKE, for volume integration of (partially) overlapping peaks. Assuming axial symmetry of two-dimensional NMR peaks, as occurs in NOESY and TOCSY when a Lorentz-Gauss transformation of the signals is carried out, CAKE estimates the peak volume by multiplying a volume fraction by a factor R, the proportionality ratio between the total volume and the fractional volume, the latter identified as a slice in an exposed region of the overlapping peaks. The volume fraction is obtained via the Monte Carlo hit-or-miss technique, which proved to be the most efficient choice because of the small region and the limited number of points within the selected area. Tests on simulated and experimental peaks, with different degrees of overlap and signal-to-noise ratios, show that CAKE results in improved volume estimates. A main advantage of CAKE is that the volume fraction can be flexibly chosen so as to minimize the effect of overlap, frequently observed in two-dimensional spectra.

  8. Deposition of Colloidal Particles on Homogeneous Surfaces: Integral-Equation Theory and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Danwanichakul, Panu

    2009-01-01

    Deposition of large particles such as colloidal or bio-particles on a solid surface is usually modeled by random sequential adsorption (RSA). The model was previously described by an integral-equation theory whose validity was verified by Monte Carlo simulation. This work generalizes the model to include the concentration effect of added particles on the surface. The fraction of particles inserted was varied through reduced number densities of 0.05, 0.1, and 0.2. It was found that the modified integral-equation theory yielded results in good accordance with the simulation. Treating the colloidal particles as hard spheres, the radial distribution function develops a higher peak as the fraction of particles added is increased, due to cooperative and entropic effects. This work could bridge the gap between equilibrium adsorption, where all particles may be considered mobile, and RSA, where no particle on the surface moves. In addition, the effect of attractive interactions was also incorporated, and it was found that increasing the number of particles added at one time yields lower values of the radial distribution function.

  9. Equation of state of an interacting Bose gas at finite temperature: A path-integral Monte Carlo study

    SciTech Connect

    Pilati, S.; Giorgini, S.; Sakkos, K.; Boronat, J.; Casulleras, J.

    2006-10-15

    By using exact path-integral Monte Carlo methods we calculate the equation of state of an interacting Bose gas as a function of temperature both below and above the superfluid transition. The universal character of the equation of state for dilute systems and low temperatures is investigated by modeling the interatomic interactions using different repulsive potentials corresponding to the same s-wave scattering length. The results obtained for the energy and the pressure are compared to the virial expansion for temperatures larger than the critical temperature. At very low temperatures we find agreement with the ground-state energy calculated using the diffusion Monte Carlo method.

  10. Monte Carlo solution of the volume-integral equation of electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Peltoniemi, J.; Muinonen, K.

    2014-07-01

    Electromagnetic scattering is often the main physical process to be understood when interpreting observations of asteroids, comets, and meteors. Modeling the scattering still faces many problems, and one needs to assess several different cases: multiple scattering and shadowing by the rough surface, multiple scattering inside a surface element, and single scattering by a small object. Our specific goal is to extend the electromagnetic techniques to larger and more complicated objects, and to derive approximations taking into account the most important effects of waves. Here we experiment with Monte Carlo techniques: can they provide something new for solving the scattering problems? The electromagnetic wave equation in the presence of a scatterer of volume V and refractive index m, with an incident wave E_0, including boundary conditions and the scattering condition at infinity, can be presented in the form of the integral equation E(r)[1 + χ(r)Q(ρ)] − ∫_{V−V_ρ} dr′ G(r−r′) χ(r′) E(r′) = E_0, where χ(r) = m(r)² − 1, Q(ρ) = −1/3 + O(ρ²) + O′(m²ρ²), with O and O′ second- and higher-order corrections for the finite-size volume V_ρ of radius ρ around the singularity, and G is the dyadic Green's function G(R) = [exp(imkR)/(4πR)] [I(1 + im/R − 1/R²) − R̂R̂(1 + 3im/R − 3/R²)]. In general, this is solved by expanding the internal field in terms of some simple basis functions, e.g., plane or spherical waves or a cubic grid, approximating the integrals in a clever way, and determining the goodness of the solution somehow, e.g., by moments or least squares. Whatever the choice, the solution usually converges nicely towards a correct enough solution when the scatterer is small and simple, and diverges when the scatterer becomes too complicated. With certain methods one can reach larger scatterers faster, but the memory and CPU needs can be huge. Until today, all successful solutions are based on more or less

  11. Integrated TIGER Series of Coupled Electron/Photon Monte Carlo Transport Codes System.

    SciTech Connect

    VALDEZ, GREG D.

    2012-11-30

    Version: 00 Distribution is restricted to US Government Agencies and Their Contractors Only. The Integrated Tiger Series (ITS) is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. The goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 95. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  13. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    SciTech Connect

    Snyder, Abigail C.; Jiao, Yu

    2010-10-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical, and scientists are unable to efficiently analyze all of the data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise in modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
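
    The strategy of composing one-dimensional solvers into a four-dimensional integral amounts to nested quadrature. The sketch below nests a 16-point Gauss-Legendre rule (standing in for the GSL solvers, which is an assumption) and checks it on a separable toy integrand with a known answer.

      import numpy as np

      # Nesting a 1-D Gauss-Legendre rule into a 4-D integral over [0,1]^4.
      # The 16-point rule stands in for the GSL 1-D solvers (an assumption).
      nodes, weights = np.polynomial.legendre.leggauss(16)
      t = 0.5 * (nodes + 1.0)          # map nodes from [-1,1] to [0,1]
      w = 0.5 * weights

      def integrate_4d(f):
          total = 0.0
          for i in range(16):
              for j in range(16):
                  for k in range(16):
                      for l in range(16):
                          total += w[i] * w[j] * w[k] * w[l] * f(t[i], t[j], t[k], t[l])
          return total

      f = lambda x, y, z, u: np.exp(-(x + y + z + u))  # separable test integrand
      print(integrate_4d(f))           # exact: (1 - exp(-1))**4 ~ 0.1597

    For a smooth integrand this nesting converges rapidly, but its cost scales as 16^d with dimension d, which is one reason quasi-Monte Carlo alternatives become attractive as the dimension grows.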

  14. Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419

    SciTech Connect

    Hulett, David T.; Nosbisch, Michael R.

    2012-07-01

    This discussion of recommended practice (RP) 57R-09 of AACE International defines the integrated analysis of schedule and cost risk used to estimate the appropriate level of cost and schedule contingency reserve on projects. The main contribution of this RP is to include the impact of schedule risk on cost risk and hence on the need for cost contingency reserves. Additional benefits include the prioritization of the risks to cost, some of which are risks to schedule, so that risk mitigation may be conducted in a cost-effective way; scatter diagrams of time-cost pairs for developing joint targets of time and cost; and probabilistic cash flow, which shows cash flow at different levels of certainty. Integrating cost and schedule risk into one analysis, based on the project schedule loaded with costed resources from the cost estimate, both (1) provides more accurate cost estimates than if the schedule risk were ignored or incorporated only partially, and (2) illustrates the importance of schedule risk to cost risk when the durations of activities using labor-type (time-dependent) resources are risky. Many activities, such as detailed engineering, construction or software development, are mainly conducted by people who need to be paid even if their work takes longer than scheduled. Level-of-effort resources, such as the project management team, are extreme examples of time-dependent resources, since if the project duration exceeds its planned duration the cost of these resources will increase over their budgeted amount. The integrated cost-schedule risk analysis is based on: (1) a high-quality CPM schedule with logic tight enough that it will provide the correct dates and critical paths during simulation automatically, without manual intervention; (2) a contingency-free estimate of project costs that is loaded onto the activities of the schedule; and (3) resolution of inconsistencies between the cost estimate and the schedule that often creep into those documents as project execution proceeds.

  15. Selecting an Appropriate Multiple Comparison Technique: An Integration of Monte Carlo Studies.

    ERIC Educational Resources Information Center

    Myette, Beverly M.; White, Karl R.

    Twenty Monte Carlo studies on multiple comparison (MC) techniques were conducted to examine which MC technique was the "method of choice." The results from these studies had several apparent contradictions when different techniques were investigated under varying sample size and variance conditions. Box's coefficient of variance variation and bias…

  17. Effects of self-seeding and crystal post-selection on the quality of Monte Carlo-integrated SFX data.

    PubMed

    Barends, Thomas; White, Thomas A; Barty, Anton; Foucar, Lutz; Messerschmidt, Marc; Alonso-Mori, Roberto; Botha, Sabine; Chapman, Henry; Doak, R Bruce; Galli, Lorenzo; Gati, Cornelius; Gutmann, Matthias; Koglin, Jason; Markvardsen, Anders; Nass, Karol; Oberthur, Dominik; Shoeman, Robert L; Schlichting, Ilme; Boutet, Sébastien

    2015-05-01

    Serial femtosecond crystallography (SFX) is an emerging method for data collection at free-electron lasers (FELs) in which single diffraction snapshots are taken from a large number of crystals. The partial intensities collected in this way are then combined in a scheme called Monte Carlo integration, which provides the full diffraction intensities. However, apart from having to perform this merging, the Monte Carlo integration must also average out all variations in crystal quality, crystal size, X-ray beam properties and other factors, necessitating data collection from thousands of crystals. Because the pulses provided by FELs running in the typical self-amplified spontaneous emission (SASE) mode of operation have very irregular, spiky spectra that vary strongly from pulse to pulse, it has been suggested that this is an important source of variation contributing to inaccuracies in the intensities, and that, by using monochromatic pulses produced through a process called self-seeding, fewer images might be needed for Monte Carlo integration to converge, resulting in more accurate data. This paper reports the results of two experiments performed at the Linac Coherent Light Source in which data collected in both SASE and self-seeded mode were compared. Importantly, no improvement attributable to the use of self-seeding was detected. In addition, other possible sources of variation that affect SFX data quality were investigated, such as crystal-to-crystal variations reflected in the unit-cell parameters; however, these factors were found to have no influence on data quality either. Possibly, there is another source of variation as yet undetected that affects SFX data quality much more than any of the factors investigated here. PMID:25931080
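
    Why convergence demands so many crystals can be seen in a toy model of the merging step: each snapshot records the true intensity times a random partiality, and only the average over many crystals recovers a quantity proportional to the full intensity. The numbers below are purely illustrative.

      import numpy as np

      # Toy model of Monte Carlo merging in SFX: each crystal contributes the
      # true intensity times a random partiality; averaging over many crystals
      # converges (here the common factor <partiality> = 0.5 is divided out).
      rng = np.random.default_rng(6)
      I_true = 100.0
      for n_crystals in (10, 100, 10_000):
          partials = I_true * rng.uniform(0.0, 1.0, n_crystals)
          print(n_crystals, "crystals -> merged intensity ~", 2.0 * partials.mean())

    Extra pulse-to-pulse variation, such as the spiky SASE spectra discussed above, adds variance to the per-snapshot factor and slows convergence without biasing the mean, which is why self-seeding was expected to reduce the number of crystals required.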

  18. Optimization strategy integrity for watershed agricultural non-point source pollution control based on Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Gong, Y.; Yu, Y. J.; Zhang, W. Y.

    2016-08-01

    This study establishes a methodological framework for optimizing watershed non-point source pollution control by simulating pollution loads and analyzing the integrity of optimization strategies. First, the sources of watershed agricultural non-point source pollution are divided into four categories: agricultural land, natural land, livestock breeding, and rural residential land. Second, different pollution control measures are chosen at the source, midway, and ending stages. Third, the optimization effect of pollution load control at the three stages is simulated, based on Monte Carlo simulation. The method described above is applied to the Ashi River watershed in Heilongjiang Province, China. Case study results indicate that the three types of control measures combined can be implemented only if the government promotes the optimized plan and gradually improves implementation efficiency. This method for assessing optimization strategy integrity in watershed non-point source pollution control has significant reference value.

  19. Mercedes-Benz water molecules near hydrophobic wall: integral equation theories vs Monte Carlo simulations.

    PubMed

    Urbic, T; Holovko, M F

    2011-10-01

    An associative version of the Henderson-Abraham-Barker theory is applied to study the Mercedes-Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations reproduce the simulation data satisfactorily. At lower temperatures, the soft mean spherical approximation gives good agreement at low and at high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between water and the hydrophobic surface was also demonstrated and studied.

  20. Mercedes–Benz water molecules near hydrophobic wall: Integral equation theories vs Monte Carlo simulations

    PubMed Central

    Urbic, T.; Holovko, M. F.

    2011-01-01

    An associative version of the Henderson-Abraham-Barker theory is applied to study the Mercedes–Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations reproduce the simulation data satisfactorily. At lower temperatures, the soft mean spherical approximation gives good agreement at low and at high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between water and the hydrophobic surface was also demonstrated and studied. PMID:21992334

  1. Mercedes-Benz water molecules near hydrophobic wall: Integral equation theories vs Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Urbic, T.; Holovko, M. F.

    2011-10-01

    An associative version of the Henderson-Abraham-Barker theory is applied to study the Mercedes-Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations reproduce the simulation data satisfactorily. At lower temperatures, the soft mean spherical approximation gives good agreement at low and at high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between water and the hydrophobic surface was also demonstrated and studied.

  3. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code

    SciTech Connect

    O'Brien, M J; Procassini, R J; Joy, K I

    2009-03-09

    Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the simulation is running.

  4. Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Bardenet, Rémi

    2013-07-01

    Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling, and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
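
    Of the algorithms reviewed, importance sampling is the quickest to sketch: an expectation under an unnormalised target is rewritten as a weighted average under a tractable proposal. The target and proposal below are toy choices for illustration.

      import numpy as np

      # Self-normalised importance sampling: estimate expectations under an
      # unnormalised target p(x) ~ exp(-x**4 / 4) using a Gaussian proposal.
      rng = np.random.default_rng(2)
      log_target = lambda x: -x**4 / 4.0
      n = 100_000
      sigma = 1.5
      x = rng.normal(0.0, sigma, n)                     # proposal q = N(0, sigma^2)
      log_q = -0.5 * (x / sigma)**2 - np.log(sigma * np.sqrt(2.0 * np.pi))
      log_w = log_target(x) - log_q
      w = np.exp(log_w - log_w.max())                   # stabilise before normalising
      w /= w.sum()
      print("E[x]   ~", np.sum(w * x))                  # symmetric target: near 0
      print("E[x^2] ~", np.sum(w * x**2))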

  5. Development of Path Integral Monte Carlo Simulations with Localized Nodal Surfaces for Second-Row Elements.

    PubMed

    Militzer, Burkhard; Driver, Kevin P

    2015-10-23

    We extend the applicability range of fermionic path integral Monte Carlo simulations to heavier elements and lower temperatures by introducing various localized nodal surfaces. Hartree-Fock nodes yield the most accurate prediction for pressure and internal energy, which we combine with the results from density functional molecular dynamics simulations to obtain a consistent equation of state for hot, dense silicon under plasma conditions and in the regime of warm dense matter (2.3-18.6 g cm^-3, 5.0×10^5-1.3×10^8 K). The shock Hugoniot curve is derived and the structure of the fluid is characterized with various pair correlation functions. PMID:26551129

  6. Path-Integral Monte Carlo Study on a Droplet of a Dipolar Bose–Einstein Condensate Stabilized by Quantum Fluctuation

    NASA Astrophysics Data System (ADS)

    Saito, Hiroki

    2016-05-01

    Motivated by recent experiments [H. Kadau et al., Nature (London) 530, 194 (2016); I. Ferrier-Barbut et al., arXiv:1601.03318] and a theoretical prediction (F. Wächtler and L. Santos, arXiv:1601.04501), the ground state of a dysprosium Bose-Einstein condensate with strong dipole-dipole interaction is studied by the path-integral Monte Carlo method. It is shown that quantum fluctuations can stabilize the condensate against dipolar collapse.

  7. ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2008-04-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  8. Monte Carlo ray-tracing simulations of luminescent solar concentrators for building integrated photovoltaics

    NASA Astrophysics Data System (ADS)

    Leow, Shin Woei; Corrado, Carley; Osborn, Melissa; Carter, Sue A.

    2013-09-01

    Luminescent solar concentrators (LSCs) have the ability to receive light from a wide range of angles, concentrating the captured light onto small photoactive areas. This enables greater incorporation of LSCs into building designs as windows, skylights and wall claddings, in addition to rooftop installations of current solar panels. Using relatively cheap luminescent dyes and acrylic waveguides to concentrate light onto smaller photovoltaic (PV) cell areas, there is potential for this technology to approach grid price parity. We employ a panel design in which the front-facing PV cells collect both direct and concentrated light, ensuring a gain factor greater than one. This also allows for flexibility in determining the placement and percentage coverage of PV cells during the design process, to balance reabsorption losses against the power output and level of light concentration desired. To aid in design optimization, a Monte Carlo ray-tracing program was developed to study the transport of photons and loss mechanisms in LSC panels. The program imports measured absorption/emission spectra and transmission coefficients as simulation parameters, with interactions of photons in the panel determined by comparing calculated probabilities with random number generators. LSC panels with multiple dyes or layers can also be simulated. Analysis of the results reveals optimal panel dimensions and PV cell layouts for maximum power output for a given dye concentration, absorption/emission spectrum and quantum efficiency.
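
    The event-by-event logic described above reduces to a short loop, sketched below for one photon fate chain (absorption, reemission, escape-cone loss, reabsorption). Every probability is an illustrative placeholder rather than a measured panel parameter.

      import numpy as np

      # Photon-fate loop in the spirit of the ray tracer described above: each
      # event is decided by comparing a probability with a random draw.
      rng = np.random.default_rng(3)
      P_ABSORB = 0.8       # incident photon absorbed by the dye (illustrative)
      QY = 0.95            # luminescence quantum yield of the dye
      P_TRAP = 0.74        # emitted photon trapped by total internal reflection
      P_REABSORB = 0.3     # waveguided photon reabsorbed before reaching a cell

      collected, n_photons = 0, 100_000
      for _ in range(n_photons):
          if rng.random() > P_ABSORB:
              continue                     # transmitted straight through the panel
          while True:
              if rng.random() > QY:
                  break                    # lost non-radiatively
              if rng.random() > P_TRAP:
                  break                    # emitted into the escape cone
              if rng.random() > P_REABSORB:
                  collected += 1           # reaches a PV cell
                  break
              # otherwise reabsorbed by the dye: loop and attempt reemission
      print("optical collection efficiency ~", collected / n_photons)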

  9. Extraction of diffuse correlation spectroscopy flow index by integration of Nth-order linear model with Monte Carlo simulation

    SciTech Connect

    Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal

    2014-05-12

    The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo mouse model of stroke. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for the different tissue models. Although adding random noise to the DCS data resulted in αD_B variations, the mean errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not depend on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.

  10. Extraction of diffuse correlation spectroscopy flow index by integration of Nth-order linear model with Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Shang, Yu; Li, Ting; Chen, Lei; Lin, Yu; Toborek, Michal; Yu, Guoqiang

    2014-05-01

    The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo mouse model of stroke. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: -5.3% to -18.0%) for the different tissue models. Although adding random noise to the DCS data resulted in αD_B variations, the mean errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not depend on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.

  11. Quantum partition functions of composite particles in a hydrogen-helium plasma via path integral Monte Carlo

    SciTech Connect

    Wendland, D.; Ballenegger, V.; Alastuey, A.

    2014-11-14

    We compute two- and three-body cluster functions that describe contributions of composite entities, like hydrogen atoms, ions H^-, H_2^+, and helium atoms, and also charge-charge and atom-charge interactions, to the equation of state of a hydrogen-helium mixture at low density. A cluster function has the structure of a truncated virial coefficient and behaves, at low temperatures, like a usual partition function for the composite entity. Our path integral Monte Carlo calculations use importance sampling to sample efficiently the cluster partition functions even at low temperatures where bound state contributions dominate. We also employ a new and efficient adaptive discretization scheme that allows one not only to eliminate Coulomb divergencies in discretized path integrals, but also to direct the computational effort where particles are close and thus strongly interacting. The numerical results for the two-body function agree with the analytically known quantum second virial coefficient. The three-body cluster functions are compared at low temperatures with familiar partition functions for composite entities.

  12. Variational Perturbation Theory Path Integral Monte Carlo (VPT-PIMC): Trial Path Optimization Approach for Warm Dense Matter

    NASA Astrophysics Data System (ADS)

    Belof, Jonathan; Dubois, Jonathan

    2013-06-01

    Warm dense matter (WDM), the regime of degenerate and strongly coupled Coulomb systems, is of great interest due to its importance in understanding astrophysical processes and high energy density laboratory experiments. Path Integral Monte Carlo (PIMC) presents a particularly attractive formalism for tackling outstanding questions in WDM, in that electron correlation can be calculated exactly, with the nuclear and electronic degrees of freedom on an equal footing. Here we present an efficient means of solving the Feynman path integral numerically by variational optimization of a trial density matrix, a method originally proposed for simple potentials by Feynman and Kleinert, and we show that this formalism provides an accurate description of warm dense matter with a number of unique advantages over other PIMC approaches. An exchange interaction term is derived for the variationally optimized path, as well as a numerically efficient scheme for dealing with long-range electrostatics. Finally, we present results for the pair correlation functions and thermodynamic observables of the spin-polarized electron gas, warm dense hydrogen, and all-electron warm dense carbon within the presented VPT-PIMC formalism. Lawrence Livermore National Laboratory is operated by Lawrence Livermore National Security, LLC, for the U.S. Department of Energy, National Nuclear Security Administration under Contract DE-AC52-07NA27344.

  13. Finite temperature path integral Monte Carlo simulations of structural and dynamical properties of Ar(N)-CO2 clusters.

    PubMed

    Wang, Lecheng; Xie, Daiqian

    2012-08-21

    We report finite temperature quantum mechanical simulations of structural and dynamical properties of Ar(N)-CO2 clusters using a path integral Monte Carlo algorithm. The simulations are based on a newly developed analytical Ar-CO2 interaction potential obtained by fitting ab initio results to an anisotropic two-dimensional Morse/Long-range function. The calculated distributions of argon atoms around the CO2 molecule in Ar(N)-CO2 clusters of different sizes are consistent with previous studies of the configurations of the clusters. A first-order perturbation theory is used to quantitatively predict the CO2 vibrational frequency shift in different clusters. The first solvation shell is completed at N = 17. Interestingly, our simulations for larger Ar(N)-CO2 clusters showed several different structures of the argon shell around the doped CO2 molecule. The observed two distinct peaks (2338.8 and 2344.5 cm^-1) in the ν3 band of CO2 may be due to the different arrangements of argon atoms around the dopant molecule.

  14. Path integral Monte Carlo study of hydrogen tunneling effect on dielectric properties of molecular crystal 5-Bromo-9-hydroxyphenalenone

    NASA Astrophysics Data System (ADS)

    Otaki, Hiroki; Ando, Koji

    2015-01-01

    The dielectric properties of proton(H)-deuteron(D) mixed crystals of the hydrogen-bonded material 5-Bromo-9-hydroxyphenalenone are studied using a novel path integral Monte Carlo (PIMC) method that takes account of the dipole induction effect depending on the relative proton configurations in the surrounding molecules. The induced dipole is evaluated using the fragment molecular orbital method with electron correlation included by second-order Møller-Plesset perturbation theory and long-range corrected density functional theory. The results show a greater influence of C-H⋯O intermolecular weak hydrogen bonding on the induction than for results evaluated with the Hartree-Fock method. The induction correction is incorporated into the PIMC simulations with a model Hamiltonian that consists of long-range dipolar interactions and a transverse term describing proton tunneling. The relationship between the calculated phase transition temperature and H/D mixing ratio is consistent with the experimental phase diagram, indicating that the balance between the proton tunneling and the collective ordering is appropriately described.

  15. Monte Carlo Simulations of Luminescent Solar Concentrators with Front-Facing Photovoltaic Cells for Building Integrated Photovoltaics

    NASA Astrophysics Data System (ADS)

    Leow, Shin; Corrado, Carley; Osborn, Melissa; Carter, Sue

    2013-03-01

    Luminescent solar concentrators (LSCs) have the ability to receive light from a wide range of angles and concentrate the captured light onto small photoactive areas. This enables LSCs to be integrated more extensively into buildings, as windows and wall claddings, in addition to rooftop installations. LSCs with front-facing PV cells collect both direct and concentrated light, ensuring a gain factor greater than one. This also allows for flexibility in determining the placement and percentage coverage of PV cells when designing panels, to balance reabsorption losses, power output and the level of concentration desired. A Monte Carlo ray-tracing program was developed to study the transport of photons and loss mechanisms in LSC panels and to aid in design optimization. The program imports measured absorption/emission spectra and transmission coefficients as simulation parameters. Interactions of photons with the LSC panel are determined by comparing calculated probabilities with random number generators. Simulation results reveal optimal panel dimensions and PV cell layouts to achieve maximum power output.

  16. Is there a stable commensurate solid phase in the second 4He layer on graphite? - path integral Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Ahn, Jeonghwan; Lee, Hoonkyung; Kwon, Yongkyung

    2015-03-01

    Existence of a stable commensurate structure in the second 4He layer on graphite has been a subject of intensive experimental and theoretical studies because of its implication for the possible realization of two-dimensional supersolidity. Earlier path-integral Monte Carlo (PIMC) calculations of Pierce and Manousakis predicted a stable C4/7 commensurate structure above the first-layer 4He atoms fixed at triangular lattice sites, but Corboz et al. later showed that no commensurate phase was stable when quantum dynamics of the first-layer 4He atoms was incorporated in the PIMC calculations. On the other hand, recent heat capacity measurements of Nakamura et al. provided strong evidence for a commensurate solid in the second 4He layer over an extended density range. Motivated by this, we have performed new PIMC calculations for the second helium layer on graphite. Unlike previous PIMC calculations, where a laterally-averaged one-dimensional substrate potential was used, we here employ an anisotropic 4He-graphite potential described by a sum of the 4He-C pair potentials. With this fully corrugated substrate potential we obtain a more accurate description of the quantum dynamics of the first-layer 4He atoms and analyze its effects on the phase diagram of the second layer.

  17. Accurate determination of the Gibbs energy of Cu-Zr melts using the thermodynamic integration method in Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Harvey, J.-P.; Gheribi, A. E.; Chartrand, P.

    2011-08-01

    The design of multicomponent alloys based on specific thermo-physical properties, determined experimentally or predicted from theoretical calculations, is of major importance in many engineering applications. A procedure based on Monte Carlo simulations (MCS) and the thermodynamic integration (TI) method to improve the quality of the predicted thermodynamic properties calculated from classical thermodynamic calculations is presented in this study. The Gibbs energy function of the liquid phase of the Cu-Zr system at 1800 K has been determined based on this approach. The internal structure of Cu-Zr melts and amorphous alloys at different temperatures, as well as other physical properties, were also obtained from MCS in which the phase trajectory was modeled by the modified embedded atom model formalism. A rigorous comparison between available experimental data and simulated thermo-physical properties obtained from our MCS is presented in this work. The modified quasichemical model in the pair approximation was parameterized using the internal structure data obtained from our MCS and the precise Gibbs energy function calculated at 1800 K from the TI method. The predicted activity of copper in Cu-Zr melts at 1499 K obtained from our thermodynamic optimization was corroborated by experimental data found in the literature. The validity of the amplitude of the entropy of mixing obtained from the in silico procedure presented in this work was analyzed based on the thermodynamic description of hard-sphere mixtures.
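
    The thermodynamic integration step itself is generic: the free energy difference between a reference and a target system is the integral of ⟨dU/dλ⟩ over the coupling parameter λ, each average coming from a Metropolis Monte Carlo run. The one-dimensional toy below illustrates only this machinery, not the MEAM-based alloy simulations of the paper.

      import numpy as np

      # Thermodynamic integration sketch: Delta F = integral over lambda of
      # <dU/dlambda>_lambda, each average from a Metropolis Monte Carlo run.
      # Toy 1-D system switching a harmonic reference (lambda=0) into a
      # quartic potential (lambda=1) at beta = 1. Illustrative only.
      rng = np.random.default_rng(4)
      U = lambda x, lam: (1.0 - lam) * 0.5 * x**2 + lam * 0.25 * x**4
      dU_dlam = lambda x: 0.25 * x**4 - 0.5 * x**2

      def mc_average(lam, n_steps=60_000, burn_in=5_000):
          x, samples = 0.0, []
          for step in range(n_steps):
              x_new = x + rng.uniform(-0.8, 0.8)
              if rng.random() < np.exp(U(x, lam) - U(x_new, lam)):
                  x = x_new
              if step >= burn_in:
                  samples.append(dU_dlam(x))
          return np.mean(samples)

      lams, wts = np.polynomial.legendre.leggauss(5)   # 5-point Gauss rule
      lams, wts = 0.5 * (lams + 1.0), 0.5 * wts        # mapped to [0,1]
      delta_F = sum(w * mc_average(l) for l, w in zip(lams, wts))
      print("Delta F ~", delta_F)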

  18. Characterization and Monte Carlo simulation of single ion Geiger mode avalanche diodes integrated with a quantum dot nanostructure

    NASA Astrophysics Data System (ADS)

    Sharma, Peter; Abraham, J. B. S.; Ten Eyck, G.; Childs, K. D.; Bielejec, E.; Carroll, M. S.

    Detection of single ion implantation within a nanostructure is necessary for the high-yield fabrication of implanted donor-based quantum computing architectures. Single ion Geiger mode avalanche (SIGMA) diodes with a laterally integrated nanostructure capable of forming a quantum dot were fabricated and characterized using photon pulses. The detection efficiency of this design was measured as a function of wavelength, lateral position, and for varying delay times between the photon pulse and the overbias detection window. Monte Carlo simulations based only on the random diffusion of photo-generated carriers and the geometrical placement of the avalanche region agree qualitatively with the device characterization. Based on these results, SIGMA detection efficiency appears to be determined solely by the diffusion of photo-generated electron-hole pairs into a buried avalanche region. Device performance is then highly dependent on the uniformity of the underlying silicon substrate and the proximity of photo-generated carriers to the silicon-silicon dioxide interface, which are the most important limiting factors for reaching the single ion detection limit with SIGMA detectors. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  19. Path integral Monte Carlo simulations of H2 adsorbed to lithium-doped benzene: A model for hydrogen storage materials

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Kolmann, Stephen J.; D'Arcy, Jordan H.; Crittenden, Deborah L.; Jordan, Meredith J. T.

    2015-11-01

    Finite temperature quantum and anharmonic effects are studied in H2-Li+-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H2. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H2 molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔUads, and enthalpy, ΔHads, for H2 adsorption onto Li+-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling: coupling between the intermolecular degrees of freedom becomes less important as temperature increases, whereas anharmonicity becomes more important. The most anharmonic motions in H2-Li+-benzene are the "helicopter" and "ferris wheel" H2 rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔUads and ΔHads are -13.3 ± 0.1 and -14.5 ± 0.1 kJ mol^-1, respectively.

  1. Path integral Monte Carlo simulations of H2 adsorbed to lithium-doped benzene: A model for hydrogen storage materials.

    PubMed

    Lindoy, Lachlan P; Kolmann, Stephen J; D'Arcy, Jordan H; Crittenden, Deborah L; Jordan, Meredith J T

    2015-11-21

    Finite temperature quantum and anharmonic effects are studied in H2-Li(+)-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H2. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H2 molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔUads, and enthalpy, ΔHads, for H2 adsorption onto Li(+)-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling: coupling between the intermolecular degrees of freedom becomes less important as temperature increases, whereas anharmonicity becomes more important. The most anharmonic motions in H2-Li(+)-benzene are the "helicopter" and "ferris wheel" H2 rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔUads and ΔHads are -13.3 ± 0.1 and -14.5 ± 0.1 kJ mol(-1), respectively. PMID:26590532

  2. Monte Carlo neutrino oscillations

    SciTech Connect

    Kneller, James P.; McLaughlin, Gail C.

    2006-03-01

    We demonstrate that the effects of matter upon neutrino propagation may be recast as the scattering of the initial neutrino wave function. Exchanging the differential Schrödinger equation for an integral equation for the scattering matrix S permits a Monte Carlo method for the computation of S that removes many of the numerical difficulties associated with direct integration techniques.
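
    The same generic recipe, recasting propagation as an integral equation and computing its Neumann series by sampling, can be demonstrated on a Fredholm equation of the second kind with a known solution. The kernel, source, and survival probability below are toy choices, not the neutrino problem of the paper.

      import numpy as np

      # Random-walk estimator for a Fredholm equation of the second kind,
      #   S(x) = f(x) + integral over [0,1] of K(x, y) S(y) dy,
      # summing the Neumann series by sampling chains of "collisions".
      # Toy data: constant kernel K = 0.5 and source f = 1, so S(x) = 2 exactly.
      rng = np.random.default_rng(5)
      K = lambda x, y: 0.5
      f = lambda x: 1.0
      P_CONTINUE = 0.8                     # survival probability of the walk

      def walk(x):
          total, weight = f(x), 1.0
          while rng.random() < P_CONTINUE:
              y = rng.random()             # next point, uniform density 1 on [0,1]
              weight *= K(x, y) / P_CONTINUE
              total += weight * f(y)
              x = y
          return total

      estimates = [walk(0.3) for _ in range(200_000)]
      print("S(0.3) ~", np.mean(estimates))    # exact value: 2.0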

  3. Integration of Monte-Carlo ray tracing with a stochastic optimisation method: application to the design of solar receiver geometry.

    PubMed

    Asselineau, Charles-Alexis; Zapata, Jose; Pye, John

    2015-06-01

    A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.

  4. Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield

    SciTech Connect

    Cramer, S.N.; Roussin, R.W.

    1981-11-01

    A Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield is presented. The analysis covers neutron source energies from 15 down to 2 MeV. The multigroup MORSE code was used with the VITAMIN C 171-36 neutron-gamma-ray cross-section data set. Both neutron and gamma-ray count rates and unfolded energy spectra are presented and compared with experimental results, with good general agreement.

  5. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over the past two decades, the Monte Carlo technique has become a gold standard in the simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of MC simulations, with speed-ups comparable to a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing. PMID:26249663
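
    For orientation, the kernel that such ports parallelise is a weighted photon random walk. Below is a minimal single-photon version in Python (homogeneous semi-infinite medium, Henyey-Greenstein scattering, Russian roulette omitted for brevity); the optical coefficients are illustrative, and this is a generic MCML-style sketch rather than the authors' code.

      import numpy as np

      rng = np.random.default_rng(2)
      mu_a, mu_s, g = 0.1, 10.0, 0.9   # absorption, scattering (1/mm), anisotropy
      mu_t = mu_a + mu_s

      def run_photon():
          """Track one photon; return the weight escaping the surface, else 0."""
          pos, u, w = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
          while w > 1e-4:
              pos = pos - np.log(rng.random()) / mu_t * u   # free flight
              if pos[2] < 0.0:
                  return w              # escaped: contributes to reflectance
              w *= mu_s / mu_t          # deposit the absorbed fraction
              tmp = (1 - g * g) / (1 - g + 2 * g * rng.random())
              ct = (1 + g * g - tmp * tmp) / (2 * g)   # Henyey-Greenstein cosine
              st, phi = np.sqrt(1 - ct * ct), 2 * np.pi * rng.random()
              if abs(u[2]) > 0.999:     # nearly vertical: simple update
                  u = np.array([st * np.cos(phi), st * np.sin(phi),
                                np.sign(u[2]) * ct])
              else:                     # general rotation of the direction
                  d = np.sqrt(1 - u[2] ** 2)
                  u = np.array([st * (u[0] * u[2] * np.cos(phi) - u[1] * np.sin(phi)) / d + u[0] * ct,
                                st * (u[1] * u[2] * np.cos(phi) + u[0] * np.sin(phi)) / d + u[1] * ct,
                                -st * np.cos(phi) * d + u[2] * ct])
          return 0.0

      n = 20000
      print("diffuse reflectance ~", sum(run_photon() for _ in range(n)) / n)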

  6. A functional–structural kiwifruit vine model integrating architecture, carbon dynamics and effects of the environment

    PubMed Central

    Cieslak, Mikolaj; Seleznyova, Alla N.; Hanan, Jim

    2011-01-01

    Background and Aims Functional–structural modelling can be used to increase our understanding of how different aspects of plant structure and function interact, identify knowledge gaps and guide priorities for future experimentation. By integrating existing knowledge of the different aspects of the kiwifruit (Actinidia deliciosa) vine's architecture and physiology, our aim is to develop conceptual and mathematical hypotheses on several of the vine's features: (a) plasticity of the vine's architecture; (b) effects of organ position within the canopy on its size; (c) effects of environment and horticultural management on shoot growth, light distribution and organ size; and (d) role of carbon reserves in early shoot growth. Methods Using the L-system modelling platform, a functional–structural plant model of a kiwifruit vine was created that integrates architectural development, mechanistic modelling of carbon transport and allocation, and environmental and management effects on vine and fruit growth. The branching pattern was captured at the individual shoot level by modelling axillary shoot development using a discrete-time Markov chain. An existing carbon transport resistance model was extended to account for several source/sink components of individual plant elements. A quasi-Monte Carlo path-tracing algorithm was used to estimate the absorbed irradiance of each leaf. Key Results Several simulations were performed to illustrate the model's potential to reproduce the major features of the vine's behaviour. The model simulated vine growth responses that were qualitatively similar to those observed in experiments, including the plastic response of shoot growth to local carbon supply, the branching patterns of two Actinidia species, the effect of carbon limitation and topological distance on fruit size and the complex behaviour of sink competition for carbon. Conclusions The model is able to reproduce differences in vine and fruit growth arising from various

  7. Path-Integral Monte Carlo Determination of the Fourth-Order Virial Coefficient for a Unitary Two-Component Fermi Gas with Zero-Range Interactions

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2016-06-01

    The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b4 of such a strongly interacting Fermi gas using a customized ab initio path-integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b4 , our b4 agrees within error bars with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly antisymmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions.

  8. Path integral Monte Carlo determination of the fourth-order virial coefficient for unitary two-component Fermi gas with zero-range interactions

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2016-05-01

    The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b4 of such a strongly interacting Fermi gas using a customized ab initio path integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b4, our b4 agrees with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly anti-symmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions. We gratefully acknowledge support by the NSF.

  9. Path-Integral Monte Carlo Determination of the Fourth-Order Virial Coefficient for a Unitary Two-Component Fermi Gas with Zero-Range Interactions.

    PubMed

    Yan, Yangqian; Blume, D

    2016-06-10

    The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b4 of such a strongly interacting Fermi gas using a customized ab initio path-integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b4, our b4 agrees within error bars with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly antisymmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions. PMID:27341213

  10. ITS version 5.0: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2004-06-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  11. Integration and evaluation of automated Monte Carlo simulations in the clinical practice of scanned proton and carbon ion beam therapy

    NASA Astrophysics Data System (ADS)

    Bauer, J.; Sommerer, F.; Mairani, A.; Unholtz, D.; Farook, R.; Handrack, J.; Frey, K.; Marcelos, T.; Tessonnier, T.; Ecker, S.; Ackermann, B.; Ellerbrock, M.; Debus, J.; Parodi, K.

    2014-08-01

    Monte Carlo (MC) simulations of beam interaction and transport in matter are increasingly considered as essential tools to support several aspects of radiation therapy. Despite the vast application of MC to photon therapy and scattered proton therapy, clinical experience in scanned ion beam therapy is still scarce. This is especially the case for ions heavier than protons, which pose additional issues like nuclear fragmentation and varying biological effectiveness. In this work, we present the evaluation of a dedicated framework which has been developed at the Heidelberg Ion Beam Therapy Center to provide automated FLUKA MC simulations of clinical patient treatments with scanned proton and carbon ion beams. Investigations on the number of transported primaries and the dimension of the geometry and scoring grids have been performed for a representative class of patient cases in order to provide recommendations on the simulation settings, showing that recommendations derived from the experience in proton therapy cannot be directly translated to the case of carbon ion beams. The MC results with the optimized settings have been compared to the calculations of the analytical treatment planning system (TPS), showing that, regardless of the consistency of the two systems (in terms of beam model in water and range calculation in different materials), relevant differences can be found in dosimetric quantities and range, especially in the case of heterogeneous and deep-seated treatment sites, depending on the ion beam species and energies, homogeneity of the traversed tissue and size of the treated volume. The analysis of typical TPS speed-up approximations highlighted effects which deserve accurate treatment, in contrast to adequate beam model simplifications for scanned ion beam therapy. In terms of biological dose calculations, the investigation of the mixed field components in realistic anatomical situations confirmed the findings of previous groups so far reported only in

  12. A Monte Carlo Approach to Modeling the Breakup of the Space Launch System EM-1 Core Stage with an Integrated Blast and Fragment Catalogue

    NASA Technical Reports Server (NTRS)

    Richardson, Erin; Hays, M. J.; Blackwood, J. M.; Skinner, T.

    2014-01-01

    The Liquid Propellant Fragment Overpressure Acceleration Model (L-FOAM) is a tool developed by Bangham Engineering Incorporated (BEi) that produces a representative debris cloud from an exploding liquid-propellant launch vehicle. Here it is applied to the Core Stage (CS) of the National Aeronautics and Space Administration (NASA) Space Launch System (SLS) launch vehicle. A combination of Probability Density Functions (PDF) based on empirical data from rocket accidents and applicable tests, as well as SLS-specific geometry, are combined in a MATLAB script to create a unique fragment catalogue each time L-FOAM is run, tailored for a Monte Carlo approach to risk analysis. By accelerating the debris catalogue with the BEi blast model for liquid hydrogen / liquid oxygen explosions, the result is a fully integrated code that models the destruction of the CS at a given point in its trajectory and generates hundreds of individual fragment catalogues with initial imparted velocities. The BEi blast model provides the blast size (radius) and strength (overpressure) as probabilities based on empirical data and anchored with analytical work. The coupling of the L-FOAM catalogue with the BEi blast model is validated with a simulation of the Project PYRO S-IV destruct test. When running a Monte Carlo simulation, L-FOAM can accelerate all catalogues with the same blast (mean blast, 2σ blast, etc.), or vary the blast size and strength based on their respective probabilities. L-FOAM then propagates these fragments until impact with the earth. Results from L-FOAM include a description of each fragment (dimensions, weight, ballistic coefficient, type and initial location on the rocket), imparted velocity from the blast, and impact data depending on user desired application. L-FOAM application is for both near-field (fragment impact to escaping crew capsule) and far-field (fragment ground impact footprint) safety considerations. The user is thus able to use statistics from a Monte Carlo

  13. Quantum Gibbs ensemble Monte Carlo

    SciTech Connect

    Fantoni, Riccardo; Moroni, Saverio

    2014-09-21

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of {sup 4}He in two dimensions.

  14. General polarizability and hyperpolarizability estimators for the path-integral Monte Carlo method applied to small atoms, ions, and molecules at finite temperatures

    NASA Astrophysics Data System (ADS)

    Tiihonen, Juha; Kylänpää, Ilkka; Rantala, Tapio T.

    2016-09-01

    The nonlinear optical properties of matter have a broad relevance and many methods have been invented to compute them from first principles. However, the effects of electronic correlation, finite temperature, and breakdown of the Born-Oppenheimer approximation have turned out to be challenging and tedious to model. Here we propose a straightforward approach and derive general field-free polarizability and hyperpolarizability estimators for the path-integral Monte Carlo method. The estimators are applied to small atoms, ions, and molecules with one or two electrons. With the adiabatic, i.e., Born-Oppenheimer, approximation we obtain accurate tensorial ground state polarizabilities, while the nonadiabatic simulation adds in considerable rovibrational effects and thermal coupling. In both cases, the 0 K, or ground-state, limit is in excellent agreement with the literature. Furthermore, we report here the internal dipole moment of the PsH molecule, the temperature dependence of the polarizabilities of H-, and the average dipole polarizabilities and the ground-state hyperpolarizabilities of HeH+ and H3+.

  15. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
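
    The scaling step itself is a one-liner once the layer path fractions are known. A minimal sketch follows, assuming a stand-in zero-absorption curve and constant layer fractions; in the paper the fractions come from the closed-form average-path expression and vary with time.

      import numpy as np

      c_tissue = 0.214                      # light speed in tissue, mm/ps (n ~ 1.4)
      t = np.linspace(10.0, 2000.0, 200)    # ps
      R0 = t ** -1.5 * np.exp(-t / 500.0)   # assumed zero-absorption curve
      f = np.array([0.6, 0.4])              # assumed mean path fraction per layer
      mu_a = np.array([0.002, 0.010])       # absorption per layer, 1/mm

      path = c_tissue * t                   # total photon path length at time t
      R = R0 * np.exp(-path * (f * mu_a).sum())   # weighted Beer-Lambert scaling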

  16. Optical properties measurement of laser coagulated tissues with double integrating sphere and inverse Monte Carlo technique in the wavelength range from 350 to 2100 nm

    NASA Astrophysics Data System (ADS)

    Honda, Norihiro; Nanjo, Takuya; Ishii, Katsunori; Awazu, Kunio

    2012-03-01

    In laser medicine, the accurate knowledge about the optical properties (absorption coefficient; μa, scattering coefficient; μs, anisotropy factor; g) of laser irradiated tissues is important for the prediction of light propagation in tissues, since the efficacy of laser treatment depends on the photon propagation within the irradiated tissues. Thus, it is likely that the optical properties of tissues at near-ultraviolet, visible and near-infrared wavelengths will be more important due to more biomedical applications of lasers will be developed. For improvement of the laser induced thermotherapy, the optical property change during laser treatment should be considered in the wide wavelength range. For estimation of the optical properties of the biological tissues, the optical properties measurement system with a double integrating sphere setup and an inverse Monte Carlo technique was developed. The optical properties of chicken muscle tissue were measured in the native state and after laser coagulation using the optical properties measurement system in the wavelength range from 350 to 2100 nm. A CO2 laser was used for laser coagulation. After laser coagulation, the reduced scattering coefficient of the tissue increased. And, the optical penetration depth decreased. For improvement of the treatment depth during laser coagulation, a quantitative procedure using the treated tissue optical properties for determination of the irradiation power density following light penetration decrease might be important in clinic.

  17. A stochastic Monte Carlo approach to modelling real star cluster evolution - III. Direct integration of three- and four-body interactions

    NASA Astrophysics Data System (ADS)

    Giersz, M.; Spurzem, R.

    2003-08-01

    Spherically symmetric equal-mass star clusters containing a large number of primordial binaries are studied using a hybrid method, consisting of a gas dynamical model for single stars and a Monte Carlo treatment for relaxation of binaries and the setup of close resonant and fly-by encounters of single stars with binaries and binaries with each other (three- and four-body encounters). In contrast to our previous work, each encounter is integrated using a highly accurate direct few-body integrator which uses regularized variables. Hence we can study the systematic evolution of individual binary orbital parameters (eccentricity, semimajor axis) and differential and total cross-sections for hardening, dissolution or merging of binaries (minimum distance) from a sampling of several tens of thousands of scattering events as they occur in real cluster evolution, including mass segregation of binaries, gravothermal collapse and re-expansion, a binary burning phase and ultimately gravothermal oscillations. For the first time we are able to present empirical cross-sections for eccentricity variation of binaries in close three- and four-body encounters. It is found that a large fraction of three- and four-body encounters result in merging. Eccentricities are generally increased in strong three- and four-body encounters, and the differential cross-section for eccentricity changes follows a characteristic scaling law ∝ exp(4 e_fin), where e_fin is the final eccentricity of the binary, or of the harder binary for four-body encounters. Despite these findings the overall eccentricity distribution remains thermal for all binding energies of binaries, which is understood from the dominant influence of resonant encounters. Previous cross-sections obtained by Spitzer and Gao for strong encounters can be reproduced, while for weak encounters non-standard processes such as the formation of hierarchical triples occur.

  18. Monte Carlo Benchmark

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  19. Analytical Applications of Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    Guell, Oscar A.; Holcombe, James A.

    1990-01-01

    Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)

  20. Path integral Monte Carlo simulations of H{sub 2} adsorbed to lithium-doped benzene: A model for hydrogen storage materials

    SciTech Connect

    Lindoy, Lachlan P.; Kolmann, Stephen J.; D’Arcy, Jordan H.; Jordan, Meredith J. T.; Crittenden, Deborah L.

    2015-11-21

    Finite temperature quantum and anharmonic effects are studied in H{sub 2}–Li{sup +}-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H{sub 2}. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H{sub 2} molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔU{sub ads}, and enthalpy, ΔH{sub ads}, for H{sub 2} adsorption onto Li{sup +}-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling: coupling between the intermolecular degrees of freedom becomes less important as temperature increases, whereas anharmonicity becomes more important. The most anharmonic motions in H{sub 2}–Li{sup +}-benzene are the "helicopter" and "ferris wheel" H{sub 2} rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔU{sub ads} and ΔH{sub ads} are −13.3 ± 0.1 and −14.5 ± 0.1 kJ mol{sup −1}, respectively.

  1. Introducing Quasirandomness to Computer Science

    NASA Astrophysics Data System (ADS)

    Doerr, Benjamin

    The paradigm of quasirandomness led to dramatic progress in different areas of mathematics, with the invention of quasi-Monte Carlo methods in numerical integration probably being the best known example. In the last two decades, discrete mathematics heavily used quasirandom ideas, leading, e.g., to notions like quasirandom graphs.
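
    The numerical-integration payoff is easy to see in a few lines. The sketch below builds a two-dimensional Halton set from van der Corput radical inverses and compares it with pseudorandom points on a smooth test integrand whose exact integral is 1; for smooth integrands the quasi-random error decays roughly like O(N^-1 log^d N) versus O(N^-1/2) for plain Monte Carlo.

      import numpy as np

      def van_der_corput(n, base=2):
          """First n points of the van der Corput radical-inverse sequence."""
          seq = np.empty(n)
          for i in range(n):
              q, denom, x = i + 1, 1.0, 0.0
              while q:
                  q, r = divmod(q, base)
                  denom *= base
                  x += r / denom
              seq[i] = x
          return seq

      def halton(n, bases=(2, 3)):
          return np.column_stack([van_der_corput(n, b) for b in bases])

      # separable test integrand with exact integral 1 over the unit square
      f = lambda p: np.prod(np.pi / 2 * np.sin(np.pi * p), axis=1)

      n = 4096
      print("QMC:", f(halton(n)).mean(),
            "MC:", f(np.random.default_rng(0).random((n, 2))).mean())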

  2. Effectiveness of the statistical potential in the description of fermions in a worm-algorithm path-integral Monte Carlo simulation of 3He atoms placed on a 4He layer adsorbed on graphite.

    PubMed

    Ghassib, Humam B; Sakhel, Asaad R; Obeidat, Omar; Al-Oqali, Amer; Sakhel, Roger R

    2012-01-01

    We demonstrate the effectiveness of a statistical potential (SP) in the description of fermions in a worm-algorithm path-integral Monte Carlo simulation of a few 3He atoms floating on a 4He layer adsorbed on graphite. The SP in this work yields successful results, as manifested by the clusterization of 3He, and by the observation that the 3He atoms float on the surface of 4He. We display the positions of the particles in 3D coordinate space, which reveal clusterization of the 3He component. The correlation functions are also presented, which give further evidence for the clusterization.

  3. Monte Carlo Example Programs

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground state energy of the hydrogen atom.
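
    The flavour of such a program fits in a few lines. This is an independent Python re-sketch of a variational Monte Carlo estimate of the hydrogen ground-state energy, not the distributed FORTRAN source; the trial exponent alpha and the Metropolis step size are illustrative choices.

      import numpy as np

      rng = np.random.default_rng(3)

      # Trial wave function psi = exp(-alpha r); its local energy is
      # E_L = -alpha^2/2 + (alpha - 1)/r, exactly -0.5 Ha at alpha = 1.
      alpha, n_steps, step = 0.9, 50000, 0.5

      r = np.array([1.0, 0.0, 0.0])
      energies = []
      for _ in range(n_steps):
          trial = r + rng.uniform(-step, step, 3)
          # Metropolis acceptance with probability |psi(trial)/psi(r)|^2
          if rng.random() < np.exp(-2 * alpha * (np.linalg.norm(trial)
                                                 - np.linalg.norm(r))):
              r = trial
          energies.append(-0.5 * alpha ** 2 + (alpha - 1.0) / np.linalg.norm(r))

      print(np.mean(energies[5000:]))   # ~ -0.495 Ha for alpha = 0.9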

  4. Modelling personal exposure to particulate air pollution: an assessment of time-integrated activity modelling, Monte Carlo simulation & artificial neural network approaches.

    PubMed

    McCreddin, A; Alam, M S; McNabola, A

    2015-01-01

    An experimental assessment of personal exposure to PM10 in 59 office workers was carried out in Dublin, Ireland. 255 samples of 24-h personal exposure were collected in real time over a 28-month period. A series of modelling techniques were subsequently assessed for their ability to predict 24-h personal exposure to PM10. Artificial neural network modelling, Monte Carlo simulation and time-activity based models were developed and compared. The results of the investigation showed that using the Monte Carlo technique to randomly select concentrations from statistical distributions of exposure concentrations in typical microenvironments encountered by office workers produced the most accurate results, based on 3 statistical measures of model performance. The Monte Carlo simulation technique was also shown to have the greatest potential utility over the other techniques, in terms of predicting personal exposure without the need for further monitoring data. Over the 28-month period only a very weak correlation was found between background air quality and personal exposure measurements, highlighting the need for accurate models of personal exposure in epidemiological studies.
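
    Schematically, the winning approach draws a concentration for each microenvironment from its fitted distribution and time-weights it over the day. A minimal sketch follows, with invented lognormal parameters and time budgets standing in for the study's fitted values:

      import numpy as np

      rng = np.random.default_rng(4)

      # (log-mean, log-sigma) of an assumed lognormal PM10 distribution,
      # plus an assumed daily time budget in hours, per microenvironment
      micro_envs = {
          "home":    (np.log(15.0), 0.5, 14.0),
          "office":  (np.log(20.0), 0.4,  8.0),
          "commute": (np.log(45.0), 0.7,  2.0),
      }

      def simulate_day():
          """Time-weighted 24-h mean exposure (ug/m^3) for one synthetic day."""
          return sum(rng.lognormal(mu, sig) * hrs
                     for mu, sig, hrs in micro_envs.values()) / 24.0

      exposures = np.array([simulate_day() for _ in range(10000)])
      print(exposures.mean(), np.percentile(exposures, 95))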

  5. PREFACE: Polycrystal Modelling with Experimental Integration: A Symposium Honoring Carlos Tomé (San Diego, CA, USA, February 27-March 3 2011)

    NASA Astrophysics Data System (ADS)

    Lebensohn, Ricardo A.

    2012-03-01

    This special issue contains selected contributions from invited speakers to the 'Polycrystal Modelling with Experimental Integration: A Symposium Honoring Carlos Tomé', held as part of the 2011 TMS Annual Meeting and Exhibition, that took place on February 27-March 3, 2011 in San Diego, CA, USA. This symposium honored the remarkable contributions of Dr Carlos N Tomé to the field of mechanical behavior of polycrystalline materials, on the occasion of his 60th birthday. Throughout his career, Dr Tomé has pioneered the theoretical and numerical development of models of polycrystal mechanical behavior, with emphasis on the role played by texture and microstructure on the anisotropic behavior of engineering materials. His many contributions have been critical in establishing a strong connection between models and experiments, and in bridging different scales in the pursuit of robust multiscale models with experimental integration. Among his achievements, the numerical codes that Dr Tomé and co-workers have developed are extensively used in the materials science and engineering community as predictive tools for parameter identification, interpretation of experiments, and multiscale calculations in academia, national laboratories and industry. The symposium brought together materials scientists and engineers to address current theoretical, computational and experimental issues related to microstructure-property relationships in polycrystalline materials deforming in different regimes, including the effects of single crystal anisotropy, texture and microstructure evolution. Synergetic studies, involving different crystal plasticity-based models, including multiscale implementations of the latter, and measurements of global and local textures, internal strains, dislocation structures, twinning, phase distribution, etc, were discussed in more than 90 presentations. The papers in this issue are representative of the different length-scales, materials, and experimental and

  6. Determination of Component Contents of Blend Oil Based on Characteristics Peak Value Integration.

    PubMed

    Xu, Jing; Hou, Pei-guo; Wang, Yu-tian; Pan, Zhao

    2016-01-01

    The edible blend oil market is currently in disarray, with confusing concepts, arbitrary naming, adulteration and, above all, vague standards for the compositions and ratios of blend oils; the national standard has failed to appear after eight years. The underlying problem is the lack of qualitative and quantitative detection of the vegetable oils in blend oil. Edible blend oil is a mixture of different vegetable oils in certain proportions. Because each vegetable oil contributes particular components, blending makes fuller use of their nutrients and balances the overall nutrition, which benefits health; blend oil is eaten frequently in daily life. Accurate determination of the content of each vegetable oil in a blend is therefore an effective way to monitor the blend oil market. Since the types of oil in a blend are known, only their contents need to be determined. Three-dimensional fluorescence spectra are used to determine the contents. A new data processing method is proposed that integrates characteristic peak values over chosen characteristic regions based on the Quasi-Monte Carlo method, combined with a neural network that solves the resulting nonlinear equations for the content of each vegetable oil in the blend. Peanut oil, soybean oil and sunflower oil were blended into edible oils as the research objects, with each single oil regarded as a whole rather than decomposed into its components. Recovery rates for 10 configurations of edible blend oil were measured to verify the validity of the characteristic peak value integration method. The approach provides an effective, highly sensitive way to detect the component contents of complex mixtures, and improves the accuracy of recovery rates compared with the common method of solving linear equations. It can be used to test the kinds and contents of edible vegetable oils in blend oil for food quality detection.
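
    A simplified numerical version of the recovery step: integrate each pure oil's characteristic peak over its chosen region, then recover the mixing fractions from the blend's integrated peaks. The sketch below assumes a linear mixture and uses least squares in place of the paper's neural-network solution of the nonlinear equations; the peak-area matrix and fractions are invented for illustration.

      import numpy as np

      # rows = characteristic regions, cols = peanut, soybean, sunflower;
      # entries are integrated peak values of the pure oils (assumed)
      P = np.array([[1.00, 0.10, 0.05],
                    [0.08, 1.00, 0.12],
                    [0.04, 0.15, 1.00]])
      blend_peaks = P @ np.array([0.5, 0.3, 0.2])   # synthetic blend measurement

      frac, *_ = np.linalg.lstsq(P, blend_peaks, rcond=None)
      frac = np.clip(frac, 0.0, None)
      frac /= frac.sum()            # enforce non-negative, unit-sum fractions
      print(frac)                   # recovers ~[0.5, 0.3, 0.2]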

  8. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.

  9. Integrated Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (f{sub NL}) in the recent CMB data

    SciTech Connect

    Kim, Jaiseung

    2011-04-01

    We have made a Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (f{sub NL}) using the WMAP bispectrum and power spectrum. In our analysis, we have simultaneously constrained f{sub NL} and cosmological parameters so that the uncertainties of cosmological parameters can properly propagate into the f{sub NL} estimation. Investigating the parameter likelihoods deduced from MCMC samples, we find a slight deviation from a Gaussian shape, which makes a Fisher matrix estimation less accurate. Therefore, we have estimated the confidence interval of f{sub NL} by exploring the parameter likelihood without using the Fisher matrix. We find that the best-fit values of our analysis are in good agreement with other results, but the confidence interval is slightly different.
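
    The joint-sampling idea (bundling fNL with the cosmological parameters so that their uncertainties propagate together) reduces schematically to random-walk Metropolis over the combined vector. Below is a generic sketch with a toy Gaussian log-posterior standing in for the WMAP bispectrum-plus-power-spectrum likelihood; every number is illustrative.

      import numpy as np

      rng = np.random.default_rng(5)

      def log_post(theta):
          """Toy stand-in for the joint fNL + cosmology log-posterior."""
          fnl, omega = theta
          return (-0.5 * ((fnl - 30.0) / 20.0) ** 2
                  - 0.5 * ((omega - 0.27) / 0.02) ** 2)

      theta, chain = np.array([0.0, 0.25]), []
      for _ in range(50000):
          prop = theta + rng.normal(0.0, [5.0, 0.005])   # random-walk proposal
          if np.log(rng.random()) < log_post(prop) - log_post(theta):
              theta = prop
          chain.append(theta)
      chain = np.array(chain[5000:])        # discard burn-in

      # confidence interval read directly off the samples (no Fisher matrix)
      print(np.percentile(chain[:, 0], [16, 50, 84]))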

  10. Shell model Monte Carlo methods

    SciTech Connect

    Koonin, S.E.; Dean, D.J.

    1996-10-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, thermal behavior of {gamma}-soft nuclei, and calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  11. SAN CARLOS APACHE PAPERS.

    ERIC Educational Resources Information Center

    ROESSEL, ROBERT A., JR.

    THE FIRST SECTION OF THIS BOOK COVERS THE HISTORICAL AND CULTURAL BACKGROUND OF THE SAN CARLOS APACHE INDIANS, AS WELL AS AN HISTORICAL SKETCH OF THE DEVELOPMENT OF THEIR FORMAL EDUCATIONAL SYSTEM. THE SECOND SECTION IS DEVOTED TO THE PROBLEMS OF TEACHERS OF THE INDIAN CHILDREN IN GLOBE AND SAN CARLOS, ARIZONA. IT IS DIVIDED INTO THREE PARTS--(1)…

  12. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    SciTech Connect

    Densmore, Jeffrey D; Thompson, Kelly G; Urbatsch, Todd J

    2010-11-17

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

  13. ITS version 5.0: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes with CAD geometry.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2005-09-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  14. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  15. Symbolic implicit Monte Carlo

    SciTech Connect

    Brooks, E.D. III )

    1989-08-01

    We introduce a new implicit Monte Carlo technique for solving time dependent radiation transport problems involving spontaneous emission. In the usual implicit Monte Carlo procedure an effective scattering term is dictated by the requirement of self-consistency between the transport and implicitly differenced atomic populations equations. The effective scattering term, a source of inefficiency for optically thick problems, becomes an impasse for problems with gain where its sign is negative. In our new technique the effective scattering term does not occur and the execution time for the Monte Carlo portion of the algorithm is independent of opacity. We compare the performance and accuracy of the new symbolic implicit Monte Carlo technique to the usual effective scattering technique for the time dependent description of a two-level system in slab geometry. We also examine the possibility of effectively exploiting multiprocessors on the algorithm, obtaining supercomputer performance using shared memory multiprocessors based on cheap commodity microprocessor technology. © 1989 Academic Press, Inc.

  16. Vectorized Monte Carlo

    SciTech Connect

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.

  17. Coupled Electron-Ion Monte Carlo calculations of atomic hydrogen

    NASA Astrophysics Data System (ADS)

    Holzmann, Markus; Pierleoni, Carlo; Ceperley, David M.

    2005-07-01

    We present a new Monte Carlo method which couples Path Integral for finite temperature protons with Quantum Monte Carlo for ground state electrons, and we apply it to metallic hydrogen for pressures beyond molecular dissociation. This method fills the gap between high temperature electron-proton Path Integral and ground state Diffusion Monte Carlo methods. Our data exhibit more structure and higher melting temperatures of the proton crystal than Car-Parrinello Molecular Dynamics results using LDA. We further discuss the quantum motion of the protons and the zero temperature limit.

  18. Baseball Monte Carlo Style.

    ERIC Educational Resources Information Center

    Houser, Larry L.

    1981-01-01

    Monte Carlo methods are used to simulate activities in baseball such as a team's "hot streak" and a hitter's "batting slump." Student participation in such simulations is viewed as a useful method of giving pupils a better understanding of the probability concepts involved. (MP)

  19. Carlos Monge Cassinelli: a portrait.

    PubMed

    León-Velarde, Fabiola; Richalet, Jean-Paul

    2007-01-01

    Carlos "Choclo" Monge Cassinelli, a pillar of scientific integrity and great friendship to high altitude researchers throughout the world passed away in 2006, and was honored by his many friends at colleagues at the 2007 International Hypoxia Symposium. Choclo had more than 600 publications to his name, in fields diverse from his medical specialty in renal disease, to the biology of animals adapting to the high altitudes of South America. Those of us who had the pleasure of working with Choclo will always remember the sparkle in his eye, the intelligent, probing questions, and his tremendous sense of humor. He was recognized as a world authority on high altitude dieases, with particular accolades for his work involving high altitude resident populations. This tribute and picture gallery pay tribute to Choclo, written by Fabiola Leon Velarde and Jean Paul Richalet.

  20. Quasi-Random Sequence Generators.

    1994-03-01

    Version 00 LPTAU generates quasi-random sequences. The sequences are uniformly distributed sets of L=2**30 points in the N-dimensional unit cube: I**N=[0,1]. The sequences are used as nodes for multidimensional integration, as searching points in global optimization, as trial points in multicriteria decision making, as quasi-random points for quasi Monte Carlo algorithms.
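
    Usage of such a generator is typically a two-liner: ask for N points in the unit cube and feed them to the integrand. The sketch below substitutes SciPy's Sobol generator for LPTAU (both produce low-discrepancy sequences in the unit cube); the dimension, point count and integrand are illustrative.

      import numpy as np
      from scipy.stats import qmc   # SciPy >= 1.7

      sampler = qmc.Sobol(d=3, scramble=True, seed=0)
      pts = sampler.random_base2(m=12)      # 2^12 quasi-random points in [0,1]^3

      f = lambda p: np.exp(-np.sum(p ** 2, axis=1))   # toy integrand
      print(pts.shape, f(pts).mean())       # quasi-Monte Carlo estimate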

  1. Improved geometry representations for Monte Carlo radiation transport.

    SciTech Connect

    Martin, Matthew Ryan

    2004-08-01

    ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.

  2. Monte Carlo fluorescence microtomography

    NASA Astrophysics Data System (ADS)

    Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge

    2011-07-01

    Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light significantly degrades the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and yields high-quality reconstructions. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probes accurately and reliably.

  3. Parallel CARLOS-3D code development

    SciTech Connect

    Putnam, J.M.; Kotulski, J.D.

    1996-02-01

    CARLOS-3D is a three-dimensional scattering code which was developed under the sponsorship of the Electromagnetic Code Consortium, and is currently used by over 80 aerospace companies and government agencies. The code has been extensively validated and runs on both serial workstations and parallel supercomputers such as the Intel Paragon. CARLOS-3D is a three-dimensional surface integral equation scattering code based on a Galerkin method of moments formulation employing Rao-Wilton-Glisson roof-top basis functions for triangular faceted surfaces. Fully arbitrary 3D geometries composed of multiple conducting and homogeneous bulk dielectric materials can be modeled. This presentation describes some of the extensions to the CARLOS-3D code, and how the operator structure of the code facilitated these improvements. Body of revolution (BOR) and two-dimensional geometries were incorporated by simply including new input routines, and the appropriate Galerkin matrix operator routines. Some additional modifications were required in the combined field integral equation matrix generation routine due to the symmetric nature of the BOR and 2D operators. Quadrilateral patched surfaces with linear roof-top basis functions were also implemented in the same manner. Quadrilateral facets and triangular facets can be used in combination to more efficiently model geometries with both large smooth surfaces and surfaces with fine detail such as gaps and cracks. Since the parallel implementation in CARLOS-3D is at a high level, these changes were independent of the computer platform being used. This approach minimizes code maintenance, while providing capabilities with little additional effort. Results are presented showing the performance and accuracy of the code for some large scattering problems. Comparisons between triangular faceted and quadrilateral faceted geometry representations will be shown for some complex scatterers.

  4. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  5. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551

  6. Isotropic Monte Carlo Grain Growth

    2013-04-25

    IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
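
    A minimal isotropic version of such a simulation can be sketched directly: Metropolis dynamics on an odd-r offset hexagonal lattice with periodic wrap, in which a site attempts to adopt a random neighbour's grain orientation. Lattice size, number of orientations Q, temperature and sweep count below are assumed, and this is an independent illustration rather than the IMCGG source.

      import numpy as np

      rng = np.random.default_rng(6)
      L, Q, kT = 64, 32, 0.3
      spins = rng.integers(Q, size=(L, L))

      # odd-r offset hex neighbourhood: even and odd rows use different offsets
      NBRS = {0: [(-1, -1), (-1, 0), (0, -1), (0, 1), (1, -1), (1, 0)],
              1: [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, 0), (1, 1)]}

      def unlike(i, j, s):
          """Count the 6 hex neighbours of (i, j) whose state differs from s."""
          return sum(s != spins[(i + di) % L, (j + dj) % L]
                     for di, dj in NBRS[i % 2])

      for _ in range(50 * L * L):               # ~50 Monte Carlo sweeps
          i, j = rng.integers(L), rng.integers(L)
          di, dj = NBRS[i % 2][rng.integers(6)]
          new = spins[(i + di) % L, (j + dj) % L]   # adopt a neighbour's grain
          dE = unlike(i, j, new) - unlike(i, j, spins[i, j])
          if dE <= 0 or rng.random() < np.exp(-dE / kT):
              spins[i, j] = new

      print(len(np.unique(spins)), "grains remain")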

  7. Innovation Lecture Series - Carlos Dominguez

    NASA Video Gallery

    Carlos Dominguez is a Senior Vice President at Cisco Systems and a technology evangelist, speaking to and motivating audiences worldwide about how technology is changing how we communicate, collabo...

  8. Carlos Chagas: biographical sketch.

    PubMed

    Moncayo, Alvaro

    2010-01-01

    Carlos Chagas was born on 9 July 1878 on the farm "Bon Retiro" located close to the City of Oliveira in the interior of the State of Minas Gerais, Brazil. He started his medical studies in 1897 at the School of Medicine of Rio de Janeiro. In the late nineteenth century, the works by Louis Pasteur and Robert Koch induced a change in the medical paradigm with emphasis on experimental demonstrations of the causal link between microbes and disease. During the same years in Germany appeared the pathological concept of disease, linking organic lesions with symptoms. All these innovations were adopted by the reforms of the medical schools in Brazil and influenced the scientific formation of Chagas. Chagas completed his medical studies between 1897 and 1903, and his examinations during these years were always ranked with high grades. Oswaldo Cruz accepted Chagas as a doctoral candidate and directed his thesis on "Hematological studies of Malaria", which was received with honors by the examiners. In 1903 the director appointed Chagas as research assistant at the Institute. In those years, the Institute of Manguinhos, under the direction of Oswaldo Cruz, initiated a process of institutional growth and gathered a distinguished group of Brazilian and foreign scientists. In 1907, he was requested to investigate and control a malaria outbreak in Lassance, Minas Gerais. At that moment Chagas could not have imagined that this field research would be the beginning of one of the most notable medical discoveries. Chagas was, at the age of 28, a Research Assistant at the Institute of Manguinhos and was studying a new flagellate parasite isolated from triatomine insects captured in the State of Minas Gerais. Chagas made his discoveries in this order: first the causal agent, then the vector and finally the human cases. These notable discoveries were carried out by Chagas in twenty months. At the age of 33 Chagas had completed his discoveries and published the scientific articles that gave him world

  9. Shifted-Contour Monte Carlo Method for Nuclear Structure

    SciTech Connect

    Stoitcheva, G.S.; Dean, D.J.

    2004-09-13

    We propose a new approach for alleviating the 'sign' problem in the nuclear shell model Monte Carlo method. The approach relies on modifying the integration contour of the Hubbard-Stratonovich transformation to pass through an imaginary stationary point in the auxiliary field associated with the Hartree-Fock density.

  10. Quantum Monte Carlo simulation with a black hole

    NASA Astrophysics Data System (ADS)

    Benić, Sanjin; Yamamoto, Arata

    2016-05-01

    We perform quantum Monte Carlo simulations in the background of a classical black hole. The lattice discretized path integral is numerically calculated in the Schwarzschild metric and in its approximated metric. We study spontaneous symmetry breaking of a real scalar field theory. We observe inhomogeneous symmetry breaking induced by an inhomogeneous gravitational field.

  11. Angular biasing in implicit Monte-Carlo

    SciTech Connect

    Zimmerman, G.B.

    1994-10-20

    Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller weight photons at the polar regions of the capsule where small mass zones are most sensitive to statistical noise.
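
    The weight bookkeeping behind such biasing is easy to demonstrate: over-sample directions inside the small cone that subtends the capsule and multiply each photon's weight by the ratio of analog to biased densities, so the estimate stays unbiased. The azimuthally symmetric toy below (direction cosine mu only) uses assumed values for the cone cut-off and the biased cone probability; it is not the production implementation.

      import numpy as np

      rng = np.random.default_rng(7)

      mu_cut = 0.95                    # cone mu in [mu_cut, 1] hits the capsule
      f_cone = (1.0 - mu_cut) / 2.0    # analog (isotropic) cone probability
      q = 0.5                          # biased probability assigned to the cone

      def sample_direction():
          """Return (mu, weight) under the biased angular distribution."""
          if rng.random() < q:                      # inside the cone
              mu = mu_cut + (1.0 - mu_cut) * rng.random()
              w = f_cone / q                        # p_analog / p_biased
          else:                                     # outside the cone
              mu = -1.0 + (1.0 + mu_cut) * rng.random()
              w = (1.0 - f_cone) / (1.0 - q)
          return mu, w

      hits, n = 0.0, 200000
      for _ in range(n):
          mu, w = sample_direction()
          if mu >= mu_cut:
              hits += w
      # both ~ f_cone, but half of all biased samples now land on the capsule
      print(hits / n, "vs analog", f_cone)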

  12. Discrete Diffusion Monte Carlo for grey Implicit Monte Carlo simulations.

    SciTech Connect

    Densmore, J. D.; Urbatsch, T. J.; Evans, T. M.; Buksas, M. W.

    2005-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a hybrid transport-diffusion method for Monte Carlo simulations in diffusive media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Thus, DDMC produces accurate solutions while increasing the efficiency of the Monte Carlo calculation. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for grey Implicit Monte Carlo calculations. First, we employ a diffusion equation that is discretized in space but continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. In addition, we treat particles incident on an optically thick region using the asymptotic diffusion-limit boundary condition. This interface technique can produce accurate solutions even if the incident particles are distributed anisotropically in angle. Finally, we develop a method for estimating radiation momentum deposition during the DDMC simulation. With a set of numerical examples, we demonstrate the accuracy and efficiency of our improved DDMC method.

  13. Application of biasing techniques to the contributon Monte Carlo method

    SciTech Connect

    Dubi, A.; Gerstl, S.A.W.

    1980-01-01

    Recently, a new Monte Carlo method called the contributon Monte Carlo method was developed. The method is based on the theory of contributons, and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfying results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table.

  14. Coupled Electron-Ion Monte Carlo Calculations of Dense Metallic Hydrogen

    NASA Astrophysics Data System (ADS)

    Pierleoni, Carlo; Ceperley, David M.; Holzmann, Markus

    2004-09-01

    We present an efficient new Monte Carlo method which couples path integrals for finite temperature protons with quantum Monte Carlo calculations for ground state electrons, and we apply it to metallic hydrogen for pressures beyond molecular dissociation. We report data for the equation of state for temperatures across the melting of the proton crystal. Our data exhibit more structure and higher melting temperatures of the proton crystal than do Car-Parrinello molecular dynamics results. This method fills the gap between high temperature electron-proton path integral and ground state diffusion Monte Carlo methods and should have wide applicability.

  15. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  16. Multilevel sequential Monte Carlo samplers

    DOE PAGESBeta

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h0 > h1 > … > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
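
    The telescoping identity at the heart of MLMC is compact enough to sketch. The Python below estimates an expectation over a toy Euler-discretized stochastic model rather than the paper's PDE/SMC setting; levels are coupled by sharing Brownian increments, and for brevity every level uses the same sample count, whereas a real implementation allocates samples from estimated level variances.

        import numpy as np

        rng = np.random.default_rng(1)

        def euler_gbm(n_steps, dW, x0=1.0, mu=0.05, sigma=0.2, T=1.0):
            """Euler-Maruyama paths of geometric Brownian motion, one row of
            Brownian increments dW per sample."""
            h = T / n_steps
            x = np.full(dW.shape[0], x0)
            for k in range(n_steps):
                x = x + mu * x * h + sigma * x * dW[:, k]
            return x

        def mlmc_estimate(g, L=5, n_samples=20000, T=1.0):
            """Telescoping MLMC estimate of E[g(X_T)] with levels h_l = T/2^l."""
            est = 0.0
            for l in range(L + 1):
                nf = 2**l                        # fine steps on level l
                dWf = rng.normal(0.0, np.sqrt(T / nf), size=(n_samples, nf))
                fine = g(euler_gbm(nf, dWf, T=T))
                if l == 0:
                    est += fine.mean()
                else:
                    # Coarse path shares the fine randomness: pair up increments.
                    dWc = dWf[:, 0::2] + dWf[:, 1::2]
                    est += (fine - g(euler_gbm(nf // 2, dWc, T=T))).mean()
            return est

        print(mlmc_estimate(lambda x: np.maximum(x - 1.0, 0.0)))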

  17. Monte Carlo calculations of nuclei

    SciTech Connect

    Pieper, S.C.

    1997-10-01

    Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.

  18. Monte Carlo Experiments: Design and Implementation.

    ERIC Educational Resources Information Center

    Paxton, Pamela; Curran, Patrick J.; Bollen, Kenneth A.; Kirby, Jim; Chen, Feinian

    2001-01-01

    Illustrates the design and planning of Monte Carlo simulations, presenting nine steps in planning and performing a Monte Carlo analysis from developing a theoretically derived question of interest through summarizing the results. Uses a Monte Carlo simulation to illustrate many of the relevant points. (SLD)

  19. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  20. A Residual Monte Carlo Method for Spatially Discrete, Angularly Continuous Radiation Transport

    SciTech Connect

    Wollaeger, Ryan T.; Densmore, Jeffery D.

    2012-06-19

    Residual Monte Carlo provides exponential convergence of statistical error with respect to the number of particle histories. In the past, residual Monte Carlo has been applied to a variety of angularly discrete radiation-transport problems. Here, we apply residual Monte Carlo to spatially discrete, angularly continuous transport. By maintaining angular continuity, our method avoids the deficiencies of angular discretizations, such as ray effects. For planar geometry and step differencing, we use the corresponding integral transport equation to calculate an angularly independent residual from the scalar flux in each stage of residual Monte Carlo. We then demonstrate that the resulting residual Monte Carlo method does indeed converge exponentially to within machine precision of the exact step differenced solution.

  1. Monte Carlo Study of Real Time Dynamics on the Lattice

    NASA Astrophysics Data System (ADS)

    Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo F.; Vartak, Sohan; Warrington, Neill C.

    2016-08-01

    Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from a highly oscillatory phase of the path integral. In this Letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and, in principle, applicable to quantum field theory albeit very slow. We discuss some possible improvements that should speed up the algorithm.

  2. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    NASA Astrophysics Data System (ADS)

    Rubenstein, Brenda M.

    Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While

  3. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  4. The D0 Monte Carlo

    SciTech Connect

    Womersley, J.

    1992-10-01

    The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb⁻¹ data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  5. Better HMC integrators for dynamical simulations

    SciTech Connect

    Clark, M.A.; Joo, Balint; Kennedy, A.D.; Silva, P.J.

    2010-06-01

    We show how to improve the molecular dynamics step of Hybrid Monte Carlo, both by tuning the integrator using Poisson brackets measurements and by the use of force gradient integrators. We present results for moderate lattice sizes.
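
    The molecular dynamics step being tuned is, in the baseline algorithm, a leapfrog integration followed by a Metropolis test. For reference, here is a minimal Hybrid Monte Carlo update for a one-dimensional Gaussian target; the step size and trajectory length are illustrative knobs (the quantities such tuning would adjust), not values from the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        def leapfrog(q, p, grad_U, eps, n_steps):
            """Standard leapfrog: half kick, alternating drifts and kicks, half kick."""
            p = p - 0.5 * eps * grad_U(q)
            for _ in range(n_steps - 1):
                q = q + eps * p
                p = p - eps * grad_U(q)
            q = q + eps * p
            p = p - 0.5 * eps * grad_U(q)
            return q, p

        def hmc_step(q, U, grad_U, eps=0.2, n_steps=10):
            """One HMC update with a Metropolis accept/reject on the energy error."""
            p0 = rng.normal()
            q1, p1 = leapfrog(q, p0, grad_U, eps, n_steps)
            dH = (U(q1) + 0.5 * p1**2) - (U(q) + 0.5 * p0**2)
            return q1 if np.log(rng.random()) < -dH else q

        # Example target: standard normal, U(q) = q^2 / 2.
        q, chain = 0.0, []
        for _ in range(5000):
            q = hmc_step(q, lambda s: 0.5 * s**2, lambda s: s)
            chain.append(q)

    Force-gradient integrators replace the simple kick with one containing a force-derivative term, shrinking the Hamiltonian error per step; the Metropolis test itself is unchanged.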

  6. Monte Carlo calculation of monitor unit for electron arc therapy

    SciTech Connect

    Chow, James C. L.; Jiang Runqing

    2010-04-15

    for electron arc therapy. Since Monte Carlo simulations can generate a precalculated database of ROF, SSD offset, and DF for the MU calculation, with a reduction in human effort and linac beam-on time, it is recommended that Monte Carlo simulations be partially or completely integrated into the commissioning of electron arc therapy.

  7. Monte Carlo modelling of TRIGA research reactor

    NASA Astrophysics Data System (ADS)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rod worths as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  8. Monte Carlo simulation of a quantized universe.

    NASA Astrophysics Data System (ADS)

    Berger, Beverly K.

    1988-08-01

    A Monte Carlo simulation method which yields groundstate wave functions for multielectron atoms is applied to quantized cosmological models. In quantum mechanics, the propagator for the Schrödinger equation reduces to the absolute value squared of the groundstate wave function in the limit of infinite Euclidean time. The wave function of the universe as the solution to the Wheeler-DeWitt equation may be regarded as the zero energy mode of a Schrödinger equation in coordinate time. The simulation evaluates the path integral formulation of the propagator by constructing a large number of paths and computing their contribution to the path integral using the Metropolis algorithm to drive the paths toward a global minimum in the path energy. The result agrees with a solution to the Wheeler-DeWitt equation which has the characteristics of a nodeless groundstate wave function. Oscillatory behavior cannot be reproduced although the simulation results may be physically reasonable. The primary advantage of the simulations is that they may easily be extended to cosmologies with many degrees of freedom. Examples with one, two, and three degrees of freedom (d.f.) are presented.

  9. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.

  10. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.

  11. Present status of vectorized Monte Carlo

    SciTech Connect

    Brown, F.B.

    1987-01-01

    Monte Carlo applications have traditionally been limited by the large amounts of computer time required to produce acceptably small statistical uncertainties, so the immediate benefit of vectorization is an increase in either the number of jobs completed or the number of particles processed per job, typically by one order of magnitude or more. This results directly in improved engineering design analyses, since Monte Carlo methods are used as standards for correcting more approximate methods. The relatively small number of vectorized programs is a consequence of the newness of vectorized Monte Carlo, the difficulties of nonportability, and the very large development effort required to rewrite or restructure Monte Carlo codes for vectorization. Based on the successful efforts to date, it may be concluded that Monte Carlo vectorization will spread to increasing numbers of codes and applications. The possibility of multitasking provides even further motivation for vectorizing Monte Carlo, since the step from vector to multitasked vector is relatively straightforward.

  12. Multiple-time-stepping generalized hybrid Monte Carlo methods

    SciTech Connect

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  13. Multidimensional stochastic approximation Monte Carlo.

    PubMed

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2). PMID:27415383

  14. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2).

  15. Monte Carlo surface flux tallies

    SciTech Connect

    Favorite, Jeffrey A

    2010-11-19

    Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
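
    The scoring rule under discussion is short enough to state in code. The sketch below assumes an estimator that adds 1/|mu| per surface crossing and substitutes a constant score inside the grazing band; substitute_fraction is the quantity at issue (1/2 in standard practice, 2/3 in the situations the abstract identifies).

        def surface_flux_score(mu, cutoff=0.1, substitute_fraction=0.5):
            """Score for one surface crossing with direction cosine mu. Grazing
            crossings (|mu| < cutoff) would make the 1/|mu| estimator blow up,
            so they are scored as if mu were substitute_fraction * cutoff."""
            amu = abs(mu)
            return 1.0 / amu if amu >= cutoff else 1.0 / (substitute_fraction * cutoff)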

  16. Uncertainty Propagation with Fast Monte Carlo Techniques

    NASA Astrophysics Data System (ADS)

    Rochman, D.; van der Marck, S. C.; Koning, A. J.; Sjöstrand, H.; Zwermann, W.

    2014-04-01

    Two new and faster Monte Carlo methods for the propagation of nuclear data uncertainties in Monte Carlo nuclear simulations are presented (the "Fast TMC" and "Fast GRS" methods). They address the main drawback of the original Total Monte Carlo method (TMC), namely the large multiplication of computing time relative to a single calculation. With these new methods, Monte Carlo simulations can now be accompanied by uncertainty propagation (other than statistical), with small additional calculation time. The new methods are presented and compared with the TMC methods for criticality benchmarks.

  17. Hybrid Monte Carlo with non-uniform step size.

    PubMed

    Holzgräfe, Christian; Bhattacherjee, Arnab; Irbäck, Anders

    2014-01-28

    The Hybrid Monte Carlo method offers a rigorous and potentially efficient approach to the simulation of dense systems, by combining numerical integration of Newton's equations of motion with a Metropolis accept-or-reject step. The Metropolis step corrects for sampling errors caused by the discretization of the equations of motion. The integration is usually performed using a uniform step size. Here, we present simulations of the Lennard-Jones system showing that the use of smaller time steps in the tails of each integration trajectory can reduce errors in energy. The acceptance rate is 10-15 percentage points higher in these runs, compared to simulations with the same trajectory length and the same number of integration steps but a uniform step size. We observe similar effects for the harmonic oscillator and a coarse-grained peptide model, indicating generality of the approach.
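
    One way to realize the idea is to keep the leapfrog update but shrink the step size at the two ends of each trajectory with a palindromic schedule, which preserves the reversibility the Metropolis step relies on. The sketch below makes that assumption concrete; the taper fractions and the kick-drift-kick form are illustrative choices, not the paper's exact protocol.

        import numpy as np

        def tapered_schedule(eps, n_steps, taper=0.5, n_taper=2):
            """Palindromic step sizes: the first and last n_taper steps are
            shrunk by 'taper' so the trajectory tails are integrated finely."""
            s = np.full(n_steps, eps)
            s[:n_taper] *= taper
            s[-n_taper:] *= taper
            return s

        def leapfrog_nonuniform(q, p, grad_U, steps):
            """Kick-drift-kick per step; each sub-map is volume preserving, and
            a palindromic schedule keeps the whole map time reversible."""
            for eps in steps:
                p = p - 0.5 * eps * grad_U(q)
                q = q + eps * p
                p = p - 0.5 * eps * grad_U(q)
            return q, p

        # Example: harmonic oscillator trajectory, U(q) = q^2 / 2.
        q, p = leapfrog_nonuniform(0.5, 1.0, lambda s: s, tapered_schedule(0.2, 12))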

  18. The MCLIB library: Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-09-01

    Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, first select a neutron from the source distribution, and project it through the instrument using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something, and then (if it hits the detector) tally it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
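
    The select-project-tally loop described above fits in a few lines outside of Fortran. The toy below (a one-dimensional diffuse source, a slit, and a detector histogram, with all geometry numbers invented for illustration; it borrows nothing from MCLIB's actual modules) follows the same pattern.

        import numpy as np

        rng = np.random.default_rng(3)

        def run_instrument(n_neutrons=100_000, slit_half_width=0.01, L=2.0):
            """Toy instrument: sample a neutron from the source, project it through
            a slit at distance L, and tally survivors on a detector at 2L."""
            edges = np.linspace(-0.05, 0.05, 51)
            hist = np.zeros(50)
            for _ in range(n_neutrons):
                y0 = rng.uniform(-0.02, 0.02)         # source position (m)
                theta = rng.uniform(-0.01, 0.01)      # divergence (rad)
                y_slit = y0 + L * np.tan(theta)
                if abs(y_slit) > slit_half_width:
                    continue                          # absorbed at the slit
                y_det = y_slit + L * np.tan(theta)    # free flight to detector
                k = np.searchsorted(edges, y_det) - 1
                if 0 <= k < hist.size:
                    hist[k] += 1                      # tally detected neutron
            return edges, hist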

  19. Monte Carlo Simulations for Radiobiology

    NASA Astrophysics Data System (ADS)

    Ackerman, Nicole; Bazalova, Magdalena; Chang, Kevin; Graves, Edward

    2012-02-01

    The relationship between tumor response and radiation is currently modeled as dose, quantified on the mm or cm scale through measurement or simulation. This does not take into account modern knowledge of cancer, including tissue heterogeneities and repair mechanisms. We perform Monte Carlo simulations utilizing Geant4 to model radiation treatment on a cellular scale. Biological measurements are correlated to simulated results, primarily the energy deposit in nuclear volumes. One application is modeling dose enhancement through the use of high-Z materials, such as gold nanoparticles. The model matches in vitro data and predicts dose enhancement ratios for a variety of in vivo scenarios. This model shows promise for both treatment design and furthering our understanding of radiobiology.

  20. Recent developments in quantum Monte Carlo simulations with applications for cold gases.

    PubMed

    Pollet, Lode

    2012-09-01

    This is a review of recent developments in Monte Carlo methods in the field of ultracold gases. For bosonic atoms in an optical lattice we discuss path-integral Monte Carlo simulations with worm updates and show the excellent agreement with cold atom experiments. We also review recent progress in simulating bosonic systems with long-range interactions, disordered bosons, mixtures of bosons and spinful bosonic systems. For repulsive fermionic systems, determinantal methods at half filling are sign free, but in general no sign-free method exists. We review the developments in diagrammatic Monte Carlo for the Fermi polaron problem and the Hubbard model, and show the connection with dynamical mean-field theory. We end the review with diffusion Monte Carlo for the Stoner problem in cold gases.

  1. Realistic Monte Carlo Simulation of PEN Apparatus

    NASA Astrophysics Data System (ADS)

    Glaser, Charles; PEN Collaboration

    2015-04-01

    The PEN collaboration undertook to measure the π+ → e+νe(γ) branching ratio with a relative uncertainty of 5×10⁻⁴ or less at the Paul Scherrer Institute. This observable is highly susceptible to small non-(V−A) contributions, i.e., non-Standard Model physics. The detector system included a beam counter, a mini TPC for beam tracking, an active degrader and stopping target, MWPCs and a plastic scintillator hodoscope for particle tracking and identification, and a spherical CsI EM calorimeter. GEANT 4 Monte Carlo simulation is integral to the analysis as it is used to generate fully realistic events for all pion and muon decay channels. The simulated events are constructed so as to match the pion beam profiles, divergence, and momentum distribution. Ensuring the placement of individual detector components at the sub-millimeter level, and proper construction of active target waveforms and associated noise, enables us to more fully understand temporal and geometrical acceptances as well as energy, time, and positional resolutions and calibrations in the detector system. This ultimately leads to reliable discrimination of background events, thereby improving cut-based or multivariate branching ratio extraction. Work supported by NSF Grants PHY-0970013, 1307328, and others.

  2. A detection method of vegetable oils in edible blended oil based on three-dimensional fluorescence spectroscopy technique.

    PubMed

    Xu, Jing; Liu, Xiao-Fei; Wang, Yu-Tian

    2016-12-01

    Edible blended vegetable oils are made from two or more refined oils. Blended oils can provide a wider range of essential fatty acids than single vegetable oils, which helps support good nutrition. Nutritional components in blended oils are related to the type and content of the vegetable oils used, and a new, more accurate method is proposed to identify and quantify the vegetable oils present using cluster analysis and a quasi-Monte Carlo integral. Three-dimensional fluorescence spectra were obtained at 250-400 nm (excitation) and 260-750 nm (emission). Mixtures of sunflower, soybean and peanut oils were used as typical examples to validate the effectiveness of the method.
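
    The abstract does not spell out the integral being evaluated, so the sketch below shows only the generic quasi-Monte Carlo ingredient: averaging a function over the unit square at low-discrepancy (here Halton) points instead of pseudorandom ones. The integrand is an invented example.

        import numpy as np

        def van_der_corput(n, base):
            """First n terms of the van der Corput radical-inverse sequence."""
            seq = np.zeros(n)
            for i in range(n):
                f, x, k = 1.0, 0.0, i + 1
                while k > 0:
                    f /= base
                    x += f * (k % base)
                    k //= base
                seq[i] = x
            return seq

        def qmc_integrate_2d(f, n=4096):
            """Quasi-Monte Carlo estimate of the integral of f over [0, 1]^2,
            using 2-D Halton points (bases 2 and 3)."""
            u, v = van_der_corput(n, 2), van_der_corput(n, 3)
            return f(u, v).mean()

        # Example: integral of x*y over the unit square (exact value 0.25).
        print(qmc_integrate_2d(lambda x, y: x * y))

    For smooth integrands the quasi-random points typically give an error decaying faster than the N^(-1/2) of plain Monte Carlo, which is the usual motivation for the quasi-Monte Carlo step.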

  3. Quantitative PET Imaging Using A Comprehensive Monte Carlo System Model

    SciTech Connect

    Southekal, S.; Purschke, M.L.; Schlyer, D.J.; Vaska, P.

    2011-10-01

    We present the complete image generation methodology developed for the RatCAP PET scanner, which can be extended to other PET systems for which a Monte Carlo-based system model is feasible. The miniature RatCAP presents a unique set of advantages as well as challenges for image processing, and a combination of conventional methods and novel ideas developed specifically for this tomograph have been implemented. The crux of our approach is a low-noise Monte Carlo-generated probability matrix with integrated corrections for all physical effects that impact PET image quality. The generation and optimization of this matrix are discussed in detail, along with the estimation of correction factors and their incorporation into the reconstruction framework. Phantom studies and Monte Carlo simulations are used to evaluate the reconstruction as well as individual corrections for random coincidences, photon scatter, attenuation, and detector efficiency variations in terms of bias and noise. Finally, a realistic rat brain phantom study reconstructed using this methodology is shown to recover >90% of the contrast for hot as well as cold regions. The goal has been to realize the potential of quantitative neuroreceptor imaging with the RatCAP.

  4. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
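
    The expectation trick the article describes is: if U is uniform on [a, b], then E[f(U)] equals the integral of f divided by (b - a), so (b - a) times a sample mean of f(U) estimates the integral. A minimal Python rendering of the idea (the article's own examples are in Visual Basic):

        import numpy as np

        rng = np.random.default_rng(4)

        def mc_integral(f, a, b, n=100_000):
            """Estimate the integral of f over [a, b] as (b - a) * mean(f(U)),
            U ~ Uniform(a, b), along with the standard error."""
            y = (b - a) * f(rng.uniform(a, b, size=n))
            return y.mean(), y.std(ddof=1) / np.sqrt(n)

        # Example: integral of sin(x) on [0, pi] (exact value 2).
        est, err = mc_integral(np.sin, 0.0, np.pi)
        print(f"{est:.4f} +/- {err:.4f}")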

  5. Experimental validation of plutonium ageing by Monte Carlo correlated sampling

    SciTech Connect

    Litaize, O.; Bernard, D.; Santamarina, A.

    2006-07-01

    Integral measurements of plutonium ageing were performed in two homogeneous MOX cores (MISTRAL2 and MISTRAL3) of the French MISTRAL Programme between 1996 and 2000. The analysis of the MISTRAL2 experiment with the JEF-2.2 nuclear data library highlighted an underestimation of the ²⁴¹Am capture cross section. The next experiment (MISTRAL3) did not lead to the same conclusion. This paper presents a new analysis performed with the recent JEFF-3.1 library and a Monte Carlo perturbation method (correlated sampling) available in the French TRIPOLI4 code. (authors)

  6. Monte Carlo Shower Counter Studies

    NASA Technical Reports Server (NTRS)

    Snyder, H. David

    1991-01-01

    Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs are provided along with example data plots.

  7. "Full Model" Nuclear Data and Covariance Evaluation Process Using TALYS, Total Monte Carlo and Backward-forward Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bauge, E.

    2015-01-01

    The "Full model" evaluation process, that is used in CEA DAM DIF to evaluate nuclear data in the continuum region, makes extended use of nuclear models implemented in the TALYS code to account for experimental data (both differential and integral) by varying the parameters of these models until a satisfactory description of these experimental data is reached. For the evaluation of the covariance data associated with this evaluated data, the Backward-forward Monte Carlo (BFMC) method was devised in such a way that it mirrors the process of the "Full model" evaluation method. When coupled with the Total Monte Carlo method via the T6 system developed by NRG Petten, the BFMC method allows to make use of integral experiments to constrain the distribution of model parameters, and hence the distribution of derived observables and their covariance matrix. Together, TALYS, TMC, BFMC, and T6, constitute a powerful integrated tool for nuclear data evaluation, that allows for evaluation of nuclear data and the associated covariance matrix, all at once, making good use of all the available experimental information to drive the distribution of the model parameters and the derived observables.

  8. Monte Carlo Ion Transport Analysis Code.

    2009-04-15

    Version: 00 TRIPOS is a versatile Monte Carlo ion transport analysis code. It has been applied to the treatment of both surface and bulk radiation effects. The media considered are composed of multilayer polyatomic materials.

  9. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  10. Monte carlo sampling of fission multiplicity.

    SciTech Connect

    Hendricks, J. S.

    2004-01-01

    Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly ³He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k_eff of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
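
    The two baseline samplers the abstract contrasts are easy to sketch. The code below shows the traditional bracketing sampler and a naive truncated-Gaussian sampler that exhibits the positive bias being discussed; the paper's offset and corrected-zero-point fixes are deliberately not reproduced here.

        import numpy as np

        rng = np.random.default_rng(5)

        def sample_nu_traditional(nubar):
            """Bracketing sampler: pick the integers around nubar with
            probabilities preserving the mean (2.7 -> 3 w.p. 0.7, 2 w.p. 0.3)."""
            lo = int(np.floor(nubar))
            return lo + (rng.random() < (nubar - lo))

        def sample_nu_gaussian_naive(nubar, width):
            """Gaussian sampler with rejection of negatives. Discarding the
            negative tail inflates the mean, the bias the paper corrects."""
            while True:
                nu = rng.normal(nubar, width)
                if nu >= 0.0:
                    return int(round(nu))

        draws = [sample_nu_gaussian_naive(1.0, 1.0) for _ in range(100_000)]
        print(np.mean(draws))   # noticeably above 1.0: the negative-tail bias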

  11. Monte Carlo simulation of aorta autofluorescence

    NASA Astrophysics Data System (ADS)

    Kuznetsova, A. A.; Pushkareva, A. E.

    2016-08-01

    Results of numerical simulation of aorta autofluorescence by the Monte Carlo method are reported. Two states of the aorta, normal and with atherosclerotic lesions, are studied. A model of the studied tissue is developed on the basis of information about its optical, morphological, and physico-chemical properties. It is shown that the data obtained by numerical Monte Carlo simulation are in good agreement with experimental results, indicating the adequacy of the developed model of aorta autofluorescence.

  12. Using Nuclear Theory, Data and Uncertainties in Monte Carlo Transport Applications

    SciTech Connect

    Rising, Michael Evan

    2015-11-03

    These are slides for a presentation on using nuclear theory, data and uncertainties in Monte Carlo transport applications. The following topics are covered: nuclear data (experimental data versus theoretical models, data evaluation and uncertainty quantification), fission multiplicity models (fixed source applications, criticality calculations), uncertainties and their impact (integral quantities, sensitivity analysis, uncertainty propagation).

  13. CMS Monte Carlo production in the WLCG computing grid

    NASA Astrophysics Data System (ADS)

    Hernández, J. M.; Kreuzer, P.; Mohapatra, A.; Filippis, N. D.; Weirdt, S. D.; Hof, C.; Wakefield, S.; Guan, W.; Khomitch, A.; Fanfani, A.; Evans, D.; Flossdorf, A.; Maes, J.; Mulders, P. v.; Villella, I.; Pompili, A.; My, S.; Abbrescia, M.; Maggi, G.; Donvito, G.; Caballero, J.; Sanches, J. A.; Kavka, C.; Lingen, F. v.; Bacchi, W.; Codispoti, G.; Elmer, P.; Eulisse, G.; Lazaridis, C.; Kalini, S.; Sarkar, S.; Hammad, G.

    2008-07-01

    Monte Carlo production in CMS has received a major boost in performance and scale since the past CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of production scale by about an order of magnitude, capable of running of the order of ten thousand jobs in parallel and yielding more than two million events per day.

  14. The ATLAS Fast Monte Carlo Production Chain Project

    NASA Astrophysics Data System (ADS)

    Jansky, Roland

    2015-12-01

    During the last years ATLAS has successfully deployed a new integrated simulation framework (ISF) which allows a flexible mixture of full and fast detector simulation techniques within the processing of one event. The possible speed-up in detector simulation of up to a factor of 100 thereby achieved makes subsequent digitization and reconstruction the dominant contributions to the Monte Carlo (MC) production CPU cost. The slowest components of both digitization and reconstruction lie in the Inner Detector: in digitization because of the complex signal modeling needed to emulate the detector readout, and in reconstruction because of the combinatorial nature of the problem to solve. Alternative fast approaches have been developed for these components: for the silicon-based detectors, a simpler geometrical clustering approach has been deployed, replacing the charge drift emulation in the standard digitization modules while achieving a very high accuracy in describing the standard output. For Inner Detector track reconstruction, a trajectory building based on Monte Carlo generator information has been deployed with the aim of bypassing the CPU-intensive pattern recognition. Together with the ISF, all components have been integrated into a new fast MC production chain, aiming to produce fast MC simulated data in sufficient agreement with fully simulated and reconstructed data at a processing time of seconds per event, compared to several minutes for full simulation.

  15. Independent pixel and Monte Carlo estimates of stratocumulus albedo

    NASA Technical Reports Server (NTRS)

    Cahalan, Robert F.; Ridgway, William; Wiscombe, Warren J.; Gollmer, Steven; HARSHVARDHAN

    1994-01-01

    Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid water is uniformly distributed. The authors compare such estimates with those based on a more realistic distribution generated from a fractal model of marine stratocumulus clouds belonging to the class of 'bounded cascade' models. In this model the cloud top and base are fixed, so that all variations in cloud shape are ignored. The model generates random variations in liquid water along a single horizontal direction, forming fractal cloud streets while conserving the total liquid water in the cloud field. The model reproduces the mean, variance, and skewness of the vertically integrated cloud liquid water, as well as its observed wavenumber spectrum, which is approximately a power law. The Monte Carlo method keeps track of the three-dimensional paths solar photons take through the cloud field, using a vectorized implementation of a direct technique. The simplifications in the cloud field studied here allow the computations to be accelerated. The Monte Carlo results are compared to those of the independent pixel approximation, which neglects net horizontal photon transport. Differences between the Monte Carlo and independent pixel estimates of the mesoscale-average albedo are on the order of 1% for conservative scattering, while the plane-parallel bias itself is an order of magnitude larger. As cloud absorption increases, the independent pixel approximation agrees even more closely with the Monte Carlo estimates. This result holds for a wide range of sun angles and aspect ratios. Thus, horizontal photon transport can be safely neglected in estimates of the area-average flux for such cloud models. This result relies on the rapid falloff of the wavenumber spectrum of stratocumulus, which ensures that the smaller

  16. Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport

    SciTech Connect

    McKinley, M S; Brooks III, E D; Daffin, F

    2004-12-13

    Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.

  17. Monte Carlo simulation with fixed steplength for diffusion processes in nonhomogeneous media

    NASA Astrophysics Data System (ADS)

    Ruiz Barlett, V.; Hoyuelos, M.; Mártin, H. O.

    2013-04-01

    Monte Carlo simulation is one of the most important tools in the study of diffusion processes. For constant diffusion coefficients, an appropriate Gaussian distribution of particle steplengths can generate exact results when compared with integration of the diffusion equation. It is important to notice that the same method is completely erroneous when applied to non-homogeneous diffusion coefficients. A simple alternative, jumping at fixed steplengths with appropriate transition probabilities, produces correct results. Here, a model for diffusion of calcium ions in the neuromuscular junction of the crayfish is used as a test to compare Monte Carlo simulation with fixed and Gaussian steplengths.
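
    One common way to realize "fixed steplengths with appropriate transition probabilities" is to bias each hop by the diffusion coefficient evaluated at the half-step points and to advance the clock by a matching waiting time. The sketch below adopts that choice for illustration; the paper's exact transition rule may differ in detail.

        import numpy as np

        rng = np.random.default_rng(6)

        def step_fixed_length(x, D, dx):
            """One fixed-length hop for a spatially varying diffusion
            coefficient D(x), with hop probabilities weighted by D at the
            half-step points and a site-dependent mean waiting time."""
            d_right, d_left = D(x + 0.5 * dx), D(x - 0.5 * dx)
            p_right = d_right / (d_right + d_left)
            dt = dx**2 / (d_right + d_left)   # reduces to dx^2/(2D) for constant D
            return (x + dx if rng.random() < p_right else x - dx), dt

        # Example: advance one walker to t >= 1 with D(x) = 1 + x^2.
        x, t = 0.0, 0.0
        while t < 1.0:
            x, dt = step_fixed_length(x, lambda s: 1.0 + s * s, 0.05)
            t += dt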

  18. FREYA-a new Monte Carlo code for improved modeling of fission chains

    SciTech Connect

    Hagmann, C A; Randrup, J; Vogt, R L

    2012-06-12

    A new simulation capability for modeling of individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events, providing correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses, including correlations, and the development of SNM detectors with increased sensitivity.

  19. Monte Carlo Shielding Analysis Capabilities with MAVRIC

    SciTech Connect

    Peplow, Douglas E.

    2011-01-01

    Monte Carlo shielding analysis capabilities in SCALE 6 are centered on the CADIS (Consistent Adjoint Driven Importance Sampling) methodology. CADIS is used to create an importance map for space/energy weight windows as well as a biased source distribution. New to SCALE 6 are the Monaco functional module, a multi-group fixed-source Monte Carlo transport code, and the MAVRIC sequence (Monaco with Automated Variance Reduction Using Importance Calculations). MAVRIC uses the Denovo code (also new to SCALE 6) to compute coarse-mesh discrete ordinates solutions which are used by CADIS to form an importance map and biased source distribution for the Monaco Monte Carlo code. MAVRIC allows the user to optimize the Monaco calculation for a specific tally using the CADIS method with little extra input compared to a standard Monte Carlo calculation. When computing several tallies at once or a mesh tally over a large volume of space, an extension of the CADIS method called FW-CADIS can be used to help the Monte Carlo simulation spread particles over phase space to get more uniform relative uncertainties.

  20. Shell model the Monte Carlo way

    SciTech Connect

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  1. Interaction picture density matrix quantum Monte Carlo

    SciTech Connect

    Malone, Fionn D.; Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  2. Monte Carlo electron/photon transport

    SciTech Connect

    Mack, J.M.; Morel, J.E.; Hughes, H.G.

    1985-01-01

    A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs.

  3. Geodesic Monte Carlo on Embedded Manifolds

    PubMed Central

    Byrne, Simon; Girolami, Mark

    2013-01-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024

  4. Monte carlo simulations of organic photovoltaics.

    PubMed

    Groves, Chris; Greenham, Neil C

    2014-01-01

    Monte Carlo simulations are a valuable tool to model the generation, separation, and collection of charges in organic photovoltaics where charges move by hopping in a complex nanostructure and Coulomb interactions between charge carriers are important. We review the Monte Carlo techniques that have been applied to this problem, and describe the results of simulations of the various recombination processes that limit device performance. We show how these processes are influenced by the local physical and energetic structure of the material, providing information that is useful for design of efficient photovoltaic systems.

  5. Fast quantum Monte Carlo on a GPU

    NASA Astrophysics Data System (ADS)

    Lutsyshyn, Y.

    2015-02-01

    We present a scheme for the parallelization of quantum Monte Carlo method on graphical processing units, focusing on variational Monte Carlo simulation of bosonic systems. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent utilization of the accelerator. The CUDA code is provided along with a package that simulates liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the Kepler architecture K20 GPU. Special optimization was developed for the Kepler cards, including placement of data structures in the register space of the Kepler GPUs. Kepler-specific optimization is discussed.

  6. Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-12-31

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.

  7. Interaction picture density matrix quantum Monte Carlo.

    PubMed

    Malone, Fionn D; Blunt, N S; Shepherd, James J; Lee, D K K; Spencer, J S; Foulkes, W M C

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  8. Geodesic Monte Carlo on Embedded Manifolds.

    PubMed

    Byrne, Simon; Girolami, Mark

    2013-12-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton-Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024

  9. Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.

    PubMed

    Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari

    2014-01-01

    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of irregularly shaped food products based on 3D reconstruction. However, 3D reconstruction comes with a high computational cost, and some volume measurement methods based on it have low accuracy. Another approach measures the volume of objects with the Monte Carlo method, which uses random points and requires only the information of whether each random point falls inside or outside the object; no 3D reconstruction is needed. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of each food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
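
    As a rough sketch of the inside/outside sampling idea described above (and only that; the authors' camera-and-heuristic-adjustment pipeline is not reproduced here), the following Python fragment estimates a volume from a bounding box and a hypothetical inside() predicate, with a unit sphere standing in for the test against the binary images:

      import random

      def mc_volume(inside, bounds, n=200_000, seed=1):
          """Monte Carlo volume estimate: sample points uniformly in a
          bounding box and count the fraction that fall inside the body."""
          rng = random.Random(seed)
          (x0, x1), (y0, y1), (z0, z1) = bounds
          box = (x1 - x0) * (y1 - y0) * (z1 - z0)
          hits = sum(
              inside(rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1))
              for _ in range(n)
          )
          return box * hits / n

      # Hypothetical stand-in for the image-based inside/outside test:
      unit_sphere = lambda x, y, z: x * x + y * y + z * z <= 1.0
      print(mc_volume(unit_sphere, bounds=((-1, 1), (-1, 1), (-1, 1))))  # ~4.19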

  10. Infinite variance in fermion quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
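
    A generic numerical illustration of the symptom (not the authors' QMC setting or their bridge-link remedy): an estimator with a finite mean but infinite variance still converges run by run, yet the naive error bar computed from the sample variance is unreliable. A minimal sketch, assuming NumPy:

      import numpy as np

      # Sample mean of a classical Pareto(alpha=1.5) variable: the true mean
      # alpha/(alpha-1) = 3 is finite, but the variance is infinite.
      rng = np.random.default_rng(0)
      for run in range(5):
          x = 1.0 + rng.pareto(1.5, size=10**6)   # classical Pareto, x_m = 1
          est = x.mean()
          err = x.std(ddof=1) / np.sqrt(x.size)   # naive error bar
          print(f"run {run}: {est:.4f} +/- {err:.4f}   (true mean = 3)")
      # The run-to-run spread typically exceeds the quoted +/- values,
      # which is the diagnostic signature of the infinite-variance problem.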

  11. Application of Monte Carlo codes to neutron dosimetry

    SciTech Connect

    Prevo, C.T.

    1982-06-15

    In neutron dosimetry, calculations enable one to predict the response of a proposed dosimeter before effort is expended to design and fabricate the neutron instrument or dosimeter. The nature of these calculations requires the use of computer programs that implement mathematical models representing the transport of radiation through attenuating media. Numerical, and in some cases analytical, solutions of these models can be obtained by one of several calculational techniques. All of these techniques are either approximate solutions to the well-known Boltzmann equation or are based on kernels obtained from solutions to the equation. The Boltzmann equation is a precise mathematical description of neutron behavior in terms of position, energy, direction, and time. The solution of the transport equation represents the average value of the particle flux density. Integral forms of the transport equation are generally regarded as the formal basis for the Monte Carlo method, the results of which can in principle be made to approach the exact solution. This paper focuses on the Monte Carlo technique.

  12. Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study

    PubMed Central

    Zhang, Ying; Feng, Yuanming; Ming, Xin

    2016-01-01

    A novel treatment modality termed energy modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine beam energy for each gantry angle/anatomy configuration from a pool of photon energy beams (2 to 10 MV) with a newly developed energy selector. An inverse planning system using gradient search algorithm was then employed to optimize photon beam intensity of various beam energies based on presimulated Monte Carlo pencil beam dose distributions in patient anatomy. Finally, 3D dose distributions in six patients of different tumor sites were simulated with Monte Carlo method and compared between EMXRT plans and clinical IMRT plans. Compared to current IMRT technique, the proposed EMXRT method could offer a better paradigm for the radiotherapy of lung cancers and pediatric brain tumors in terms of normal tissue sparing and integral dose. For prostate, head and neck, spine, and thyroid lesions, the EMXRT plans were generally comparable to the IMRT plans. Our feasibility study indicated that lower energy (<6 MV) photon beams could be considered in modern radiotherapy treatment planning to achieve a more personalized care for individual patient with dosimetric gains. PMID:26977413

  13. Infinite variance in fermion quantum Monte Carlo calculations.

    PubMed

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling. PMID:27078480

  14. Monte Carlo simulations of lattice gauge theories

    SciTech Connect

    Rebbi, C

    1980-02-01

    Monte Carlo simulations done for four-dimensional lattice gauge systems are described, where the gauge group is one of the following: U(1); SU(2); Z_N, i.e., the subgroup of U(1) consisting of the elements e^(2πin/N) with integer n and N; the eight-element group of quaternions, Q; the 24- and 48-element subgroups of SU(2), denoted by T and O, which reduce to the rotation groups of the tetrahedron and the octahedron when their centers Z_2 are factored out. All of these groups can be considered subgroups of SU(2), and a common normalization was used for the action. The following types of Monte Carlo experiments are considered: simulations of a thermal cycle, where the temperature of the system is varied slightly every few Monte Carlo iterations and the internal energy is measured; mixed-phase runs, where several Monte Carlo iterations are done at a few temperatures near a phase transition starting with a lattice which is half ordered and half disordered; measurements of averages of Wilson factors for loops of different shape. 5 figures, 1 table. (RWR)

  15. Advances in Monte Carlo computer simulation

    NASA Astrophysics Data System (ADS)

    Swendsen, Robert H.

    2011-03-01

    Since the invention of the Metropolis method in 1953, Monte Carlo methods have been shown to provide an efficient, practical approach to the calculation of physical properties in a wide variety of systems. In this talk, I will discuss some of the advances in the MC simulation of thermodynamic systems, with an emphasis on optimization to obtain a maximum of useful information.

  16. Juan Carlos D'Olivo: A portrait

    NASA Astrophysics Data System (ADS)

    Aguilar-Arévalo, Alexis A.

    2013-06-01

    This report attempts to give a brief biographical sketch of the academic life of Juan Carlos D'Olivo, researcher and teacher at the Instituto de Ciencias Nucleares of UNAM, devoted to advancing the fields of High Energy Physics and Astroparticle Physics in Mexico and Latin America.

  17. Scalable Domain Decomposed Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  18. A comparison of Monte Carlo generators

    SciTech Connect

    Golan, Tomasz

    2015-05-15

    A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the π⁺ two-dimensional energy vs. cosine distribution.

  19. Structural Reliability and Monte Carlo Simulation.

    ERIC Educational Resources Information Center

    Laumakis, P. J.; Harlow, G.

    2002-01-01

    Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)
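
    In the same spirit (a generic load-versus-capacity member, not the boom structure analyzed in the article), a minimal Monte Carlo reliability estimate in Python; the distributions and their parameters are hypothetical:

      import random

      def failure_probability(n=1_000_000, seed=0):
          """Monte Carlo estimate of P(load > capacity) for a member whose
          load and capacity are both uncertain (normally distributed here)."""
          rng = random.Random(seed)
          failures = 0
          for _ in range(n):
              load = rng.gauss(100.0, 15.0)      # applied stress, toy units
              capacity = rng.gauss(150.0, 10.0)  # member strength, toy units
              failures += load > capacity
          return failures / n

      # Compare to the exact value Phi(-50/sqrt(15**2 + 10**2)) ~ 2.8e-3:
      print(failure_probability())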

  20. Monte Carlo Simulation of Counting Experiments.

    ERIC Educational Resources Information Center

    Ogden, Philip M.

    A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
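
    A minimal sketch of the construction described above: subdivide the counting interval so finely that no subinterval holds more than one count, giving a binomial distribution that approaches the Poisson law as the number of subintervals grows (all parameters hypothetical):

      import random
      from collections import Counter

      def simulate_counts(rate, t, subintervals, trials=20_000, seed=2):
          """Counting experiment: each of `subintervals` slices of [0, t]
          registers at most one count, with probability p = rate*t/subintervals.
          The resulting binomial count distribution tends to Poisson(rate*t)."""
          rng = random.Random(seed)
          p = rate * t / subintervals
          hist = Counter()
          for _ in range(trials):
              hist[sum(rng.random() < p for _ in range(subintervals))] += 1
          return hist

      hist = simulate_counts(rate=3.0, t=1.0, subintervals=200)
      total = sum(hist.values())
      for k in sorted(hist):
          print(k, hist[k] / total)   # close to the Poisson pmf with mean 3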

  1. Residual entropy of ice III from Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Kolafa, Jiří

    2016-03-01

    We calculated the residual entropy of ice III as a function of the occupation probabilities of hydrogen positions α and β, assuming equal energies of all configurations. To do this, a discrete ice model with a Bjerrum-defect energy penalty and harmonic terms to constrain the occupation probabilities was simulated by the Metropolis Monte Carlo method for a range of temperatures and sizes, followed by thermodynamic integration and extrapolation to N = ∞. As for other ices, the residual entropies are slightly higher than the mean-field (no-loop) approximation. However, the corrections caused by fluctuations in the energies of ice samples calculated using molecular models of water are too large for accurate determination of the chemical potential and phase equilibria.

  2. Acceleration of a Monte Carlo radiation transport code

    SciTech Connect

    Hochstedler, R.D.; Smith, L.M.

    1996-03-01

    Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.

  3. Monte carlo calculations of light scattering from clouds.

    PubMed

    Plass, G N; Kattawar, G W

    1968-03-01

    The scattering of visible light by clouds is calculated from an efficient Monte Carlo code which follows the multiply scattered path of the photon. The single scattering function is obtained from the Mie theory by integration over a particle size distribution appropriate for cumulus clouds at 0.7-μm wavelength. The photons are followed through a sufficient number of collisions and reflections from the lower surface (which may have any desired albedo) until they make a negligible contribution to the intensity. Various variance reduction techniques are used to improve the statistics. The cloud albedo and the mean optical path of the transmitted and reflected photons are given as a function of the solar zenith angle, optical thickness, and surface albedo. The numerous small-angle scatterings of the photon in the direction of the incident beam are followed accurately and produce a greater penetration into the cloud than is obtained with a more isotropic and less realistic phase function.
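
    A stripped-down illustration of the photon-following procedure (isotropic scattering in a plane-parallel slab; the article's Mie phase function, size distribution, and variance reduction techniques are omitted), using only the Python standard library:

      import math, random

      def slab_photons(tau_total, omega0=0.99, n=50_000, seed=3):
          """Follow photons through a plane-parallel cloud of optical depth
          tau_total with single-scattering albedo omega0; returns the
          reflected and transmitted fractions (surface albedo taken as 0)."""
          rng = random.Random(seed)
          reflected = transmitted = 0
          for _ in range(n):
              tau, mu = 0.0, 1.0                # enter at cloud top, heading down
              while True:
                  tau += mu * -math.log(1.0 - rng.random())   # exponential path
                  if tau < 0.0:
                      reflected += 1
                      break
                  if tau > tau_total:
                      transmitted += 1
                      break
                  if rng.random() > omega0:     # absorbed
                      break
                  mu = rng.uniform(-1.0, 1.0)   # isotropic scattering
          return reflected / n, transmitted / n

      print(slab_photons(tau_total=8.0))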

  4. Parameter estimation in deformable models using Markov chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Chalana, Vikram; Haynor, David R.; Sampson, Paul D.; Kim, Yongmin

    1997-04-01

    Deformable models have gained much popularity recently for many applications in medical imaging, such as image segmentation, image reconstruction, and image registration. Such models are very powerful because various kinds of information can be integrated together in an elegant statistical framework. Each such piece of information is typically associated with a user-defined parameter. The values of these parameters can have a significant effect on the results generated using these models. Despite the popularity of deformable models for various applications, not much attention has been paid to the estimation of these parameters. In this paper we describe systematic methods for the automatic estimation of these deformable model parameters. These methods are derived by posing the deformable models as a Bayesian inference problem. Our parameter estimation methods use Markov chain Monte Carlo methods for generating samples from highly complex probability distributions.
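
    A minimal random-walk Metropolis sketch of the sampling step (the deformable-model posterior itself is replaced here by a hypothetical one-parameter log density):

      import math, random

      def metropolis(log_post, x0, n=20_000, step=0.5, seed=6):
          """Random-walk Metropolis: draws from a distribution known only
          up to a constant through its log density log_post."""
          rng = random.Random(seed)
          x, lp = x0, log_post(x0)
          samples = []
          for _ in range(n):
              y = x + rng.gauss(0.0, step)
              lq = log_post(y)
              if rng.random() < math.exp(min(0.0, lq - lp)):   # accept/reject
                  x, lp = y, lq
              samples.append(x)
          return samples

      # Hypothetical posterior for one model parameter: Gaussian prior N(0, 4)
      # times a likelihood peaked at 2.0 with variance 0.25.
      log_post = lambda t: -0.5 * t * t / 4.0 - 0.5 * (t - 2.0) ** 2 / 0.25
      draws = metropolis(log_post, x0=0.0)
      print(sum(draws[5000:]) / len(draws[5000:]))   # posterior mean ~ 1.88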

  5. Parallelization of KENO-Va Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Ramón, Javier; Peña, Jorge

    1995-07-01

    KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo Method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code: one for shared memory machines and other for distributed memory systems using the message-passing interface PVM have been generated. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared memory version. A FDDI network of 6 HP9000/735 was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.

  6. Development of a Space Radiation Monte Carlo Computer Simulation

    NASA Technical Reports Server (NTRS)

    Pinsky, Lawrence S.

    1997-01-01

    The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.

  7. Enhanced Monte-Carlo-Linked Depletion Capabilities in MCNPX

    SciTech Connect

    Fensin, Michael L.; Hendricks, John S.; Anghaie, Samim

    2006-07-01

    As advanced reactor concepts challenge the accuracy of current modeling technologies, a higher-fidelity depletion calculation is necessary to model time-dependent core reactivity properly for accurate cycle length and safety margin determinations. The recent integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a completely self-contained Monte-Carlo-linked depletion capability. Two advances have been made in the latest MCNPX capability based on problems observed in pre-released versions: continuous energy collision density tracking and proper fission yield selection. Pre-released versions of the MCNPX depletion code calculated the reaction rates for (n,2n), (n,3n), (n,p), (n,α), and (n,γ) by matching the MCNPX steady-state 63-group flux with 63-group cross sections inherent in the CINDER90 library and then collapsing to one-group collision densities for the depletion calculation. This procedure led to inaccuracies due to the miscalculation of the reaction rates resulting from the collapsed multi-group approach. The current version of MCNPX eliminates this problem by using collapsed one-group collision densities generated from continuous energy reaction rates determined during the MCNPX steady-state calculation. MCNPX also now explicitly determines the proper fission yield to be used by the CINDER90 code for the depletion calculation. The CINDER90 code offers a thermal, fast, and high-energy fission yield for each fissile isotope contained in the CINDER90 data file. MCNPX determines which fission yield to use for a specified problem by calculating the integral fission rate for the defined energy boundaries (thermal, fast, and high energy), determining which energy range contains the majority of fissions, and then selecting the appropriate fission yield for the energy range containing the majority of fissions. The MCNPX depletion capability enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code.
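
    The yield-selection rule in the last step reduces to a majority test over the integral fission rates; a toy sketch of that rule only (energy-range names and rates are hypothetical, and this is not MCNPX's internal API):

      def select_fission_yield(fission_rates):
          """Pick the fission-yield set (thermal, fast, or high-energy) whose
          energy range contains the majority of fissions, mirroring the
          selection rule described above. fission_rates maps each range to
          its integral fission rate from the steady-state transport run."""
          return max(fission_rates, key=fission_rates.get)

      # Hypothetical integral fission rates (fissions/s) per energy range:
      rates = {"thermal": 6.2e14, "fast": 2.9e14, "high": 1.1e12}
      print(select_fission_yield(rates))   # -> "thermal"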

  8. Monte Carlo Study of Real Time Dynamics on the Lattice.

    PubMed

    Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo F; Vartak, Sohan; Warrington, Neill C

    2016-08-19

    Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from a highly oscillatory phase of the path integral. In this Letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and, in principle, applicable to quantum field theory albeit very slow. We discuss some possible improvements that should speed up the algorithm. PMID:27588844

  9. Interaction picture density matrix quantum Monte Carlo.

    PubMed

    Malone, Fionn D; Blunt, N S; Shepherd, James J; Lee, D K K; Spencer, J S; Foulkes, W M C

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible. PMID:26233116

  10. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.

    1980-05-01

    Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.

  11. An enhanced Monte Carlo outlier detection method.

    PubMed

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

    Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction for validation against Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
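
    A generic Monte Carlo cross-validation outlier scan in the same spirit (not the authors' enhanced two-stage procedure with determinate normal samples), assuming NumPy:

      import numpy as np

      def mc_outlier_scan(X, y, n_splits=500, train_frac=0.7, seed=0):
          """Repeatedly fit a least-squares model on random subsets and
          accumulate each sample's out-of-training prediction errors;
          samples with unusually large mean error are outlier candidates."""
          rng = np.random.default_rng(seed)
          n = len(y)
          err_sum, err_cnt = np.zeros(n), np.zeros(n)
          Xb = np.column_stack([np.ones(n), X])       # add intercept column
          for _ in range(n_splits):
              idx = rng.permutation(n)
              cut = int(train_frac * n)
              tr, te = idx[:cut], idx[cut:]
              beta, *_ = np.linalg.lstsq(Xb[tr], y[tr], rcond=None)
              err_sum[te] += np.abs(Xb[te] @ beta - y[te])
              err_cnt[te] += 1
          return err_sum / np.maximum(err_cnt, 1)

      # Toy data with one corrupted sample:
      rng = np.random.default_rng(1)
      X = rng.normal(size=(50, 3))
      y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
      y[7] += 5.0                                     # planted outlier
      print(int(np.argmax(mc_outlier_scan(X, y))))    # -> 7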

  12. Monte Carlo simulations on SIMD computer architectures

    SciTech Connect

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-03-01

    Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next-nearest, and long-range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.

  13. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.

    1980-01-01

    At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.

  14. Monte Carlo simulations of fluid vesicles.

    PubMed

    Sreeja, K K; Ipsen, John H; Sunil Kumar, P B

    2015-07-15

    Lipid vesicles are closed two-dimensional fluid surfaces that are studied extensively as model systems for understanding the physical properties of biological membranes. Here we review the recent developments in the Monte Carlo techniques for simulating fluid vesicles and discuss some of their applications. The technique, which treats the membrane as an elastic sheet, is most suitable for the study of large-scale conformations of membranes. The model can be used to study vesicles with fixed and varying topologies. Here we focus on the case of multi-component membranes with the local lipid and protein composition coupled to the membrane curvature, leading to a variety of shapes. The phase diagram is more intriguing in the case of fluid vesicles having an in-plane orientational order that induces anisotropic directional curvatures. Methods to explore the steady-state morphological structures due to active flux of materials have also been described in the context of Monte Carlo simulations. PMID:26087479

  15. Monte Carlo simulations of fluid vesicles

    NASA Astrophysics Data System (ADS)

    Sreeja, K. K.; Ipsen, John H.; Kumar, P. B. Sunil

    2015-07-01

    Lipid vesicles are closed two-dimensional fluid surfaces that are studied extensively as model systems for understanding the physical properties of biological membranes. Here we review the recent developments in the Monte Carlo techniques for simulating fluid vesicles and discuss some of their applications. The technique, which treats the membrane as an elastic sheet, is most suitable for the study of large-scale conformations of membranes. The model can be used to study vesicles with fixed and varying topologies. Here we focus on the case of multi-component membranes with the local lipid and protein composition coupled to the membrane curvature, leading to a variety of shapes. The phase diagram is more intriguing in the case of fluid vesicles having an in-plane orientational order that induces anisotropic directional curvatures. Methods to explore the steady-state morphological structures due to active flux of materials have also been described in the context of Monte Carlo simulations.

  16. Monte Carlo Methods in the Physical Sciences

    SciTech Connect

    Kalos, M H

    2007-06-06

    I will review the role that Monte Carlo methods play in the physical sciences. They are very widely used for a number of reasons: they permit the rapid and faithful transformation of a natural or model stochastic process into a computer code. They are powerful numerical methods for treating the many-dimensional problems that derive from important physical systems. Finally, many of the methods naturally permit the use of modern parallel computers in efficient ways. In the presentation, I will emphasize four aspects of the computations: whether or not the computation derives from a natural or model stochastic process; whether the system under study is highly idealized or realistic; whether the Monte Carlo methodology is straightforward or mathematically sophisticated; and finally, the scientific role of the computation.

  17. Monte Carlo modeling of exospheric bodies - Mercury

    NASA Technical Reports Server (NTRS)

    Smith, G. R.; Broadfoot, A. L.; Wallace, L.; Shemansky, D. E.

    1978-01-01

    In order to study the interaction with the surface, a Monte Carlo program is developed to determine the distribution with altitude as well as the global distribution of density at the surface in a single operation. The analysis presented shows that the appropriate source distribution should be Maxwell-Boltzmann flux if the particles in the distribution are to be treated as components of flux. Monte Carlo calculations with a Maxwell-Boltzmann flux source are compared with Mariner 10 UV spectrometer data. Results indicate that the presently operating models are not capable of fitting the observed Mercury exosphere. It is suggested that an atmosphere calculated with a barometric source distribution is suitable for more realistic future exospheric models.

  18. Monte Carlo Particle Transport: Algorithm and Performance Overview

    SciTech Connect

    Gentile, N; Procassini, R; Scott, H

    2005-06-02

    Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that arise in photon Monte Carlo simulations.

  19. Monte Carlo simulation of Alaska wolf survival

    NASA Astrophysics Data System (ADS)

    Feingold, S. J.

    1996-02-01

    Alaskan wolves live in a harsh climate and are hunted intensively. Penna's biological aging code, using Monte Carlo methods, has been adapted to simulate wolf survival. It was run on the case in which hunting causes the disruption of wolves' social structure. Social disruption was shown to increase the number of deaths occurring at a given level of hunting. For high levels of social disruption, the population did not survive.

  20. Monte Carlo simulation of Touschek effect.

    SciTech Connect

    Xiao, A.; Borland, M.; Accelerator Systems Division

    2010-07-30

    We present a Monte Carlo method implementation in the code elegant for simulating Touschek scattering effects in a linac beam. The local scattering rate and the distribution of scattered electrons can be obtained from the code either for a Gaussian-distributed beam or for a general beam whose distribution function is given. In addition, scattered electrons can be tracked through the beam line and the local beam-loss rate and beam halo information recorded.

  1. Quantum Monte Carlo with known sign structures

    NASA Astrophysics Data System (ADS)

    Nilsson, Johan

    We investigate the merits of different Hubbard-Stratonovich transformations (including fermionic ones) for the description of interacting fermion systems, focusing on the single-band Hubbard model as a model system. In particular, we revisit an old proposal of Batrouni and Forcrand (PRB 48, 589 (1993)) for determinant quantum Monte Carlo simulations, in which the signs of all configurations are known beforehand. We will discuss different ways that this knowledge can be used to make more accurate predictions and simulations.

  2. Applications of Maxent to quantum Monte Carlo

    SciTech Connect

    Silver, R.N.; Sivia, D.S.; Gubernatis, J.E. ); Jarrell, M. . Dept. of Physics)

    1990-01-01

    We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.

  3. Monte Carlo Generators for the LHC

    NASA Astrophysics Data System (ADS)

    Worek, M.

    2007-11-01

    The status of two Monte Carlo generators, HELAC-PHEGAS, a program for multi-jet processes and VBFNLO, a parton level program for vector boson fusion processes at NLO QCD, is briefly presented. The aim of these tools is the simulation of events within the Standard Model at current and future high energy experiments, in particular the LHC. Some results related to the production of multi-jet final states at the LHC are also shown.

  4. Monte Carlo small-sample perturbation calculations

    SciTech Connect

    Feldman, U.; Gelbard, E.; Blomquist, R.

    1983-01-01

    Two different Monte Carlo methods have been developed for benchmark computations of small-sample worths in simplified geometries. The first is basically a standard Monte Carlo perturbation method in which neutrons are steered towards the sample by roulette and splitting. One finds, however, that two variance reduction methods are required to make this sort of perturbation calculation feasible. First, neutrons that have passed through the sample must be exempted from roulette. Second, neutrons must be forced to undergo scattering collisions in the sample. Even when such methods are invoked, however, it is still necessary to exaggerate the volume fraction of the sample by drastically reducing the size of the core. The benchmark calculations are then used to test more approximate methods, and not directly to analyze experiments. In the second method the flux at the surface of the sample is assumed to be known. Neutrons entering the sample are drawn from this known flux and tracked by Monte Carlo. The effect of the sample on the fission rate is then inferred from the histories of these neutrons. The characteristics of both of these methods are explored empirically.
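
    The roulette-and-splitting bookkeeping used to steer histories can be sketched as a weight-window game that preserves the expected total weight (thresholds here are hypothetical; this is not the paper's exact steering scheme):

      import random

      rng = random.Random(5)

      def weight_window(weight, w_low=0.25, w_high=4.0, w_survive=1.0):
          """Russian roulette and splitting on a particle's statistical
          weight. Returns the list of surviving weights; the expected total
          weight is preserved, which keeps the game unbiased."""
          if weight < w_low:                    # roulette: kill or promote
              return [w_survive] if rng.random() < weight / w_survive else []
          if weight > w_high:                   # split into n lighter copies
              n = int(weight / w_high) + 1
              return [weight / n] * n
          return [weight]

      print(weight_window(0.05))   # usually [], occasionally [1.0]
      print(weight_window(10.0))   # three copies of weight 10/3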

  5. jTracker and Monte Carlo Comparison

    NASA Astrophysics Data System (ADS)

    Selensky, Lauren; SeaQuest/E906 Collaboration

    2015-10-01

    SeaQuest is designed to observe the characteristics and behavior of 'sea quarks' in a proton by reconstructing them from the subatomic particles produced in a collision. The 120 GeV beam from the main injector collides with a fixed target and then passes through a series of detectors which record information about the particles produced in the collision. However, this data becomes meaningful only after it has been processed, stored, analyzed, and interpreted. Several programs are involved in this process. jTracker (sqerp) reads wire or hodoscope hits and reconstructs the tracks of potential dimuon pairs from a run, and Geant4 Monte Carlo simulates dimuon production and background noise from the beam. During track reconstruction, an event must meet the criteria set by the tracker to be considered a viable dimuon pair; this ensures that relevant data is retained. As a check, a comparison between a new version of jTracker and Monte Carlo was made in order to see how accurately jTracker could reconstruct the events created by Monte Carlo. In this presentation, the results of this comparison and their potential effects on the programming will be shown. This work is supported by U.S. DOE MENP Grant DE-FG02-03ER41243.

  6. Carlos Castillo-Chavez: a century ahead.

    PubMed

    Schatz, James

    2013-01-01

    When the opportunity to contribute a short essay about Dr. Carlos Castillo-Chavez presented itself in the context of this wonderful birthday celebration, my immediate reaction was por supuesto que sí (of course)! Sixteen years ago, I travelled to Cornell University with my colleague at the National Security Agency (NSA) Barbara Deuink to meet Carlos and hear about his vision to expand the talent pool of mathematicians in our country. Our motivation was very simple. First of all, the Agency relies heavily on mathematicians to carry out its mission. If the U.S. mathematics community is not healthy, NSA is not healthy. Keeping our country safe requires a team of the sharpest minds in the nation to tackle amazing intellectual challenges on a daily basis. Second, the Agency cares deeply about diversity. Within the mathematical sciences, students with advanced degrees from the Chicano, Latino, Native American, and African-American communities are underrepresented. It was clear that addressing this issue would require visionary leadership and a long-term commitment. Carlos had the vision for a program that would provide promising undergraduates from minority communities with an opportunity to gain confidence and expertise through meaningful research experiences while sharing in the excitement of mathematical and scientific discovery. His commitment to the venture was unquestionable, and that commitment has not wavered since the inception of the Mathematics and Theoretical Biology Institute (MTBI) in 1996.

  7. A configuration space Monte Carlo algorithm for solving the nuclear pairing problem

    NASA Astrophysics Data System (ADS)

    Lingle, Mark

    Nuclear pairing correlations using Quantum Monte Carlo are studied in this dissertation. We start by defining the nuclear pairing problem and discussing several historical methods developed to solve this problem, paying special attention to the applicability of such methods. A numerical example discussing pairing correlations in several calcium isotopes using the BCS and Exact Pairing solutions are presented. The ground state energies, correlation energies, and occupation numbers are compared to determine the applicability of each approach to realistic cases. Next we discuss some generalities related to the theory of Markov Chains and Quantum Monte Carlo in regards to nuclear structure. Finally we present our configuration space Monte Carlo algorithm starting from a discussion of a path integral approach by the authors. Some general features of the Pairing Hamiltonian that boost the effectiveness of a configuration space Monte Carlo approach are mentioned. The full details of our method are presented and special attention is paid to convergence and error control. We present a series of examples illustrating the effectiveness of our approach. These include situations with non-constant pairing strengths, limits when pairing correlations are weak, the computation of excited states, and problems when the relevant configuration space is large. We conclude with a chapter examining some of the effects of continuum states in 24O.

  8. Unified Monte Carlo: Evaluation, Uncertainty Quantification and Propagation of the Prompt Fission Neutron Spectrum

    NASA Astrophysics Data System (ADS)

    Rising, Michael E.; Talou, Patrick; Prinja, Anil K.; White, Morgan C.

    2014-06-01

    The unified Monte Carlo (UMC) method is used for the quantification of uncertainties associated with the evaluation of the prompt fission neutron spectrum (PFNS) for the n(0.5 MeV)+239Pu fission reaction and compared to the Kalman filter. Ultimately, the UMC and Kalman filter approaches lead to very similar evaluated PFNS while UMC is also capable of capturing the nonlinearities present in the Los Alamos (LA) model used to calculate the PFNS. Next, the unified Monte Carlo + total Monte Carlo (UMC+TMC) method is implemented to propagate uncertainties from the prior LA model parameters through the Flattop critical assemblies. Due to the fact that cross-experiment correlations are neglected in the present evaluation work, the UMC+TMC method predicts uncertainties in the integral quantities smaller by an order of magnitude or more compared to direct sampling from the posterior LA model parameters. Last, the UMC method is proposed for use as an evaluation tool that can be used with the new prompt fission Monte Carlo Hauser-Feshbach codes that are currently under development.

  9. Monte Carlo simulation of high-field transport equations

    SciTech Connect

    Abdolsalami, F.

    1989-01-01

    The author has studied the importance of the intracollisional field effect in the quantum transport equation derived by Khan, Davies, and Wilkins (Phys. Rev. B 36, 2578 (1987)) via Monte Carlo simulations. This transport equation is identical to the integral form of the Boltzmann transport equation except that the scattering-in rates contain the auxiliary function of energy width √|α| instead of the sharp delta function of the semiclassical theory, where α = πℏ²(e/m*) E·q. Here, E is the electric field, q is the phonon wave vector, and m* is the effective mass. The transport equation studied corresponds to a single parabolic band of infinite width and is valid in the field-dominated limit, i.e., √|α| ≫ ℏ/τ_sc, where τ_sc⁻¹ is the electron scattering-out rate. In his simulation, he takes the single parabolic band to be the central valley of GaAs with transitions to higher valleys shut off. Electrons are assumed to scatter with polar optic and acoustic phonons, with the scattering parameters chosen to simulate GaAs. The loss of the intervalley scattering mechanism at high electric fields is compensated for by increasing each of the four scattering rates relative to the real values in GaAs by a factor γ. The transport equation studied contains the auxiliary function, which is not positive definite. Therefore, it cannot represent a probability of scattering in a Monte Carlo simulation. The question of whether or not the intracollisional field effect is important can be resolved by replacing the non-positive-definite auxiliary function by a test positive definite function of width √|α| and comparing the results of the Monte Carlo simulation of this quantum transport equation with those of the Boltzmann transport equation. If the results are identical, the intracollisional field effect is not important.

  10. Thermodynamic properties of van der Waals fluids from Monte Carlo simulations and perturbative Monte Carlo theory

    NASA Astrophysics Data System (ADS)

    Díez, A.; Largo, J.; Solana, J. R.

    2006-08-01

    Computer simulations have been performed for fluids with van der Waals potential, that is, hard spheres with attractive inverse power tails, to determine the equation of state and the excess energy. On the other hand, the first- and second-order perturbative contributions to the energy and the zero- and first-order perturbative contributions to the compressibility factor have been determined too from Monte Carlo simulations performed on the reference hard-sphere system. The aim was to test the reliability of this "exact" perturbation theory. It has been found that the results obtained from the Monte Carlo perturbation theory for these two thermodynamic properties agree well with the direct Monte Carlo simulations. Moreover, it has been found that results from the Barker-Henderson [J. Chem. Phys. 47, 2856 (1967)] perturbation theory are in good agreement with those from the exact perturbation theory.

  11. Quantum Monte Carlo Algorithms for Diagrammatic Vibrational Structure Calculations

    NASA Astrophysics Data System (ADS)

    Hermes, Matthew; Hirata, So

    2015-06-01

    Convergent hierarchies of theories for calculating many-body vibrational ground and excited-state wave functions, such as Møller-Plesset perturbation theory or coupled cluster theory, tend to rely on matrix-algebraic manipulations of large, high-dimensional arrays of anharmonic force constants, tasks which require large amounts of computer storage space and which are very difficult to implement in a parallel-scalable fashion. On the other hand, existing quantum Monte Carlo (QMC) methods for vibrational wave functions tend to lack robust techniques for obtaining excited-state energies, especially for large systems. By exploiting analytical identities for matrix elements of position operators in a harmonic oscillator basis, we have developed stochastic implementations of the size-extensive vibrational self-consistent field (MC-XVSCF) and size-extensive vibrational Møller-Plesset second-order perturbation (MC-XVMP2) theories which do not require storing the potential energy surface (PES). The programmable equations of MC-XVSCF and MC-XVMP2 take the form of a small number of high-dimensional integrals evaluated using Metropolis Monte Carlo techniques. The associated integrands require independent evaluations of only the value, not the derivatives, of the PES at many points, a task which is trivial to parallelize. However, unlike existing vibrational QMC methods, MC-XVSCF and MC-XVMP2 can calculate anharmonic frequencies directly, rather than as a small difference between two noisy total energies, and do not require user-selected coordinates or nodal surfaces. MC-XVSCF and MC-XVMP2 can also directly sample the PES in a given approximation without analytical or grid-based approximations, enabling us to quantify the errors induced by such approximations.

  12. Monte Carlo simulations for generic granite repository studies

    SciTech Connect

    Chu, Shaoping; Lee, Joon H; Wang, Yifeng

    2010-12-08

    In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of a near- and a far-field submodel for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a subset of radionuclides that are potentially important to repository performance were identified and evaluated for a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of a strategy for long-term disposal of high-level radioactive waste in a granite repository.

  13. Nesting Monte Carlo EM for high-dimensional item factor analysis

    PubMed Central

    An, Xinming; Bentler, Peter M.

    2012-01-01

    The item factor analysis model for investigating multidimensional latent spaces has proved to be useful. Parameter estimation in this model requires computationally demanding high-dimensional integrations. While several approaches to approximate such integrations have been proposed, they suffer various computational difficulties. This paper proposes a Nesting Monte Carlo Expectation-Maximization (MCEM) algorithm for item factor analysis with binary data. Simulation studies and a real data example suggest that the Nesting MCEM approach can significantly improve computational efficiency while also enjoying the good properties of stable convergence and easy implementation. PMID:23329857

  14. Status of Monte-Carlo Event Generators

    SciTech Connect

    Hoeche, Stefan; /SLAC

    2011-08-11

    Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.

  15. Quantum Monte Carlo for vibrating molecules

    SciTech Connect

    Brown, W.R. |

    1996-08-01

    Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H₂O and C₃ vibrational states, using 7 PESs, 3 trial wavefunction forms, and two methods of non-linear basis function parameter optimization, on both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H₂O and C₃; for C₃, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states. In order to stabilize the statistical error estimates for C₃, the Monte Carlo data were collected into blocks. Accurate vibrational state energies were computed using both the serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C₃ PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.

  16. Discovering correlated fermions using quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Wagner, Lucas K.; Ceperley, David M.

    2016-09-01

    It has become increasingly feasible to use quantum Monte Carlo (QMC) methods to study correlated fermion systems for realistic Hamiltonians. We give a summary of these techniques targeted at researchers in the field of correlated electrons, focusing on the fundamentals, capabilities, and current status of this technique. The QMC methods often offer the highest accuracy solutions available for systems in the continuum, and, since they address the many-body problem directly, the simulations can be analyzed to obtain insight into the nature of correlated quantum behavior.

  17. Monte Carlo procedure for protein design

    NASA Astrophysics Data System (ADS)

    Irbäck, Anders; Peterson, Carsten; Potthast, Frank; Sandelin, Erik

    1998-11-01

    A method for sequence optimization in protein models is presented. The approach, which has inherited its basic philosophy from recent work by Deutsch and Kurosky [Phys. Rev. Lett. 76, 323 (1996)] by maximizing conditional probabilities rather than minimizing energy functions, is based upon a different and very efficient multisequence Monte Carlo scheme. By construction, the method ensures that the designed sequences represent good folders thermodynamically. A bootstrap procedure for the sequence space search is devised, making very large chains feasible. The algorithm is successfully explored on the two-dimensional HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] with chain lengths N=16, 18, and 32.

  18. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the second is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while large differences appear in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  19. Monte Carlo radiation transport & parallelism

    SciTech Connect

    Cox, L. J.; Post, S. E.

    2002-01-01

    This talk summarizes the main aspects of the LANL ASCI Eolus project and its major unclassified code project, MCNP. The MCNP code provides state-of-the-art Monte Carlo radiation transport to approximately 3000 users world-wide. Almost all hardware platforms are supported because we strictly adhere to the FORTRAN-90/95 standard. For parallel processing, MCNP uses a mixture of OpenMP combined with either MPI or PVM (shared and distributed memory). This talk summarizes our experiences on various platforms using MPI with and without OpenMP. These platforms include PC-Windows, Intel-LINUX, BlueMountain, Frost, ASCI-Q and others.

  20. Monte Carlo algorithm for free energy calculation.

    PubMed

    Bi, Sheng; Tong, Ning-Hua

    2015-07-01

    We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.
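
    As a generic illustration of a temperature-scan approach (not the authors' configuration-space algorithm), the sketch below produces F(T) for a small square-lattice Ising model by Metropolis sampling of the mean energy along a scan in β and thermodynamic integration of d(βF)/dβ = ⟨E⟩, starting from the exactly known β→0 limit βF = -N ln 2. Lattice size and sweep counts are illustrative.

```python
import math
import random
import numpy as np

def sweep(s, beta, rng):
    """One Metropolis sweep of an L x L periodic Ising lattice (J = 1)."""
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        h = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2 * s[i, j] * h
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i, j] = -s[i, j]

def mean_energy(s, beta, rng, n_eq=200, n_meas=400):
    """Equilibrate, then measure <E> by averaging over sweeps."""
    for _ in range(n_eq):
        sweep(s, beta, rng)
    e = 0.0
    for _ in range(n_meas):
        sweep(s, beta, rng)
        e += -(np.sum(s * np.roll(s, 1, 0)) + np.sum(s * np.roll(s, 1, 1)))
    return e / n_meas

def free_energy_scan(L=8, beta_max=0.6, n_steps=30, seed=2):
    rng = random.Random(seed)
    s = np.ones((L, L), dtype=int)
    N = L * L
    beta_f = -N * math.log(2)          # beta*F at beta -> 0: Z = 2^N free spins
    prev_beta, prev_e = 0.0, 0.0       # <E> = 0 at beta = 0
    for b in np.linspace(1e-3, beta_max, n_steps):
        e = mean_energy(s, b, rng)
        beta_f += 0.5 * (prev_e + e) * (b - prev_beta)  # trapezoid: d(beta*F) = <E> d(beta)
        prev_beta, prev_e = b, e
        print(f"beta = {b:.3f}   F per spin = {beta_f / (b * N):.4f}")

free_energy_scan()
```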

  1. Exascale Monte Carlo R&D

    SciTech Connect

    Marcus, Ryan C.

    2012-07-24

    This presentation gives an overview of (1) exascale computing - different technologies and how to get there; (2) the high-performance proof-of-concept MCMini - features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library) - its purpose and features. Despite driver issues, OpenCL seems like a good, hardware-agnostic tool. MCMini demonstrates the possibility of GPGPU-based Monte Carlo methods - it shows great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

  2. Quantum Monte Carlo calculations for light nuclei.

    SciTech Connect

    Wiringa, R. B.

    1998-10-23

    Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A {le} 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 40 different (J{pi}, T) states, plus isobaric analogs, are obtained and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.

  3. Monte Carlo simulation for the transport beamline

    SciTech Connect

    Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.

    2013-07-26

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning system in order to optimize the number of shots and the dose delivery.

  4. Kinetic Monte Carlo simulations of proton conductivity

    NASA Astrophysics Data System (ADS)

    Masłowski, T.; Drzewiński, A.; Ulner, J.; Wojtkiewicz, J.; Zdanowska-Frączek, M.; Nordlund, K.; Kuronen, A.

    2014-07-01

    The kinetic Monte Carlo method is used to model the dynamic properties of proton diffusion in anhydrous proton conductors. The results are discussed with reference to a two-step process called the Grotthuss mechanism. There is a widespread belief that this mechanism is responsible for fast proton mobility. We show in detail that the relative frequency of the reorientation and diffusion processes is crucial for the conductivity. Moreover, the dependence of the current on proton concentration is analyzed. In order to test our microscopic model, proton transport in polymer electrolyte membranes based on benzimidazole C7H6N2 molecules is studied.
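
    The two-step character of the mechanism (proton transfer plus molecular reorientation) lends itself to a rejection-free, Gillespie-style kinetic Monte Carlo loop. The sketch below is a hypothetical, heavily simplified 1-D version: all rates, the lattice, and the one-directional hopping are illustrative assumptions, not the authors' microscopic model.

```python
import random

def kmc_grotthuss(n_sites=50, n_protons=5, rate_hop=1.0, rate_flip=0.5,
                  t_max=200.0, seed=3):
    """Toy 1-D kinetic Monte Carlo for a two-step Grotthuss-like process:
    a proton hops (to the right only, for simplicity) onto an empty,
    correctly oriented site; the hop destroys the orientation of the origin
    site, which must re-orient at rate_flip before it can accept a proton."""
    rng = random.Random(seed)
    occupied = set(rng.sample(range(n_sites), n_protons))
    oriented = [True] * n_sites
    t, n_hops = 0.0, 0
    while t < t_max:
        hops = [p for p in occupied
                if (p + 1) % n_sites not in occupied and oriented[(p + 1) % n_sites]]
        flips = [i for i in range(n_sites) if not oriented[i]]
        rates = [rate_hop] * len(hops) + [rate_flip] * len(flips)
        total = sum(rates)
        if total == 0.0:
            break
        t += rng.expovariate(total)        # Gillespie waiting time
        x, k = rng.random() * total, 0
        while x > rates[k]:                # choose an event proportional to its rate
            x -= rates[k]
            k += 1
        if k < len(hops):                  # proton transfer step
            p = hops[k]
            occupied.remove(p)
            occupied.add((p + 1) % n_sites)
            oriented[p] = False            # origin site must now re-orient
            n_hops += 1
        else:                              # reorientation step
            oriented[flips[k - len(hops)]] = True
    return n_hops / t if t > 0 else 0.0    # hops per unit time ("current")

print("hop rate:", kmc_grotthuss())
```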

  5. Discovering correlated fermions using quantum Monte Carlo.

    PubMed

    Wagner, Lucas K; Ceperley, David M

    2016-09-01

    It has become increasingly feasible to use quantum Monte Carlo (QMC) methods to study correlated fermion systems for realistic Hamiltonians. We give a summary of these techniques targeted at researchers in the field of correlated electrons, focusing on the fundamentals, capabilities, and current status of this technique. The QMC methods often offer the highest accuracy solutions available for systems in the continuum, and, since they address the many-body problem directly, the simulations can be analyzed to obtain insight into the nature of correlated quantum behavior.

  6. “Full Model” Nuclear Data and Covariance Evaluation Process Using TALYS, Total Monte Carlo and Backward-forward Monte Carlo

    SciTech Connect

    Bauge, E.

    2015-01-15

    The “Full model” evaluation process, used at CEA DAM DIF to evaluate nuclear data in the continuum region, makes extended use of nuclear models implemented in the TALYS code to account for experimental data (both differential and integral) by varying the parameters of these models until a satisfactory description of the experimental data is reached. For the evaluation of the covariance data associated with the evaluated data, the Backward-forward Monte Carlo (BFMC) method was devised in such a way that it mirrors the “Full model” evaluation method. When coupled with the Total Monte Carlo (TMC) method via the T6 system developed by NRG Petten, the BFMC method makes it possible to use integral experiments to constrain the distribution of model parameters, and hence the distribution of derived observables and their covariance matrix. Together, TALYS, TMC, BFMC, and T6 constitute a powerful integrated tool for nuclear data evaluation that yields evaluated nuclear data and the associated covariance matrix all at once, making good use of all the available experimental information to drive the distributions of the model parameters and the derived observables.

  7. Quantum Monte Carlo : not just for energy levels.

    SciTech Connect

    Nollett, K. M.; Physics

    2007-01-01

    Quantum Monte Carlo and realistic interactions can provide well-motivated vertices and overlaps for DWBA analyses of reactions. Given an interaction in vacuum, there are several computational approaches to nuclear systems, as you have been hearing: the no-core shell model with Lee-Suzuki or Bloch-Horowitz treatment of the Hamiltonian; coupled clusters with a G-matrix interaction; density functional theory, granted an energy functional derived from the interaction; and quantum Monte Carlo, comprising variational Monte Carlo and Green's function Monte Carlo. The last two work directly with a bare interaction and bare operators and describe the wave function without expanding in basis functions, so they have rather different sets of advantages and disadvantages from the others. Variational Monte Carlo (VMC) is built on a sophisticated Ansatz for the wave function: shell-model-like structure modified by operator correlations. Green's function Monte Carlo (GFMC) uses an operator method to project the true ground state out of a reasonable guess wave function.

  8. State-of-the-art Monte Carlo 1988

    SciTech Connect

    Soran, P.D.

    1988-06-28

    Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.

  9. Ab initio quantum Monte Carlo simulations of the uniform electron gas without fixed nodes: The unpolarized case

    NASA Astrophysics Data System (ADS)

    Dornheim, T.; Groth, S.; Schoof, T.; Hann, C.; Bonitz, M.

    2016-05-01

    In a recent publication [S. Groth et al., Phys. Rev. B 93, 085102 (2016), 10.1103/PhysRevB.93.085102], we have shown that the combination of two complementary quantum Monte Carlo approaches, namely configuration path integral Monte Carlo [T. Schoof et al., Phys. Rev. Lett. 115, 130402 (2015), 10.1103/PhysRevLett.115.130402] and permutation blocking path integral Monte Carlo [T. Dornheim et al., New J. Phys. 17, 073017 (2015), 10.1088/1367-2630/17/7/073017], allows for the accurate computation of thermodynamic properties of the spin-polarized uniform electron gas over a wide range of temperatures and densities without the fixed-node approximation. In the present work, we extend this concept to the unpolarized case, which requires nontrivial enhancements that we describe in detail. We compare our simulation results with recent restricted path integral Monte Carlo data [E. W. Brown et al., Phys. Rev. Lett. 110, 146405 (2013), 10.1103/PhysRevLett.110.146405] for different energy contributions and pair distribution functions and find, for the exchange correlation energy, overall better agreement than for the spin-polarized case, while the separate kinetic and potential contributions substantially deviate.

  10. 3. Photographic copy of map. San Carlos Project, Arizona. Irrigation ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Photographic copy of map. San Carlos Project, Arizona. Irrigation System. Department of the Interior. United States Indian Service. No date. Circa 1939. (Source: Henderson, Paul. U.S. Indian Irrigation Service. Supplemental Storage Reservoir, Gila River. November 10, 1939, RG 115, San Carlos Project, National Archives, Rocky Mountain Region, Denver, CO.) - San Carlos Irrigation Project, Lands North & South of Gila River, Coolidge, Pinal County, AZ

  11. MILAGRO IMPLICIT MONTE CARLO: NEW CAPABILITIES AND RESULTS

    SciTech Connect

    T. URBATSCH; T. EVANS

    2000-12-01

    Milagro is a stand-alone, radiation-only code that performs nonlinear radiative transfer calculations using the Fleck and Cummings method of Implicit Monte Carlo (IMC). Milagro is an object-oriented C++ code that utilizes classes in our group's (CCS-4) radiation transport library. Milagro and its underlying classes have been significantly upgraded since 1998, when results from Milagro were first presented. Most notably, the object-oriented design has been revised to allow for optimal stand-alone parallel efficiency and rapid integration of new classes. For example, the better design, coupled with stringent component testing, allowed for immediate integration of the full domain decomposition parallel scheme. (It is a simple philosophy: spend time on the design, and debug early and once.) Milagro's classes are templated on mesh type. Currently, it runs on an orthogonal, structured, not-necessarily-uniform Cartesian mesh of up to three dimensions, an RZ-Wedge mesh, and soon a tetrahedral mesh. Milagro considers one-frequency, or ''grey,'' radiation with isotropic scattering, user-defined analytic opacities and equation-of-state, and various source types: surface, material, and radiation. Tallies produced by Milagro include energy and momentum deposition. In parallel, Milagro can run on a mesh that is fully replicated on all processors or on a mesh that is fully decomposed in the spatial domain. Milagro is reproducible, regardless of the number of processors or parallel topology, and it now exactly conserves energy both globally and locally. Milagro has the capability for EnSight graphics and restarting. Finally, Milagro has been well verified with its use of Design-by-Contract{trademark}, component tests, and regression tests, and with its agreement with the results of analytic test problems. By successfully running analytic and benchmark problems, Milagro serves to integrally verify all of its underlying classes, thus paving the way for other service packages based on these

  12. Parallel and Portable Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.

    1997-08-01

    We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.

  13. Four decades of implicit Monte Carlo

    DOE PAGESBeta

    Wollaber, Allan B.

    2016-04-25

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  14. Quantum Monte Carlo for atoms and molecules

    SciTech Connect

    Barnett, R.N.

    1989-11-01

    The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H{sub 2}, LiH, Li{sub 2}, and H{sub 2}O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li{sub 2}, and H{sub 2}O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time, and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy) have been obtained with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.

  15. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    SciTech Connect

    WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.

    2007-01-10

    MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

  16. Experimental Monte Carlo Quantum Process Certification

    NASA Astrophysics Data System (ADS)

    Steffen, Lars; Fedorov, Arkady; Baur, Matthias; Palmer da Silva, Marcus; Wallraff, Andreas

    2012-02-01

    Experimental implementations of quantum information processing have now reached a state at which quantum process tomography starts to become impractical, since the number of experimental settings as well as the computational cost of the post-processing required to extract the process matrix from the measurements scales exponentially with the number of qubits in the system. In order to determine the fidelity of an implemented process relative to the ideal one, a more practical approach called Monte Carlo quantum process certification was proposed in Ref. [1]. Here we present an experimental implementation of this scheme in a circuit quantum electrodynamics setup. Our system is realized with three superconducting transmon qubits coupled to a coplanar microwave resonator which is used for the joint readout of the qubit states. We demonstrate an implementation of Monte Carlo quantum process certification and determine the fidelity of different two- and three-qubit gates such as CPHASE, CNOT, 2CPHASE and Toffoli gates. The obtained results are compared with the values obtained from conventional process tomography, and the errors of the obtained fidelities are determined. [1] M. P. da Silva, O. Landon-Cardinal and D. Poulin, arXiv:1104.3835 (2011)

  17. Quantum Monte Carlo methods for nuclear physics

    DOE PAGESBeta

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-09

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  18. Quantum Monte Carlo methods for nuclear physics

    DOE PAGESBeta

    Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.

    2014-10-19

    Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  19. Discrete range clustering using Monte Carlo methods

    NASA Technical Reports Server (NTRS)

    Chatterji, G. B.; Sridhar, B.

    1993-01-01

    For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
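
    Of the approaches compared above, a basic Monte Carlo search and simulated annealing can share the same move set and differ mainly in the temperature schedule. The sketch below is a hypothetical minimal version: scalar range points are assigned to k clusters by Metropolis moves on the within-cluster sum of squares with a geometric cooling schedule. The energy function, move set, and schedule are illustrative assumptions, not the paper's exact algorithms.

```python
import math
import random

def anneal_clusters(points, k=3, steps=30000, t0=1.0, t1=1e-3, seed=4):
    """Cluster scalar range measurements into k groups by simulated annealing
    on the within-cluster sum of squares (the 'energy')."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in points]

    def cost(lab):
        total = 0.0
        for c in range(k):
            members = [p for p, l in zip(points, lab) if l == c]
            if members:
                mu = sum(members) / len(members)
                total += sum((p - mu) ** 2 for p in members)
        return total

    e = cost(labels)
    for step in range(steps):
        t = t0 * (t1 / t0) ** (step / steps)   # cooling; a fixed t gives plain Metropolis
        i = rng.randrange(len(points))
        old = labels[i]
        labels[i] = rng.randrange(k)           # propose reassigning one point
        e_new = cost(labels)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new
        else:
            labels[i] = old                    # reject
    return labels, e

# Three noisy "objects" at ranges of roughly 10, 25, and 40 m.
gen = random.Random(0)
pts = [gen.gauss(m, 1.0) for m in (10.0, 25.0, 40.0) for _ in range(20)]
labels, wcss = anneal_clusters(pts)
print("final within-cluster sum of squares:", round(wcss, 2))
```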

  20. Monte Carlo simulations within avalanche rescue

    NASA Astrophysics Data System (ADS)

    Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg

    2016-04-01

    Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. Here we present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
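
    As a flavor of how such a rescue-parameter Monte Carlo might look, the sketch below draws burial depths from an assumed lognormal distribution and trades detection coverage against probing effort. Every number here (the distribution, its parameters, and the linear time-per-probe model) is an illustrative assumption, not data from the study.

```python
import math
import random

def probing_tradeoff(n=200_000, median_depth=1.0, sigma=0.6, seed=5):
    """Monte Carlo sketch: trade detection coverage against probing effort.
    Burial depths follow an assumed lognormal distribution, not field data."""
    rng = random.Random(seed)
    mu = math.log(median_depth)                        # lognormal median = exp(mu)
    depths = [rng.lognormvariate(mu, sigma) for _ in range(n)]
    for d in (1.0, 1.5, 2.0, 2.5, 3.0):
        coverage = sum(x <= d for x in depths) / n     # fraction of victims reachable
        time_per_probe = 2.0 + 3.0 * d                 # assumed seconds per probe stroke
        print(f"probe depth {d:.1f} m: coverage {coverage:.2f}, "
              f"victims per unit effort {coverage / time_per_probe:.3f}")

probing_tradeoff()
```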

  1. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-09

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  2. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: • domain decomposition of constructive solid geometry, which enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node; • load balancing, which keeps the workload per processor as even as possible so the calculation runs efficiently; • global particle find, which globally resolves the locations of particles on the wrong processor to the correct processor based on particle coordinate and background domain; • visualization of constructive solid geometry, particle sourcing, detection of completed particle streaming communication, and spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.

  3. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect

    Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.

    2014-10-01

    We present a highly efficient multilevel Monte Carlo numerical method, new to plasma physics, for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε{sup −2}) or O(ε{sup −2}(lnε){sup 2}), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε{sup −3}) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε=10{sup −5}. We discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
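
    The core of the multilevel idea is the telescoping sum E[P_L] = E[P_0] + Σ_l E[P_l − P_(l−1)], with coarse and fine paths coupled through shared Brownian increments so the correction terms have small variance and the fine levels need few samples. The sketch below shows this estimator in its textbook setting, an Euler–Maruyama discretization of scalar geometric Brownian motion with a call payoff, rather than the Langevin Coulomb-collision equations of the paper; sample counts per level are illustrative.

```python
import math
import random

RNG = random.Random(6)

def mlmc_level(level, n_samples, T=1.0, s0=1.0, r=0.05, vol=0.2, m=2):
    """Mean of P_l - P_(l-1) for one level: Euler-Maruyama on dS = r S dt + vol S dW,
    with the coarse path driven by the summed Brownian increments of the fine path."""
    n_fine = m ** level
    dt = T / n_fine
    acc = 0.0
    for _ in range(n_samples):
        s_fine = s_coarse = s0
        dw_sum = 0.0
        for step in range(n_fine):
            dw = RNG.gauss(0.0, math.sqrt(dt))
            s_fine += r * s_fine * dt + vol * s_fine * dw
            dw_sum += dw
            if step % m == m - 1 and level > 0:   # one coarse step per m fine steps
                s_coarse += r * s_coarse * (m * dt) + vol * s_coarse * dw_sum
                dw_sum = 0.0
        p_fine = max(s_fine - 1.0, 0.0)           # call payoff, strike 1
        p_coarse = max(s_coarse - 1.0, 0.0) if level > 0 else 0.0
        acc += p_fine - p_coarse
    return acc / n_samples

# Telescoping sum over levels; fewer samples are needed on the expensive
# fine levels because the coupled corrections have small variance.
price = sum(mlmc_level(lvl, n) for lvl, n in enumerate((40000, 10000, 2500, 625)))
print("MLMC call price estimate:", round(price, 4))
```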

  4. Multilevel Monte Carlo simulation of Coulomb collisions

    DOE PAGESBeta

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.

    2014-05-29

    We present a highly efficient multilevel Monte Carlo numerical method, new to plasma physics, for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε{sup −2}) or O(ε{sup −2}(lnε){sup 2}), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε{sup −3}) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε=10{sup −5}. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  5. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.

    2014-05-29

    We present a highly efficient multilevel Monte Carlo numerical method, new to plasma physics, for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε{sup −2}) or O(ε{sup −2}(lnε){sup 2}), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε{sup −3}) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε=10{sup −5}. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  6. Monte Carlo methods in lattice gauge theories

    SciTech Connect

    Otto, S.W.

    1983-01-01

    The mass of the O/sup +/ glueball for SU(2) gauge theory in 4 dimensions is calculated. This computation was done on a prototype parallel processor, and the implementation of gauge theories on this system is described in detail. Using an action of the purely Wilson form (trace of the plaquette in the fundamental representation), results with high statistics are obtained. These results are not consistent with scaling according to the continuum renormalization group. Using actions containing higher representations of the group, a search is made for one which is closer to the continuum limit. The choice is based upon the phase structure of these extended theories and also upon the Migdal-Kadanoff approximation to the renormalization group on the lattice. The mass of the O/sup +/ glueball for this improved action is obtained, and the mass divided by the square root of the string tension is a constant as the lattice spacing is varied. The other topic studied is the inclusion of dynamical fermions into Monte Carlo calculations via the pseudo fermion technique. Monte Carlo results obtained with this method are compared with those from an exact algorithm based on Gauss-Seidel inversion. The methods were first applied to the Schwinger model and SU(3) theory.

  7. Monte Carlo techniques for analyzing deep-penetration problems

    SciTech Connect

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1986-02-01

    Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications.

  8. Monte Carlo modeling of spatial coherence: free-space diffraction.

    PubMed

    Fischer, David G; Prahl, Scott A; Duncan, Donald D

    2008-10-01

    We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions.
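
    The Gaussian-copula synthesis step can be illustrated compactly: draw correlated Gaussians with a prescribed spatial correlation matrix, map them to uniform marginals through the normal CDF, and build random field realizations from them. The kernel, grid, and unit-amplitude phase-screen construction below are illustrative assumptions, not the paper's exact source model.

```python
import numpy as np
from scipy.stats import norm

def copula_phase_screens(n_pts=64, n_real=2000, ell=0.1, seed=7):
    """Random unit-amplitude field realizations whose point-to-point dependence
    is imposed through a Gaussian copula with a Gaussian correlation kernel."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_pts)
    corr = np.exp(-((x[:, None] - x[None, :]) / ell) ** 2)   # target correlation matrix
    chol = np.linalg.cholesky(corr + 1e-6 * np.eye(n_pts))   # jitter for numerical stability
    z = rng.standard_normal((n_real, n_pts)) @ chol.T        # correlated Gaussian samples
    u = norm.cdf(z)                # Gaussian copula: uniform marginals, kept dependence
    return np.exp(2j * np.pi * u)  # realizations of a random phase screen

fields = copula_phase_screens()
# Sample mutual coherence between the first point and all others,
# estimated as an ensemble average over realizations:
mu = (fields * fields[:, :1].conj()).mean(axis=0)
print(np.round(np.abs(mu[:6]), 3))
```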

  9. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    Lubos Mitas

    2011-01-26

    The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; expanding the impact of QMC methods and approaches; and explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments build upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculation of pfaffians, and introduction of backflow coordinates, together with the overall organization of the code and random walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high quality object-oriented C++ and includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the period of the grant duration, and it has resulted in 13

  10. Multidimensional integration in a heterogeneous network environment

    NASA Astrophysics Data System (ADS)

    Veseli, Siniša

    1998-01-01

    We consider several issues related to multidimensional integration using a network of heterogeneous computers. Based on these considerations, we develop a new general-purpose scheme which can significantly reduce the time needed for evaluation of integrals with CPU-intensive integrands. This scheme is a parallel version of the well-known adaptive Monte Carlo method (the VEGAS algorithm), and is incorporated into a new integration package which uses the standard set of message-passing routines in the PVM software system.
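
    Plain Monte Carlo integration parallelizes naturally because partial sums from independently seeded workers can simply be merged. The sketch below shows this sample-splitting idea with Python's multiprocessing on a toy 4-D integrand; it omits VEGAS's adaptive grid refinement, and the integrand and worker counts are illustrative.

```python
import math
import multiprocessing as mp
import random

def integrand(x):
    """CPU-intensive stand-in: an oscillatory integrand on the unit 4-cube."""
    return math.cos(10.0 * sum(x)) ** 2

def partial_sums(args):
    seed, n = args
    rng = random.Random(seed)            # independent stream per worker
    s = s2 = 0.0
    for _ in range(n):
        f = integrand([rng.random() for _ in range(4)])
        s += f
        s2 += f * f
    return s, s2, n

def parallel_mc(n_total=400_000, n_workers=4):
    chunks = [(seed, n_total // n_workers) for seed in range(n_workers)]
    with mp.Pool(n_workers) as pool:
        parts = pool.map(partial_sums, chunks)   # scatter work, gather partial sums
    s = sum(p[0] for p in parts)
    s2 = sum(p[1] for p in parts)
    n = sum(p[2] for p in parts)
    mean = s / n
    err = math.sqrt(max(s2 / n - mean * mean, 0.0) / n)   # standard MC error estimate
    return mean, err

if __name__ == "__main__":
    print(parallel_mc())
```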

  11. Raga: Monte Carlo simulations of gravitational dynamics of non-spherical stellar systems

    NASA Astrophysics Data System (ADS)

    Vasiliev, Eugene

    2014-11-01

    Raga (Relaxation in Any Geometry) is a Monte Carlo simulation method for gravitational dynamics of non-spherical stellar systems. It is based on the SMILE software (ascl:1308.001) for orbit analysis. It can simulate stellar systems with a much smaller number of particles N than the number of stars in the actual system, represent an arbitrary non-spherical potential with a basis-set or spline spherical-harmonic expansion with the coefficients of expansion computed from particle trajectories, and compute particle trajectories independently and in parallel using a high-accuracy adaptive-timestep integrator. Raga can also model two-body relaxation by local (position-dependent) velocity diffusion coefficients (as in Spitzer's Monte Carlo formulation) and adjust the magnitude of relaxation to the actual number of stars in the target system, and model the effect of a central massive black hole.

  12. Electron density of states of Fe-based superconductors: Quantum trajectory Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Kashurnikov, V. A.; Krasavin, A. V.; Zhumagulov, Ya. V.

    2016-03-01

    The spectral and total electron densities of states in two-dimensional FeAs clusters, which simulate iron-based superconductors, have been calculated using the generalized quantum Monte Carlo algorithm within the full two-orbital model. Spectra have been reconstructed by solving the integral equation relating the Matsubara Green's function and spectral density by the method combining the gradient descent and Monte Carlo algorithms. The calculations have been performed for clusters with dimensions up to 10 × 10 FeAs cells. The profiles of the Fermi surface for the entire Brillouin zone have been presented in the quasiparticle approximation. Data for the total density of states near the Fermi level have been obtained. The effect of the interaction parameter, size of the cluster, and temperature on the spectrum of excitations has been studied.

  13. A step beyond the Monte Carlo method in economics: Application of multivariate normal distribution

    NASA Astrophysics Data System (ADS)

    Kabaivanov, S.; Malechkova, A.; Marchev, A.; Milev, M.; Markovska, V.; Nikolova, K.

    2015-11-01

    In this paper we discuss the numerical algorithm of Milev-Tagliani [25] used for the pricing of discrete double barrier options. The problem can be reduced to the accurate valuation of an n-dimensional path integral with the probability density function of a multivariate normal distribution. The efficient solution of this problem with the Milev-Tagliani algorithm is a step beyond the classical application of Monte Carlo for option pricing. We explore continuous and discrete monitoring of the asset path, compare the error of frequently applied quantitative methods such as the Monte Carlo method, and finally analyze the accuracy of the Milev-Tagliani algorithm by presenting the profound research and important results of Hong, S. Lee and T. Li [16].
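
    For comparison with such path-integral valuations, a plain Monte Carlo benchmark for a discretely monitored double knock-out call is short to write: simulate exact geometric-Brownian-motion steps between monitoring dates and kill any path that leaves the corridor. All contract and market parameters below are illustrative.

```python
import math
import random

def double_barrier_call(s0=100.0, strike=100.0, lo=80.0, hi=120.0, r=0.05,
                        vol=0.2, T=0.5, n_monitor=125, n_paths=50_000, seed=9):
    """Plain Monte Carlo price of a discretely monitored double knock-out call
    under geometric Brownian motion (all parameters are illustrative)."""
    rng = random.Random(seed)
    dt = T / n_monitor
    drift = (r - 0.5 * vol * vol) * dt
    sig = vol * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        s = s0
        alive = True
        for _ in range(n_monitor):
            s *= math.exp(drift + sig * rng.gauss(0.0, 1.0))  # exact GBM step
            if not lo < s < hi:        # knocked out at a monitoring date
                alive = False
                break
        if alive:
            total += max(s - strike, 0.0)
    return math.exp(-r * T) * total / n_paths

print("double-barrier call price ~", round(double_barrier_call(), 4))
```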

  14. Liquid crystal free energy relaxation by a theoretically informed Monte Carlo method using a finite element quadrature approach.

    PubMed

    Armas-Pérez, Julio C; Hernández-Ortiz, Juan P; de Pablo, Juan J

    2015-12-28

    A theoretically informed Monte Carlo method is proposed for Monte Carlo simulation of liquid crystals on the basis of theoretical representations in terms of coarse-grained free energy functionals. The free energy functional is described in the framework of the Landau-de Gennes formalism. A piecewise finite element discretization is used to approximate the alignment field, thereby providing an excellent geometrical representation of curved interfaces and accurate integration of the free energy. The method is suitable for situations where the free energy functional includes highly non-linear terms, including chirality or high-order deformation modes. The validity of the method is established by comparing the results of Monte Carlo simulations to traditional Ginzburg-Landau minimizations of the free energy using a finite difference scheme, and its usefulness is demonstrated in the context of simulations of chiral liquid crystal droplets with and without nanoparticle inclusions.

  15. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    SciTech Connect

    Grimes, Joshua; Celler, Anna

    2014-09-15

    Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with{sup 99m}Tc-hydrazinonicotinamide-Tyr{sup 3}-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate {sup 99m}Tc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for {sup 131}I, {sup 177}Lu, and {sup 90}Y assuming the same biological half-lives as the {sup 99m}Tc labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation and voxel level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for {sup 99m}Tc, {sup 131}I, {sup 177}Lu, and {sup 90}Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90
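
    The voxel S value technique itself is a convolution: the dose map is the time-integrated activity map convolved with a radionuclide- and voxel-size-specific S kernel. The sketch below shows that step with a made-up activity map and a purely illustrative kernel (real S kernels are tabulated, typically generated with Monte Carlo transport codes).

```python
import numpy as np
from scipy.ndimage import convolve

# Hypothetical 3-D time-integrated activity map (arbitrary units per voxel).
rng = np.random.default_rng(15)
tiac = rng.random((20, 20, 20))

# Illustrative 7x7x7 dose-point kernel standing in for a tabulated voxel
# S kernel; the functional form and magnitudes are assumptions.
z, y, x = np.mgrid[-3:4, -3:4, -3:4]
kernel = 1.0 / (1.0 + x * x + y * y + z * z) ** 1.5
kernel[3, 3, 3] = 10.0                  # dominant self-irradiation term

# Voxel S value dose estimate = activity map convolved with the S kernel.
dose = convolve(tiac, kernel, mode="constant")
print("max voxel dose:", round(float(dose.max()), 2))
```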

  16. MC 93 - Proceedings of the International Conference on Monte Carlo Simulation in High Energy and Nuclear Physics

    NASA Astrophysics Data System (ADS)

    Dragovitsch, Peter; Linn, Stephan L.; Burbank, Mimi

    1994-01-01

    The Table of Contents for the book is as follows: * Preface * Heavy Fragment Production for Hadronic Cascade Codes * Monte Carlo Simulations of Space Radiation Environments * Merging Parton Showers with Higher Order QCD Monte Carlos * An Order-αs Two-Photon Background Study for the Intermediate Mass Higgs Boson * GEANT Simulation of Hall C Detector at CEBAF * Monte Carlo Simulations in Radioecology: Chernobyl Experience * UNIMOD2: Monte Carlo Code for Simulation of High Energy Physics Experiments; Some Special Features * Geometrical Efficiency Analysis for the Gamma-Neutron and Gamma-Proton Reactions * GISMO: An Object-Oriented Approach to Particle Transport and Detector Modeling * Role of MPP Granularity in Optimizing Monte Carlo Programming * Status and Future Trends of the GEANT System * The Binary Sectioning Geometry for Monte Carlo Detector Simulation * A Combined HETC-FLUKA Intranuclear Cascade Event Generator * The HARP Nucleon Polarimeter * Simulation and Data Analysis Software for CLAS * TRAP -- An Optical Ray Tracing Program * Solutions of Inverse and Optimization Problems in High Energy and Nuclear Physics Using Inverse Monte Carlo * FLUKA: Hadronic Benchmarks and Applications * Electron-Photon Transport: Always so Good as We Think? Experience with FLUKA * Simulation of Nuclear Effects in High Energy Hadron-Nucleus Collisions * Monte Carlo Simulations of Medium Energy Detectors at COSY Jülich * Complex-Valued Monte Carlo Method and Path Integrals in the Quantum Theory of Localization in Disordered Systems of Scatterers * Radiation Levels at the SSCL Experimental Halls as Obtained Using the CLOR89 Code System * Overview of Matrix Element Methods in Event Generation * Fast Electromagnetic Showers * GEANT Simulation of the RMC Detector at TRIUMF and Neutrino Beams for KAON * Event Display for the CLAS Detector * Monte Carlo Simulation of High Energy Electrons in Toroidal Geometry * GEANT 3.14 vs. EGS4: A Comparison Using the DØ Uranium/Liquid Argon

  17. Iterative Monte Carlo with bead-adapted sampling for complex-time correlation functions.

    PubMed

    Jadhao, Vikram; Makri, Nancy

    2010-03-14

    In a recent communication [V. Jadhao and N. Makri, J. Chem. Phys. 129, 161102 (2008)], we introduced an iterative Monte Carlo (IMC) path integral methodology for calculating complex-time correlation functions. This method constitutes a stepwise evaluation of the path integral on a grid selected by a Monte Carlo procedure, circumventing the exponential growth of statistical error with increasing propagation time, while realizing the advantageous scaling of importance sampling in the grid selection and integral evaluation. In the present paper, we present an improved formulation of IMC, which is based on a bead-adapted sampling procedure; thus leading to grid point distributions that closely resemble the absolute value of the integrand at each iteration. We show that the statistical error of IMC does not grow upon repeated iteration, in sharp contrast to the performance of the conventional path integral approach which leads to exponential increase in statistical uncertainty. Numerical results on systems with up to 13 degrees of freedom and propagation up to 30 times the "thermal" time ℏβ/2 illustrate these features.

  18. Iterative Monte Carlo with bead-adapted sampling for complex-time correlation functions

    NASA Astrophysics Data System (ADS)

    Jadhao, Vikram; Makri, Nancy

    2010-03-01

    In a recent communication [V. Jadhao and N. Makri, J. Chem. Phys. 129, 161102 (2008)], we introduced an iterative Monte Carlo (IMC) path integral methodology for calculating complex-time correlation functions. This method constitutes a stepwise evaluation of the path integral on a grid selected by a Monte Carlo procedure, circumventing the exponential growth of statistical error with increasing propagation time, while realizing the advantageous scaling of importance sampling in the grid selection and integral evaluation. In the present paper, we present an improved formulation of IMC, which is based on a bead-adapted sampling procedure; thus leading to grid point distributions that closely resemble the absolute value of the integrand at each iteration. We show that the statistical error of IMC does not grow upon repeated iteration, in sharp contrast to the performance of the conventional path integral approach which leads to exponential increase in statistical uncertainty. Numerical results on systems with up to 13 degrees of freedom and propagation up to 30 times the "thermal" time ℏβ /2 illustrate these features.

  19. Improving dynamical lattice QCD simulations through integrator tuning using Poisson brackets and a force-gradient integrator

    SciTech Connect

    Clark, Michael A.; Joo, Balint; Kennedy, Anthony D.; Silva, Paolo J.

    2011-10-01

    We show how the integrators used for the molecular dynamics step of the Hybrid Monte Carlo algorithm can be further improved. These integrators not only approximately conserve some Hamiltonian H but conserve exactly a nearby shadow Hamiltonian H~. This property allows for a new tuning method of the molecular dynamics integrator and also allows for a new class of integrators (force-gradient integrators) which is expected to reduce significantly the computational cost of future large-scale gauge field ensemble generation.
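
    The shadow-Hamiltonian property that the tuning method exploits is easy to see numerically: a symplectic integrator's energy error stays bounded and shrinks as O(h²) with the step size, because the trajectory exactly conserves a nearby modified Hamiltonian. The sketch below demonstrates this with plain leapfrog on a unit harmonic oscillator (not a force-gradient integrator; the system and step sizes are illustrative).

```python
def max_energy_drift(h, T, q=1.0, p=0.0):
    """Integrate H = (p^2 + q^2)/2 with leapfrog and track the worst energy error."""
    e0 = 0.5 * (p * p + q * q)
    worst = 0.0
    for _ in range(int(T / h)):
        p -= 0.5 * h * q        # half kick (dV/dq = q)
        q += h * p              # drift
        p -= 0.5 * h * q        # half kick
        worst = max(worst, abs(0.5 * (p * p + q * q) - e0))
    return worst

# Leapfrog exactly conserves a shadow Hamiltonian H~ = H + O(h^2), so the
# energy error stays bounded for all time and shrinks ~ h^2 with step size:
for h in (0.2, 0.1, 0.05):
    print(f"h = {h:<4}  max |H - H0| = {max_energy_drift(h, T=50.0):.2e}")
```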

  20. Resist develop prediction by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sohn, Dong-Soo; Jeon, Kyoung-Ah; Sohn, Young-Soo; Oh, Hye-Keun

    2002-07-01

    Various resist develop models have been suggested to express the phenomena, from the pioneering work of Dill's model in 1975 to the recent Shipley enhanced notch model. The statistical Monte Carlo method can be applied to processes such as development and post-exposure bake. The motions of the developer during the development process were traced by using this method. We have considered that the surface edge roughness of the resist depends on the weight percentage of protected and de-protected polymer in the resist. The results agree well with other papers. This study can be helpful for developing new photoresists and developers that can be used to pattern device features smaller than 100 nm.

  1. Hybrid algorithms in quantum Monte Carlo

    SciTech Connect

    Esler, Kenneth P; Mcminis, Jeremy; Morales, Miguel A; Clark, Bryan K.; Shulenburger, Luke; Ceperley, David M

    2012-01-01

    With advances in algorithms and growing computing power, quantum Monte Carlo (QMC) methods have become a leading contender for high accuracy calculations of the electronic structure of realistic systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores of an SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per element have not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem sizes it can handle. OpenMP/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze its performance on current HPC platforms characterized by various memory and communication hierarchies.

  2. Monte Carlo Studies of Protein Aggregation

    NASA Astrophysics Data System (ADS)

    Jónsson, Sigurður Ægir; Staneva, Iskra; Mohanty, Sandipan; Irbäck, Anders

    The disease-linked amyloid β (Aβ) and α-synuclein (αS) proteins are both fibril-forming and natively unfolded in free monomeric form. Here, we discuss two recent studies, where we used extensive implicit solvent all-atom Monte Carlo (MC) simulations to elucidate the conformational ensembles sampled by these proteins. For αS, we somewhat unexpectedly observed two distinct phases, separated by a clear free-energy barrier. The presence of the barrier makes αS, with 140 residues, a challenge to simulate. By using a two-step simulation procedure based on flat-histogram techniques, it was possible to alleviate this problem. The barrier may in part explain why fibril formation is much slower for αS than it is for Aβ

  3. Nuclear reactions in Monte Carlo codes.

    PubMed

    Ferrari, A; Sala, P R

    2002-01-01

    The physics foundations of hadronic interactions as implemented in most Monte Carlo codes are presented together with a few practical examples. The description of the relevant physics is presented schematically split into the major steps in order to stress the different approaches required for the full understanding of nuclear reactions at intermediate and high energies. Due to the complexity of the problem, only a few semi-qualitative arguments are developed in this paper. The description will be necessarily schematic and somewhat incomplete, but hopefully it will be useful for a first introduction into this topic. Examples are shown mostly for the high energy regime, where all mechanisms mentioned in the paper are at work and to which perhaps most of the readers are less accustomed. Examples for lower energies can be found in the references.

  4. Vectorization of Monte Carlo particle transport

    SciTech Connect

    Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V. (Computer Center; Los Alamos National Lab., NM; Supercomputing Research Center, Bowie, MD)

    1989-01-01

    Fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled with a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups between fifteen and eighteen are observed for the Cyber 205/ETA-10 architectures, and about nine for the CRAY X-MP/Y-MP architectures. The best single-processor execution time for the problem was 0.33 seconds on the ETA-10G, and 0.42 seconds on the CRAY Y-MP. 32 refs., 12 figs., 1 tab.
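
    As a rough illustration of the kind of performance model the abstract describes: the classic Amdahl estimate for vector fraction f and vector speedup v is S = 1/((1 - f) + f/v), and the authors' modification adds a term charging for extra data motion in the vector code. The sketch below is a generic stand-in with a made-up data_motion coefficient, not the paper's calibrated model.

    ```python
    # Hedged sketch: Amdahl's Law with an additive data-motion overhead term.
    # The 0.02 overhead value is purely illustrative.
    def speedup(f: float, v: float, data_motion: float = 0.0) -> float:
        """Overall speedup for vector fraction f and vector speedup v."""
        return 1.0 / ((1.0 - f) + f / v + data_motion)

    print(speedup(0.95, 30.0))        # ~12.2 with no extra data motion
    print(speedup(0.95, 30.0, 0.02))  # ~9.8 once data motion is charged
    ```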

  5. Monte Carlo stratified source-sampling

    SciTech Connect

    Blomquist, R.N.; Gelbard, E.M.

    1997-09-01

    In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "eigenvalue of the world" problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. The original test problem was treated by a special code designed specifically for that purpose. Recently ANL started work on a method for dealing with more realistic "eigenvalue of the world" configurations, and has been incorporating this method into VIM. The original method has been modified to take into account real-world statistical noise sources not included in the model problem. This paper constitutes a status report on work still in progress.

  6. Experimental Monte Carlo Quantum Process Certification

    NASA Astrophysics Data System (ADS)

    Steffen, L.; da Silva, M. P.; Fedorov, A.; Baur, M.; Wallraff, A.

    2012-06-01

    Experimental implementations of quantum information processing have now reached a level of sophistication where quantum process tomography is impractical. The number of experimental settings as well as the computational cost of the data postprocessing now translates to days of effort to characterize even experiments with as few as 8 qubits. Recently a more practical approach to determine the fidelity of an experimental quantum process has been proposed, where the experimental data are compared directly with an ideal process using Monte Carlo sampling. Here, we present an experimental implementation of this scheme in a circuit quantum electrodynamics setup to determine the fidelity of 2-qubit gates, such as the CPHASE and the CNOT gate, and 3-qubit gates, such as the Toffoli gate and two sequential CPHASE gates.

  7. San Carlos Apache Tribe - Energy Organizational Analysis

    SciTech Connect

    Rapp, James; Albert, Steve

    2012-04-01

    The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late-2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: the analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"); start-up staffing and other costs associated with the Phase 1 SCAT energy organization; an intern program; staff training; and tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.

  8. MORSE Monte Carlo radiation transport code system

    SciTech Connect

    Emmett, M.B.

    1983-02-01

    This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo Code, replacement pages containing corrections, Part II of the report which was previously unpublished, and a new Table of Contents. The modifications include a Klein-Nishina estimator for gamma rays. Use of such an estimator required changing the cross section routines to process pair production and Compton scattering cross sections directly from ENDF tapes and writing a new version of subroutine RELCOL. Another modification is the use of free-form input for the SAMBO analysis data. This required changing subroutine SCORIN and adding new subroutine RFRE. References are updated, and errors in the original report have been corrected. (WHK)

  9. Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.

    1998-12-01

    A code package consisting of the Monte Carlo Library MCLIB, the executing code MC_RUN, the web application MC_Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC_RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown.
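
    The particle state enumerated above (vector position and velocity, time of flight, mass, charge, and a polarization vector) maps naturally onto a small record type. The sketch below is a minimal illustration of such a state, not MCLIB's actual data layout; the class and field names are invented for the example.

    ```python
    # Minimal sketch of a Monte Carlo particle state, assuming only the
    # fields listed in the abstract; not the MCLIB structure definitions.
    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class Neutron:
        position: np.ndarray                 # cm
        velocity: np.ndarray                 # cm/s
        tof: float = 0.0                     # time of flight, s
        mass: float = 1.675e-24              # g
        charge: float = 0.0
        polarization: np.ndarray = field(default_factory=lambda: np.zeros(3))

        def advance(self, dt: float) -> None:
            """Free flight for dt seconds, with time-of-flight bookkeeping."""
            self.position = self.position + self.velocity * dt
            self.tof += dt

    n = Neutron(position=np.zeros(3), velocity=np.array([0.0, 0.0, 2.2e5]))
    n.advance(1e-4)
    print(n.position, n.tof)
    ```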

  10. Monte Carlo simulations of medical imaging modalities

    SciTech Connect

    Estes, G.P.

    1998-09-01

    Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.

  11. Coherent scatter imaging Monte Carlo simulation.

    PubMed

    Hassan, Laila; MacDonald, Carolyn A

    2016-07-01

    Conventional mammography can suffer from poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter slot scan imaging is an imaging technique which provides additional information and is compatible with conventional mammography. A Monte Carlo simulation of coherent scatter slot scan imaging was performed to assess its performance and provide system optimization. Coherent scatter could be exploited using a system similar to conventional slot scan mammography system with antiscatter grids tilted at the characteristic angle of cancerous tissues. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The simulated carcinomas were detectable for tumors as small as 5 mm in diameter, so coherent scatter analysis using a wide-slot setup could be promising as an enhancement for screening mammography. Employing coherent scatter information simultaneously with conventional mammography could yield a conventional high spatial resolution image with additional coherent scatter information.

  12. Monte Carlo Simulation of Endlinking Oligomers

    NASA Technical Reports Server (NTRS)

    Hinkley, Jeffrey A.; Young, Jennifer A.

    1998-01-01

    This report describes initial efforts to model the endlinking reaction of phenylethynyl-terminated oligomers. Several different molecular weights were simulated using the Bond Fluctuation Monte Carlo technique on a 20 x 20 x 20 unit lattice with periodic boundary conditions. After a monodisperse "melt" was equilibrated, chain ends were linked whenever they came within the allowed bond distance. Ends remained reactive throughout, so that multiple links were permitted. Even under these very liberal crosslinking assumptions, geometrical factors limited the degree of crosslinking. Average crosslink functionalities were 2.3 to 2.6; surprisingly, they did not depend strongly on the chain length. These results agreed well with the degrees of crosslinking inferred from experiment in a cured phenylethynyl-terminated polyimide oligomer.

  13. Exploring theory space with Monte Carlo reweighting

    SciTech Connect

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
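
    The core of the reweighting idea is that events generated under a benchmark model's density p_old can be re-used for a new model by attaching the weight w = p_new(x)/p_old(x) to each event. A toy sketch under that assumption, with a one-dimensional exponential spectrum standing in for a full matrix-element calculation:

    ```python
    # Hedged sketch of event reweighting; the densities are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def p(x, theta):
        """Toy "matrix element": exponential spectrum with slope theta."""
        return theta * np.exp(-theta * x)

    theta_old, theta_new = 1.0, 1.5
    x = rng.exponential(1.0 / theta_old, size=100_000)  # benchmark-model sample

    w = p(x, theta_new) / p(x, theta_old)               # per-event weights

    # The weighted sample now estimates expectations under the new model:
    print(np.average(x, weights=w), "vs exact", 1.0 / theta_new)
    ```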

  14. Total Monte Carlo evaluation for dose calculations.

    PubMed

    Sjöstrand, H; Alhassan, E; Conroy, S; Duan, J; Hellesen, C; Pomp, S; Österlund, M; Koning, A; Rochman, D

    2014-10-01

    Total Monte Carlo (TMC) is a method to propagate nuclear data (ND) uncertainties in transport codes, by using a large set of ND files, which covers the ND uncertainty. The transport code is run multiple times, each time with a unique ND file, and the result is a distribution of the investigated parameter, e.g. dose, where the width of the distribution is interpreted as the uncertainty due to ND. Until recently, this was computer intensive, but with a new development, fast TMC, more applications are accessible. The aim of this work is to test the fast TMC methodology on a dosimetry application and to propagate the (56)Fe uncertainties on the predictions of the dose outside a proposed 14-MeV neutron facility. The uncertainty was found to be 4.2 %. This can be considered small; however, this cannot be generalised to all dosimetry applications and so ND uncertainties should routinely be included in most dosimetry modelling.
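
    Schematically, the TMC loop runs the same transport calculation once per random nuclear-data file and reads the ND uncertainty off the spread of the resulting doses. In the sketch below, run_transport and the file names are placeholders (the real step is a full transport-code run), with a toy random dose standing in for it:

    ```python
    # Hedged sketch of the Total Monte Carlo loop; everything here is a toy.
    import statistics
    import numpy as np

    def run_transport(seed: int) -> float:
        # Placeholder for a transport run with one random ND file; the toy
        # dose's scatter mimics nuclear-data-driven variation.
        return float(np.random.default_rng(seed).normal(1.0, 0.042))

    nd_files = [f"Fe056-rand-{i:04d}" for i in range(300)]  # hypothetical names
    doses = [run_transport(i) for i in range(len(nd_files))]

    mean, spread = statistics.fmean(doses), statistics.stdev(doses)
    print(f"dose = {mean:.3f} +/- {spread:.3f}  ({100 * spread / mean:.1f} %)")
    ```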

  15. Exploring theory space with Monte Carlo reweighting

DOE PAGES Beta

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  16. Monte Carlo modeling and meteor showers

    NASA Technical Reports Server (NTRS)

    Kulikova, N. V.

    1987-01-01

    Prediction of short-lived increases in the cosmic dust influx, the concentration in the lower thermosphere of atoms and ions of meteor origin, and the determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest, and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.

  17. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, are discussed. The computational time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and from traditional computer architectures.

  18. Accelerated GPU based SPECT Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In, and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator, and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  19. A Monte Carlo simulation approach for flood risk assessment

    NASA Astrophysics Data System (ADS)

    Agili, Hachem; Chokmani, Karem; Oubennaceur, Khalid; Poulin, Jimmy; Marceau, Pascal

    2016-04-01

    Floods are the most frequent and most damaging natural disaster in Canada. The issue of assessing and managing the risk related to this disaster has become increasingly crucial for both local and national authorities. Brigham, a municipality located in southern Quebec Province, is one of the regions most heavily affected, with the Yamaska River overflowing two to three times per year. Since Hurricane Irene hit the region in 2011, causing considerable socio-economic damage, the implementation of mitigation measures has become a major priority for this municipality. To this end, a preliminary study to evaluate the risk to which this region is exposed is essential. Conventionally, approaches based only on the characterization of the hazard (e.g. floodplain extent, flood depth) are adopted to study flood risk. In order to improve the knowledge of this risk, a Monte Carlo simulation approach combining information on the hazard with vulnerability-related aspects of buildings has been developed. This approach integrates three main components: hydrological modeling through flow-probability functions, hydraulic modeling using flow-submersion height functions, and the study of building damage based on damage functions adapted to the Quebec habitat. The application of this approach allows estimating the annual average cost of flood damage to buildings. The results will be useful for local authorities to support their decisions on risk management and prevention.
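
    The chain the abstract outlines (flow probability, then flow to submersion height, then height to building damage, averaged over many simulated years) is easy to see in miniature. A toy sketch with invented curves and parameters, not the study's calibrated functions:

    ```python
    # Hedged sketch of a Monte Carlo annual-average-damage estimate.
    # flow_to_depth and depth_to_damage are illustrative stand-ins.
    import numpy as np

    rng = np.random.default_rng(1)

    def flow_to_depth(q):                # hydraulic model stand-in (m)
        return np.maximum(0.0, 0.002 * (q - 400.0))

    def depth_to_damage(h):              # damage function stand-in ($)
        return np.where(h > 0.0, 25_000.0 * (1.0 - np.exp(-h)), 0.0)

    years = 100_000
    q = rng.gumbel(loc=350.0, scale=80.0, size=years)  # annual peak flow (m^3/s)
    damage = depth_to_damage(flow_to_depth(q))

    print("estimated annual average damage per building: $",
          round(float(damage.mean()), 2))
    ```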

  20. Monte Carlo sampling from the quantum state space. I

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Seah, Yi-Lin; Khoon Ng, Hui; Nott, David John; Englert, Berthold-Georg

    2015-04-01

    High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the standard strategies of rejection sampling, importance sampling, and Markov-chain sampling can be adapted to this context, where the samples must obey the constraints imposed by the positivity of the statistical operator. For illustration, we generate sample points in the probability space of qubits, qutrits, and qubit pairs, both for tomographically complete and incomplete measurements. We use these samples for various purposes: establish the marginal distribution of the purity; compute the fractional volume of separable two-qubit states; and calculate the size of regions with bounded likelihood.

  1. Scalable Metropolis Monte Carlo for simulation of hard shapes

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.

    2016-07-01

    We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.
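
    A serial toy version of the move HPMC parallelizes may help fix ideas: for hard shapes the Boltzmann factor is 0 or 1, so a proposed displacement is accepted exactly when it creates no overlap. Disks in a periodic square box; all parameters are illustrative, and this is not HPMC/HOOMD-blue code:

    ```python
    # Hedged sketch of hard-disk Metropolis Monte Carlo with periodic boundaries.
    import numpy as np

    rng = np.random.default_rng(2)
    L, sigma, n = 20.0, 1.0, 100             # box edge, disk diameter, count
    pos = rng.uniform(0, L, size=(n, 2))     # random start (may overlap at first)

    def overlaps(i, trial):
        d = pos - trial
        d -= L * np.round(d / L)             # minimum-image convention
        r2 = np.einsum("ij,ij->i", d, d)
        r2[i] = np.inf                       # ignore self
        return bool(np.any(r2 < sigma**2))

    accepted = 0
    for sweep in range(100):
        for _ in range(n):
            i = rng.integers(n)
            trial = (pos[i] + rng.uniform(-0.1, 0.1, 2)) % L
            if not overlaps(i, trial):       # hard potential: accept iff no overlap
                pos[i] = trial
                accepted += 1
    print("acceptance fraction:", accepted / (100 * n))
    ```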

  2. Markov chain Monte Carlo methods: an introductory example

    NASA Astrophysics Data System (ADS)

    Klauenberg, Katy; Elster, Clemens

    2016-02-01

    When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms may currently be hindered by the difficulty of assessing the convergence of MCMC output and thus of assuring the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
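
    Since the abstract notes that only a few lines of code are needed, here is a hedged stand-alone sketch of a random-walk Metropolis-Hastings sampler; the standard normal target is merely a stand-in for the paper's metrological example.

    ```python
    # Random-walk Metropolis-Hastings in a few lines; target is illustrative.
    import numpy as np

    rng = np.random.default_rng(3)

    def log_target(x):
        return -0.5 * x * x                  # log of an unnormalized N(0, 1)

    x, chain = 0.0, []
    for _ in range(50_000):
        prop = x + rng.normal(0.0, 1.0)      # symmetric proposal
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop                         # accept
        chain.append(x)                      # on rejection the old state repeats

    burn = np.array(chain[5_000:])           # discard burn-in
    print("posterior mean ~", burn.mean(), " sd ~", burn.std())
    ```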

  3. abcpmc: Approximate Bayesian Computation for Population Monte-Carlo code

    NASA Astrophysics Data System (ADS)

    Akeret, Joel

    2015-04-01

    abcpmc is a Python Approximate Bayesian Computing (ABC) Population Monte Carlo (PMC) implementation based on Sequential Monte Carlo (SMC) with Particle Filtering techniques. It is extendable with k-nearest neighbour (KNN) or optimal local covariance matrix (OLCM) perturbation kernels and has built-in support for massively parallelized sampling on a cluster using MPI.
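
    abcpmc builds population Monte Carlo on top of the basic ABC step. The sketch below shows only that underlying step (plain rejection ABC) with a toy Gaussian model; it does not use the abcpmc interface, and all numbers are illustrative.

    ```python
    # Hedged sketch of rejection ABC: keep prior draws whose simulated
    # summary statistic lands within epsilon of the observed one.
    import numpy as np

    rng = np.random.default_rng(4)
    observed_mean = 2.0
    epsilon = 0.05

    posterior = []
    while len(posterior) < 1_000:
        theta = rng.uniform(0.0, 5.0)                  # prior draw
        sim = rng.normal(theta, 1.0, size=50).mean()   # forward simulation
        if abs(sim - observed_mean) < epsilon:         # distance on summaries
            posterior.append(theta)

    print("ABC posterior mean:", float(np.mean(posterior)))
    ```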

  4. 4. Photographic copy of map. San Carlos Irrigation Project, Gila ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. Photographic copy of map. San Carlos Irrigation Project, Gila River Indian Reservation, Pinal County, Arizona. Department of the Interior. Office of Indian Affairs. 1940. (Source: SCIP Office, Coolidge, AZ) Photograph is an 8" x 10" enlargement from a 4" x 5" negative. - San Carlos Irrigation Project, Lands North & South of Gila River, Coolidge, Pinal County, AZ

  5. 33 CFR 117.267 - Big Carlos Pass.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Big Carlos Pass. 117.267 Section 117.267 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Florida § 117.267 Big Carlos Pass. The draw of...

  6. 33 CFR 117.267 - Big Carlos Pass.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Big Carlos Pass. 117.267 Section 117.267 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Florida § 117.267 Big Carlos Pass. The draw of...

  7. 33 CFR 117.267 - Big Carlos Pass.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Big Carlos Pass. 117.267 Section 117.267 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Florida § 117.267 Big Carlos Pass. The draw of...

  8. 33 CFR 117.267 - Big Carlos Pass.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Big Carlos Pass. 117.267 Section 117.267 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Florida § 117.267 Big Carlos Pass. The draw of...

  9. 33 CFR 117.267 - Big Carlos Pass.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Big Carlos Pass. 117.267 Section 117.267 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Florida § 117.267 Big Carlos Pass. The draw of...

  10. QWalk: A quantum Monte Carlo program for electronic structure

    SciTech Connect

    Wagner, Lucas K.; Bajdich, Michal; Mitas, Lubos

    2009-05-20

    We describe QWalk, a new computational package capable of performing quantum Monte Carlo electronic structure calculations for molecules and solids with many electrons. We describe the structure of the program and its implementation of quantum Monte Carlo methods. It is open-source, licensed under the GPL, and available at the web site (http://www.qwalk.org).

  11. Economic Risk Analysis: Using Analytical and Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    O'Donnell, Brendan R.; Hickner, Michael A.; Barna, Bruce A.

    2002-01-01

    Describes the development and instructional use of a Microsoft Excel spreadsheet template that facilitates analytical and Monte Carlo risk analysis of investment decisions. Discusses a variety of risk assessment methods followed by applications of the analytical and Monte Carlo methods. Uses a case study to illustrate use of the spreadsheet tool…

  12. CARLOS: Computer-Assisted Instruction in Spanish at Dartmouth College.

    ERIC Educational Resources Information Center

    Turner, Ronald C.

    The computer-assisted instruction project in review Spanish, Computer-Assisted Review Lessons on Syntax (CARLOS), initiated at Dartmouth College in 1967-68, is described here. Tables are provided showing the results of the experiment on the basis of aptitude and achievement tests, and the procedure for implementing CARLOS as well as its place in…

  13. Recent Developments in Quantum Monte Carlo: Methods and Applications

    NASA Astrophysics Data System (ADS)

    Aspuru-Guzik, Alan; Austin, Brian; Domin, Dominik; Galek, Peter T. A.; Handy, Nicholas; Prasad, Rajendra; Salomon-Ferrer, Romelia; Umezawa, Naoto; Lester, William A.

    2007-12-01

    The quantum Monte Carlo method in the diffusion Monte Carlo form has become recognized for its capability of describing the electronic structure of atomic, molecular and condensed matter systems to high accuracy. This talk will briefly outline the method with emphasis on recent developments connected with trial function construction, linear scaling, and applications to selected systems.

  14. Adjoint electron-photon transport Monte Carlo calculations with ITS

    SciTech Connect

    Lorence, L.J.; Kensek, R.P.; Halbleib, J.A.; Morel, J.E.

    1995-02-01

    A general adjoint coupled electron-photon Monte Carlo code for solving the Boltzmann-Fokker-Planck equation has recently been created. It is a modified version of ITS 3.0, a coupled electron-photon Monte Carlo code that has world-wide distribution. The applicability of the new code to radiation-interaction problems of the type found in space environments is demonstrated.

  15. Monte Carlo Test Assembly for Item Pool Analysis and Extension

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.

    2005-01-01

    A new test assembly algorithm based on a Monte Carlo random search is presented in this article. A major advantage of the Monte Carlo test assembly over other approaches (integer programming or enumerative heuristics) is that it performs a uniform sampling from the item pool, which provides every feasible item combination (test) with an equal…

  16. Quantum Monte Carlo using a Stochastic Poisson Solver

    SciTech Connect

    Das, D; Martin, R M; Kalos, M H

    2005-05-06

    Quantum Monte Carlo (QMC) is an extremely powerful method to treat many-body systems. Usually quantum Monte Carlo has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment such as a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from any grid-based finite-difference method, for every walker and every step, is infeasible. We demonstrate an alternative approach of solving the Poisson equation by a classical Monte Carlo within the overall quantum Monte Carlo scheme. We have developed a modified "Walk On Spheres" algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations, typical of quantum Monte Carlo algorithms. This stochastically obtained potential can be easily incorporated within popular quantum Monte Carlo techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor.
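
    The flavor of a "Walk On Spheres" estimator can be shown on the simplest case: a harmonic function's value at a point equals the expected boundary value at the spot where the walk first (nearly) reaches the boundary. A toy sketch for the unit disk with boundary data g(x, y) = x, whose harmonic extension is u = x (so the estimate at (0.3, 0.4) should approach 0.3); this is not the authors' modified Green's-function version:

    ```python
    # Hedged sketch of walk-on-spheres for the Laplace equation on the unit disk.
    import numpy as np

    rng = np.random.default_rng(5)

    def g(p):                                # boundary data
        return p[0]

    def wos(p0, eps=1e-4, walkers=5_000):
        total = 0.0
        for _ in range(walkers):
            p = np.array(p0, dtype=float)
            while True:
                r = 1.0 - np.linalg.norm(p)  # distance to the circle
                if r < eps:
                    total += g(p / np.linalg.norm(p))  # project to boundary
                    break
                phi = rng.uniform(0.0, 2.0 * np.pi)    # jump to the sphere
                p = p + r * np.array([np.cos(phi), np.sin(phi)])
        return total / walkers

    print(wos((0.3, 0.4)))                   # ~0.3
    ```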

  17. Fission Matrix Capability for MCNP Monte Carlo

    SciTech Connect

    Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.

    2012-09-05

    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP[1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
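
    What the tallied fission matrix buys is the ability to extract the fundamental eigenpair deterministically: its dominant eigenvalue approximates k_eff and its dominant eigenvector the converged fission source. A sketch with a made-up 3x3 kernel, assuming the matrix has already been tallied:

    ```python
    # Hedged sketch: power iteration on a toy spatially discretized
    # fission matrix; F and its values are invented for illustration.
    import numpy as np

    F = np.array([[0.60, 0.20, 0.05],
                  [0.20, 0.60, 0.20],
                  [0.05, 0.20, 0.60]])

    s = np.ones(3) / 3.0                     # initial fission source guess
    for _ in range(200):
        s_new = F @ s
        k = s_new.sum() / s.sum()            # eigenvalue estimate (k_eff)
        s = s_new / s_new.sum()              # normalized fission source

    print("k_eff ~", k, " source shape:", s)
    ```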

  18. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  19. Biopolymer structure simulation and optimization via fragment regrowth Monte Carlo.

    PubMed

    Zhang, Jinfeng; Kou, S C; Liu, Jun S

    2007-06-14

    An efficient exploration of the configuration space of a biopolymer is essential for its structure modeling and prediction. In this study, the authors propose a new Monte Carlo method, fragment regrowth via energy-guided sequential sampling (FRESS), which incorporates the idea of multigrid Monte Carlo into the framework of configurational-bias Monte Carlo and is suitable for chain polymer simulations. As a by-product, the authors also found a novel extension of the Metropolis Monte Carlo framework applicable to all Monte Carlo computations. They tested FRESS on hydrophobic-hydrophilic (HP) protein folding models in both two and three dimensions. For the benchmark sequences, FRESS not only found all the minimum energies obtained by previous studies with substantially less computation time but also found new lower energies for all the three-dimensional HP models with sequence length longer than 80 residues.

  20. Monte Carlo Simulation of Secondary Electron Emission from Dielectric Targets

    NASA Astrophysics Data System (ADS)

    Dapor, Maurizio

    2012-12-01

    In modern physics we are interested in systems with many degrees of freedom. The Monte Carlo (MC) method gives us a very accurate way to calculate definite integrals of high dimension: it evaluates the integrand at a random sampling of abscissas. MC is also used for evaluating the many physical quantities necessary to the study of the interactions of particle beams with solid targets. Letting the particles carry out an artificial random walk and taking into account the effect of the single collisions, it is possible to accurately evaluate the diffusion process. Secondary electron emission is a process where primary incident electrons impinging on a surface induce the emission of secondary electrons. The number of secondary electrons emitted divided by the number of incident electrons is the so-called secondary electron emission yield. The secondary electron emission yield is conventionally measured as the integral of the secondary electron energy distribution in the emitted electron energy range from 0 to 50 eV. The problem of the determination of secondary electron emission from solids irradiated by a particle beam is of crucial importance, especially in connection with the analytical techniques that utilize secondary electrons to investigate chemical and compositional properties of solids in the near surface layers. Secondary electrons are used for imaging in scanning electron microscopes, with applications ranging from secondary electron doping contrast in p-n junctions, line-width measurement in critical-dimension scanning electron microscopy, to the study of biological samples. In this work, the main mechanisms of scattering and energy loss of electrons scattered in dielectric materials are briefly treated. The present MC scheme takes into account all the single energy losses suffered by each electron in the secondary electron cascade, and is rather accurate for the calculation of the secondary electron yield and energy distribution as well.
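
    The opening idea, estimating a definite integral by averaging the integrand over random abscissas, fits in a few lines; the 10-dimensional integrand below is an arbitrary example with known value 10/3:

    ```python
    # Hedged sketch of plain Monte Carlo integration over the unit hypercube.
    import numpy as np

    rng = np.random.default_rng(6)
    d, n = 10, 200_000
    x = rng.uniform(size=(n, d))
    f = (x ** 2).sum(axis=1)          # integrand: sum_i x_i^2 (exact: 10/3)

    est = f.mean()                    # MC estimate of the integral
    err = f.std() / np.sqrt(n)        # 1/sqrt(n) statistical error
    print(est, "+/-", err)
    ```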

  1. Protecting marine biodiversity to preserve ecosystem functioning: A tribute to Carlo Heip

    NASA Astrophysics Data System (ADS)

    Herman, Peter; Warwick, Richard; Aller, Robert; Arvanitidis, Christos; Hewitt, Judi; Stal, Lucas; Vincx, Magda

    2015-04-01

    Carlo Heip was the highly respected Editor-In-Chief of the Journal of Sea Research until his untimely death on 15 February 2013. As a tribute, the Journal wished to organize a special volume in his honour, the scope of which would provide an overview of the current state of affairs and the future outlook of marine biodiversity, a field of research to which Carlo made a major contribution. The volume places special emphasis on how marine biodiversity links to ecosystem functioning. Authors were invited to address such issues as: Which ecosystem functions are vulnerable to loss of biodiversity and how is the relation causally structured? How do trophic and non-trophic networks in ecosystems function and how do they depend on biodiversity? What is the role of spatial structuring for biodiversity? What is the role of biodiversity in biogeochemical fluxes at different scales? What are the new frontiers in the study of marine biodiversity and how can functional aspects be integrated in them? In this approach we wanted to cover a broad range of organisms reflecting Carlo's interests, the whole marine area from coastal systems to the deep sea, and spatial scales from single locations to worldwide databases.

  2. Monte Carlo simulations on the Lefschetz thimble: Taming the sign problem

    NASA Astrophysics Data System (ADS)

    Cristoforetti, Marco; Di Renzo, Francesco; Mukherjee, Abhishek; Scorzato, Luigi

    2013-09-01

    We present the first practical Monte Carlo calculations of the recently proposed Lefschetz thimble formulation of quantum field theories. Our results provide strong evidence that the numerical sign problem that afflicts Monte Carlo calculations of models with complex actions can be softened significantly by changing the domain of integration to the Lefschetz thimble or approximations thereof. We study the interacting complex scalar field theory (relativistic Bose gas) on lattices of size up to 8^4 using a computationally inexpensive approximation of the Lefschetz thimble. Our results are in excellent agreement with known results. We show that—at least in the case of the relativistic Bose gas—the thimble can be systematically approached and the remaining residual phase leads to a much more tractable sign problem (if at all) than the original formulation. This is especially encouraging in view of the wide applicability—in principle—of our method to quantum field theories with a sign problem. We believe that this opens up new possibilities for accurate Monte Carlo calculations in strongly interacting systems of sizes much larger than previously possible.

  3. Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo

    By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) root distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integrals of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.

  4. Top Quark Mass Measurement in the lepton+jets Channel Using a Matrix Element Method and in situ Jet Energy Calibration

    SciTech Connect

    Aaltonen, T.; Brucken, E.; Devoto, F.; Mehtala, P.; Orava, R.; Alvarez Gonzalez, B.; Casal, B.; Gomez, G.; Palencia, E.; Rodrigo, T.; Ruiz, A.; Scodellaro, L.; Vila, I.; Vilar, R.; Amerio, S.; Dorigo, T.; Gresele, A.; Lazzizzera, I.; Amidei, D.; Campbell, M.

    2010-12-17

    A precision measurement of the top quark mass mt is obtained using a sample of tt̄ events from pp̄ collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration taking into account finite detector resolution and jet mass effects. The event likelihood is a function of mt and a parameter ΔJES used to calibrate the jet energy scale in situ. Using a total of 1087 events in 5.6 fb-1 of integrated luminosity, a value of mt = 173.0 ± 1.2 GeV/c2 is measured.

  5. Top Quark Mass Measurement in the lepton+jets Channel Using a Matrix Element Method and in situ Jet Energy Calibration

    NASA Astrophysics Data System (ADS)

    Aaltonen, T.; Álvarez González, B.; Amerio, S.; Amidei, D.; Anastassov, A.; Annovi, A.; Antos, J.; Apollinari, G.; Appel, J. A.; Apresyan, A.; Arisawa, T.; Artikov, A.; Asaadi, J.; Ashmanskas, W.; Auerbach, B.; Aurisano, A.; Azfar, F.; Badgett, W.; Barbaro-Galtieri, A.; Barnes, V. E.; Barnett, B. A.; Barria, P.; Bartos, P.; Bauce, M.; Bauer, G.; Bedeschi, F.; Beecher, D.; Behari, S.; Bellettini, G.; Bellinger, J.; Benjamin, D.; Beretvas, A.; Bhatti, A.; Binkley, M.; Bisello, D.; Bizjak, I.; Bland, K. R.; Blumenfeld, B.; Bocci, A.; Bodek, A.; Bortoletto, D.; Boudreau, J.; Boveia, A.; Brau, B.; Brigliadori, L.; Brisuda, A.; Bromberg, C.; Brucken, E.; Bucciantonio, M.; Budagov, J.; Budd, H. S.; Budd, S.; Burkett, K.; Busetto, G.; Bussey, P.; Buzatu, A.; Calancha, C.; Camarda, S.; Campanelli, M.; Campbell, M.; Canelli, F.; Canepa, A.; Carls, B.; Carlsmith, D.; Carosi, R.; Carrillo, S.; Carron, S.; Casal, B.; Casarsa, M.; Castro, A.; Catastini, P.; Cauz, D.; Cavaliere, V.; Cavalli-Sforza, M.; Cerri, A.; Cerrito, L.; Chen, Y. C.; Chertok, M.; Chiarelli, G.; Chlachidze, G.; Chlebana, F.; Cho, K.; Chokheli, D.; Chou, J. P.; Chung, W. H.; Chung, Y. S.; Ciobanu, C. I.; Ciocci, M. A.; Clark, A.; Compostella, G.; Convery, M. E.; Conway, J.; Corbo, M.; Cordelli, M.; Cox, C. A.; Cox, D. J.; Crescioli, F.; Cuenca Almenar, C.; Cuevas, J.; Culbertson, R.; Dagenhart, D.; D'Ascenzo, N.; Datta, M.; de Barbaro, P.; de Cecco, S.; de Lorenzo, G.; Dell'Orso, M.; Deluca, C.; Demortier, L.; Deng, J.; Deninno, M.; Devoto, F.; D'Errico, M.; di Canto, A.; di Ruzza, B.; Dittmann, J. R.; D'Onofrio, M.; Donati, S.; Dong, P.; Dorigo, T.; Ebina, K.; Elagin, A.; Eppig, A.; Erbacher, R.; Errede, D.; Errede, S.; Ershaidat, N.; Eusebi, R.; Fang, H. C.; Farrington, S.; Feindt, M.; Fernandez, J. P.; Ferrazza, C.; Field, R.; Flanagan, G.; Forrest, R.; Frank, M. J.; Franklin, M.; Freeman, J. C.; Furic, I.; Gallinaro, M.; Galyardt, J.; Garcia, J. E.; Garfinkel, A. F.; Garosi, P.; Gerberich, H.; Gerchtein, E.; Giagu, S.; Giakoumopoulou, V.; Giannetti, P.; Gibson, K.; Ginsburg, C. M.; Giokaris, N.; Giromini, P.; Giunta, M.; Giurgiu, G.; Glagolev, V.; Glenzinski, D.; Gold, M.; Goldin, D.; Goldschmidt, N.; Golossanov, A.; Gomez, G.; Gomez-Ceballos, G.; Goncharov, M.; González, O.; Gorelov, I.; Goshaw, A. T.; Goulianos, K.; Gresele, A.; Grinstein, S.; Grosso-Pilcher, C.; Group, R. C.; Guimaraes da Costa, J.; Gunay-Unalan, Z.; Haber, C.; Hahn, S. R.; Halkiadakis, E.; Hamaguchi, A.; Han, J. Y.; Happacher, F.; Hara, K.; Hare, D.; Hare, M.; Harr, R. F.; Hatakeyama, K.; Hays, C.; Heck, M.; Heinrich, J.; Herndon, M.; Hewamanage, S.; Hidas, D.; Hocker, A.; Hopkins, W.; Horn, D.; Hou, S.; Hughes, R. E.; Hurwitz, M.; Husemann, U.; Hussain, N.; Hussein, M.; Huston, J.; Introzzi, G.; Iori, M.; Ivanov, A.; James, E.; Jang, D.; Jayatilaka, B.; Jeon, E. J.; Jha, M. K.; Jindariani, S.; Johnson, W.; Jones, M.; Joo, K. K.; Jun, S. Y.; Junk, T. R.; Kamon, T.; Karchin, P. E.; Kato, Y.; Ketchum, W.; Keung, J.; Khotilovich, V.; Kilminster, B.; Kim, D. H.; Kim, H. S.; Kim, H. W.; Kim, J. E.; Kim, M. J.; Kim, S. B.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kirby, M.; Klimenko, S.; Kondo, K.; Kong, D. J.; Konigsberg, J.; Kotwal, A. V.; Kreps, M.; Kroll, J.; Krop, D.; Krumnack, N.; Kruse, M.; Krutelyov, V.; Kuhr, T.; Kurata, M.; Kwang, S.; Laasanen, A. T.; Lami, S.; Lammel, S.; Lancaster, M.; Lander, R. L.; Lannon, K.; Lath, A.; Latino, G.; Lazzizzera, I.; Lecompte, T.; Lee, E.; Lee, H. S.; Lee, J. S.; Lee, S. W.; Leo, S.; Leone, S.; Lewis, J. 
D.; Lin, C.-J.; Linacre, J.; Lindgren, M.; Lipeles, E.; Lister, A.; Litvintsev, D. O.; Liu, C.; Liu, Q.; Liu, T.; Lockwitz, S.; Lockyer, N. S.; Loginov, A.; Lucchesi, D.; Lueck, J.; Lujan, P.; Lukens, P.; Lungu, G.; Lys, J.; Lysak, R.; Madrak, R.; Maeshima, K.; Makhoul, K.; Maksimovic, P.; Malik, S.; Manca, G.; Manousakis-Katsikakis, A.; Margaroli, F.; Marino, C.; Martínez, M.; Martínez-Ballarín, R.; Mastrandrea, P.; Mathis, M.; Mattson, M. E.; Mazzanti, P.; McFarland, K. S.; McIntyre, P.; McNulty, R.; Mehta, A.; Mehtala, P.; Menzione, A.; Mesropian, C.; Miao, T.; Mietlicki, D.; Mitra, A.; Miyake, H.; Moed, S.; Moggi, N.; Mondragon, M. N.; Moon, C. S.; Moore, R.; Morello, M. J.; Morlock, J.; Movilla Fernandez, P.; Mukherjee, A.; Muller, Th.; Murat, P.; Mussini, M.; Nachtman, J.; Nagai, Y.; Naganoma, J.; Nakano, I.; Napier, A.; Nett, J.; Neu, C.; Neubauer, M. S.; Nielsen, J.; Nodulman, L.; Norniella, O.; Nurse, E.; Oakes, L.; Oh, S. H.; Oh, Y. D.; Oksuzian, I.; Okusawa, T.; Orava, R.; Ortolan, L.; Pagan Griso, S.; Pagliarone, C.; Palencia, E.; Papadimitriou, V.; Paramonov, A. A.; Patrick, J.; Pauletta, G.; Paulini, M.; Paus, C.; Pellett, D. E.; Penzo, A.; Phillips, T. J.; Piacentino, G.; Pianori, E.; Pilot, J.; Pitts, K.; Plager, C.; Pondrom, L.; Potamianos, K.; Poukhov, O.; Prokoshin, F.; Pronko, A.; Ptohos, F.; Pueschel, E.; Punzi, G.; Pursley, J.; Rahaman, A.; Ramakrishnan, V.; Ranjan, N.; Redondo, I.; Renton, P.; Rescigno, M.; Rimondi, F.; Ristori, L.; Robson, A.; Rodrigo, T.; Rodriguez, T.; Rogers, E.; Rolli, S.; Roser, R.; Rossi, M.; Rubbo, F.; Ruffini, F.; Ruiz, A.; Russ, J.; Rusu, V.; Safonov, A.; Sakumoto, W. K.; Santi, L.; Sartori, L.; Sato, K.; Saveliev, V.; Savoy-Navarro, A.; Schlabach, P.; Schmidt, A.; Schmidt, E. E.; Schmidt, M. P.; Schmitt, M.; Schwarz, T.; Scodellaro, L.; Scribano, A.; Scuri, F.; Sedov, A.; Seidel, S.; Seiya, Y.; Semenov, A.; Sforza, F.; Sfyrla, A.; Shalhout, S. Z.; Shears, T.; Shepard, P. F.; Shimojima, M.; Shiraishi, S.; Shochet, M.; Shreyber, I.; Siegrist, J.; Simonenko, A.; Sinervo, P.; Sissakian, A.; Sliwa, K.; Smith, J. R.; Snider, F. D.; Soha, A.; Somalwar, S.; Sorin, V.; Squillacioti, P.; Stanitzki, M.; Denis, R. St.; Stelzer, B.; Stelzer-Chilton, O.; Stentz, D.; Strologas, J.; Strycker, G. L.; Sudo, Y.; Sukhanov, A.; Suslov, I.; Takemasa, K.; Takeuchi, Y.; Tang, J.; Tecchio, M.; Teng, P. K.; Thom, J.; Thome, J.; Thompson, G. A.; Thomson, E.; Ttito-Guzmán, P.; Tkaczyk, S.; Toback, D.; Tokar, S.; Tollefson, K.; Tomura, T.; Tonelli, D.; Torre, S.; Torretta, D.; Totaro, P.; Trovato, M.; Tu, Y.; Turini, N.; Ukegawa, F.; Uozumi, S.; Varganov, A.; Vataga, E.; Vázquez, F.; Velev, G.; Vellidis, C.; Vidal, M.; Vila, I.; Vilar, R.; Volobouev, I.; Vogel, M.; Volpi, G.; Wagner, P.; Wagner, R. L.; Wakisaka, T.; Wallny, R.; Wang, S. M.; Warburton, A.; Waters, D.; Weinberger, M.; Wester, W. C., III; Whitehouse, B.; Whiteson, D.; Wicklund, A. B.; Wicklund, E.; Wilbur, S.; Wick, F.; Williams, H. H.; Wilson, J. S.; Wilson, P.; Winer, B. L.; Wittich, P.; Wolbers, S.; Wolfe, H.; Wright, T.; Wu, X.; Wu, Z.; Yamamoto, K.; Yamaoka, J.; Yang, T.; Yang, U. K.; Yang, Y. C.; Yao, W.-M.; Yeh, G. P.; Yi, K.; Yoh, J.; Yorita, K.; Yoshida, T.; Yu, G. B.; Yu, I.; Yu, S. S.; Yun, J. C.; Zanetti, A.; Zeng, Y.; Zucchelli, S.

    2010-12-01

    A precision measurement of the top quark mass mt is obtained using a sample of tt̄ events from pp̄ collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration taking into account finite detector resolution and jet mass effects. The event likelihood is a function of mt and a parameter ΔJES used to calibrate the jet energy scale in situ. Using a total of 1087 events in 5.6 fb-1 of integrated luminosity, a value of mt = 173.0 ± 1.2 GeV/c2 is measured.

  6. Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo

    SciTech Connect

    Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.

    2014-10-01

    We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and indicate that the high-pressure melt curve is well described by a Lindemann behavior up to at least 80 GPa, a finding in stark contrast to DAC data.
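
    The thermodynamic integration referred to above rests on the identity F_QMC - F_DFT = \int_0^1 <U_QMC - U_DFT>_lambda d(lambda) along the coupling U(lambda) = (1 - lambda) U_DFT + lambda U_QMC. A sketch of the final quadrature step only, with made-up numbers standing in for the sampled per-lambda averages:

    ```python
    # Hedged sketch of the thermodynamic-integration quadrature; the dU
    # values are hypothetical stand-ins for sampled ensemble averages.
    import numpy as np

    lam = np.linspace(0.0, 1.0, 11)
    # <U_QMC - U_DFT> at each lambda, e.g. from independent equilibrium runs:
    dU = 0.8 - 0.3 * lam + 0.05 * lam**2     # hypothetical smooth data (eV)

    delta_F = np.trapz(dU, lam)              # integrate over the coupling path
    print("F_QMC - F_DFT ~", delta_F, "eV per cell")
    ```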

  7. Monte Carlo sampling of Wigner functions and surface hopping quantum dynamics

    NASA Astrophysics Data System (ADS)

    Kube, Susanna; Lasser, Caroline; Weber, Marcus

    2009-04-01

    The article addresses the achievable accuracy of a Monte Carlo sampling of Wigner functions in combination with a surface hopping algorithm for non-adiabatic quantum dynamics. The approximation of Wigner functions is realized by an adaptation of the Metropolis algorithm for real-valued functions with disconnected support. The integration, which is necessary for computing values of the Wigner function, uses importance sampling with a Gaussian weight function. The numerical experiments agree with theoretical considerations and show an error of 2-3%.

  8. Quantum Monte Carlo study of dipolar lattice bosons in the presence of random diagonal disorder

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Safavi-Naini, Arghavan; Capogrosso-Sansone, Barbara

    2015-05-01

    We report the results of our study of dipolar bosons in a two-dimensional optical lattice in the presence of random diagonal disorder using path-integral quantum Monte Carlo simulations. We study the phase diagram at half filling, which features three phases: superfluid, checkerboard solid, and Bose glass. We observe that, in contrast to the standard Bose-Hubbard model in the presence of diagonal disorder, superfluidity is destroyed at considerably lower disorder strengths in favor of the Bose glass phase. Additionally, we find that as the disorder strength increases, a larger dipolar interaction is required in order to stabilize the checkerboard solid.

  9. Monte Carlo method for determining free-energy differences and transition state theory rate constants

    SciTech Connect

    Voter, A.F.

    1985-02-15

    We present a new Monte Carlo procedure for determining the Helmholtz free-energy difference between two systems that are separated in configuration space. Unlike most standard approaches, no integration over intermediate potentials is required. A Metropolis walk is performed for each system, and the average Metropolis acceptance probability for a hypothetical step along a probe vector into the other system is accumulated. Either classical or quantum free energies may be computed, and the procedure is also ideally suited for evaluating generalized transition state theory rate constants. As an application we determine the relative free energies of three configurations of a tungsten dimer on the W(110) surface.
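
    The estimator described above can be checked on a toy problem. For the Metropolis acceptance function, exp(-beta * DeltaF) equals the ratio of the two accumulated average acceptance probabilities for hypothetical hops along the probe vector. Below, two 1-D harmonic wells with exact DeltaF = ln 2 (at beta = 1); exact Gaussian draws stand in for the paper's Metropolis walks, and all parameters are illustrative:

    ```python
    # Hedged numerical sketch of the acceptance-probability free-energy method.
    import numpy as np

    rng = np.random.default_rng(7)
    beta, d, n = 1.0, 5.0, 200_000
    U_A = lambda x: 0.5 * x**2               # well A: k = 1, centered at 0
    U_B = lambda x: 2.0 * (x - 5.0)**2       # well B: k = 4, centered at 5

    xA = rng.normal(0.0, 1.0, n)             # samples from exp(-beta * U_A)
    xB = rng.normal(5.0, 0.5, n)             # samples from exp(-beta * U_B)

    # Average Metropolis acceptance for a hop along +d (A->B) and -d (B->A):
    accAB = np.minimum(1.0, np.exp(-beta * (U_B(xA + d) - U_A(xA))))
    accBA = np.minimum(1.0, np.exp(-beta * (U_A(xB - d) - U_B(xB))))

    dF = -np.log(accAB.mean() / accBA.mean()) / beta
    print(dF, "vs exact", np.log(2.0))
    ```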

  10. Monte-Carlo particle dynamics in a variable specific impulse magnetoplasma rocket

    SciTech Connect

    Ilin, A.V.; Diaz, F.R.C.; Squire, J.P.

    1999-01-01

    The self-consistent mathematical model of a Variable Specific Impulse Magnetoplasma Rocket (VASIMR) is examined. Of particular importance is the effect of a magnetic nozzle in enhancing the axial momentum of the exhaust. Also, different geometries and rocket symmetries are considered. The magnetic configuration is modeled with an adaptable mesh, which increases accuracy without compromising the speed of the simulation. The single-particle trajectories are integrated with an adaptive time scheme, which can quickly solve extensive Monte Carlo simulations for systems of hundreds of thousands of particles in a reasonable time (1-2 hours) and without the need for a powerful supercomputer.

  11. Recent advances and future prospects for Monte Carlo

    SciTech Connect

    Brown, Forrest B

    2010-01-01

    The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.

  12. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    SciTech Connect

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

  13. Variance reduction in Monte Carlo analysis of rarefied gas diffusion.

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.

    1972-01-01

    The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.
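
    The sketch below illustrates the same principle on a toy Markov walk: the transition probabilities are biased toward the scoring boundary, and a multiplicative weight restores the analog expectation, so the mean payoff is unchanged while its variance drops. The lattice size and bias value are arbitrary choices, not parameters from the paper.

```python
import random

random.seed(1)

# Toy Markov walk on sites 0..N: start at 1, absorb at both ends, score 1 on
# reaching N. Analog transitions are symmetric; we play biased ones instead.
N = 12
P_RIGHT_ANALOG = 0.5
P_RIGHT_BIASED = 0.7     # arbitrary bias toward the scoring boundary

def walk():
    """One biased walk; the weight keeps the expected payoff unbiased."""
    pos, weight = 1, 1.0
    while 0 < pos < N:
        if random.random() < P_RIGHT_BIASED:
            weight *= P_RIGHT_ANALOG / P_RIGHT_BIASED              # went right
            pos += 1
        else:
            weight *= (1 - P_RIGHT_ANALOG) / (1 - P_RIGHT_BIASED)  # went left
            pos -= 1
    return weight if pos == N else 0.0

M = 100000
scores = [walk() for _ in range(M)]
mean = sum(scores) / M
var = sum((s - mean) ** 2 for s in scores) / (M - 1)
print(f"estimate = {mean:.5f}, sample variance = {var:.3e}")
```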

  14. Diffusion Monte Carlo in internal coordinates.

    PubMed

    Petit, Andrew S; McCoy, Anne B

    2013-08-15

    An internal coordinate extension of diffusion Monte Carlo (DMC) is described as a first step toward a generalized reduced-dimensional DMC approach. The method places no constraints on the choice of internal coordinates other than the requirement that they all be independent. Using H3+ and its isotopologues as model systems, the methodology is shown to be capable of successfully describing the ground state properties of molecules that undergo large amplitude, zero-point vibrational motions. Combining the approach developed here with the fixed-node approximation allows vibrationally excited states to be treated. Analysis of the ground state probability distribution is shown to provide important insights into the set of internal coordinates that are less strongly coupled and therefore more suitable for use as the nodal coordinates for the fixed-node DMC calculations. In particular, the curvilinear normal mode coordinates are found to provide reasonable nodal surfaces for the fundamentals of H2D+ and D2H+ despite both molecules being highly fluxional.

  15. Biofilm growth: a lattice Monte Carlo model

    NASA Astrophysics Data System (ADS)

    Tao, Yuguo; Slater, Gary

    2011-03-01

    Biofilms are complex colonies of bacteria that grow in contact with a wall, often in the presence of a flow. In the current work, biofilm growth is investigated using a new two-dimensional lattice Monte Carlo algorithm based on the Bond-Fluctuation Algorithm (BFA). One of the distinguishing characteristics of biofilms, the synthesis and physical properties of the extracellular polymeric substance (EPS) in which the cells are embedded, is explicitly taken into account. Cells are modelled as autonomous closed loops with well-defined mechanical and thermodynamic properties, while the EPS is modelled as flexible polymeric chains. This BFA model allows us to add biologically relevant features such as: the uptake of nutrients; cell growth, division and death; the production of EPS; cell maintenance and hibernation; the generation of waste and the impact of toxic molecules; cell mutation and evolution; cell motility. By tuning the structural, interactional and morphologic parameters of the model, the cell shapes as well as the growth and maturation of various types of biofilm colonies can be controlled.

  16. Monte Carlo Approach To Gomos Ozone Retrieval

    NASA Astrophysics Data System (ADS)

    Tamminen, J.; Kyrölä, E.

    Satellite measurements of the atmosphere are indirect and therefore the data processing requires inverse methods. In this paper we apply the Bayesian approach and use the Markov chain Monte Carlo (MCMC) method for solving the retrieval problem of GOMOS measurements. With the MCMC method we are able to compute the true nonlinear posterior distribution of the solution without linearizing the problem. The MCMC technique can easily be implemented in a great variety of retrieval problems, including nonlinear problems with various prior or noise structures. Therefore, MCMC methods, though somewhat slow for operational processing of large amounts of data, provide excellent tools for development and validation purposes. Moreover, when the signal-to-noise ratio is poor, the MCMC methods can be used to find even the faintest fingerprints of the absorbers in the signal. The MCMC methods, and especially the reversible jump MCMC, can also be used in problems where the dimension of the model space is unknown. We will discuss the possibility of using the MCMC approach in a model selection problem as well, namely, for choosing the model for the wavelength dependence of the aerosol cross sections and studying the optimal constituent set to be retrieved.
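
    A minimal sketch of the approach, assuming a toy one-parameter transmission retrieval rather than real GOMOS data: random-walk Metropolis draws from the exact nonlinear posterior with no linearization of the forward model. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy nonlinear retrieval: transmission T(lambda) = exp(-sigma * rho) with
# Gaussian noise; the cross-section profile and the truth are illustrative.
sigma = np.linspace(0.5, 2.0, 30)          # absorption cross sections (arb.)
rho_true = 1.3                             # line density to retrieve
noise = 0.05
data = np.exp(-sigma * rho_true) + noise * rng.standard_normal(sigma.size)

def log_post(rho):
    if rho < 0:                            # flat positivity prior
        return -np.inf
    resid = data - np.exp(-sigma * rho)    # nonlinear forward model, as-is
    return -0.5 * np.sum(resid ** 2) / noise ** 2

# Random-walk Metropolis: accept with probability min(1, pi'/pi)
chain, rho, lp = [], 1.0, log_post(1.0)
for _ in range(20000):
    prop = rho + 0.05 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        rho, lp = prop, lp_prop
    chain.append(rho)

burned = np.array(chain[5000:])            # discard burn-in
print(f"posterior mean = {burned.mean():.3f} +/- {burned.std():.3f}")
```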

  17. Monte Carlo Simulation of River Meander Modelling

    NASA Astrophysics Data System (ADS)

    Posner, A. J.; Duan, J. G.

    2010-12-01

    This study first compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johanesson and Parker (1989b). Ikeda et al.'s (1981) linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with physical properties of the bank (e.g. cohesiveness, stratigraphy, vegetation density). The developed model was used to predict the evolution of meandering planforms. Then, the modeling results were analyzed and compared to the observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures are formulated in order to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution. Due to the random nature of the bank erosion coefficient, the meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model. The approach couples the quasi-2D Ikeda (1989) flow solution with Monte Carlo simulation of the bank erosion coefficient.
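
    The stochastic treatment can be sketched in a few lines: draw the bank erosion coefficient from an assumed distribution and propagate each draw through an Ikeda-type linear migration law, yielding an ensemble of planform shifts instead of a single deterministic one. The lognormal parameters and near-bank velocity below are placeholders, not calibrated values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Minimal sketch: treat the bank-erosion coefficient E as a random variable
# and propagate it through a linear migration law dn/dt = E * u_b (Ikeda-type).
n_runs, n_years = 5000, 50
u_b = 0.4                                  # near-bank excess velocity [m/s]
E = rng.lognormal(mean=np.log(1e-7), sigma=0.5, size=n_runs)  # dimensionless

seconds_per_year = 3.15e7
migration = E * u_b * seconds_per_year * n_years   # total lateral shift [m]

print(f"median migration = {np.median(migration):.1f} m")
print(f"90% interval     = [{np.percentile(migration, 5):.1f}, "
      f"{np.percentile(migration, 95):.1f}] m")
```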

  18. Atomistic Monte Carlo Simulation of Lipid Membranes

    PubMed Central

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314

  19. Monte Carlo Production Management at CMS

    NASA Astrophysics Data System (ADS)

    Boudoul, G.; Franzoni, G.; Norkus, A.; Pol, A.; Srimanobhas, P.; Vlimant, J.-R.

    2015-12-01

    The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile up). In order to aggregate the information needed for the configuration and prioritization of the event production, assure the book-keeping of all the processing requests placed by the physics analysis groups, and interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) was developed and put in production in 2013. McM is based on recent server infrastructure technology (CherryPy + AngularJS) and relies on a CouchDB database back-end. This contribution covers one and a half years of operational experience managing samples of simulated events for CMS, the evolution of McM's functionalities, and the extension of its capability to monitor the status and advancement of the event production.

  20. Markov Chain Monte Carlo and Irreversibility

    NASA Astrophysics Data System (ADS)

    Ottobre, Michela

    2016-06-01

    Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and we discuss some analytical methods to approach the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes; however, for computational purposes one needs to discretize such dynamics. It is well known that the resulting discretized chain will not, in general, retain all the good properties of the process from which it is obtained. In particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore, iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.
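
    The sketch below shows one simple nonreversible chain in the spirit of the lifting idea (a Gustafson-type guided walk, not the SOL-HMC algorithm of the paper): the state is augmented with a direction variable, moves continue ballistically while accepted, and a rejection flips the direction instead of discarding the proposal.

```python
import numpy as np

rng = np.random.default_rng(3)

# Lifted (nonreversible) Metropolis on a 1-D lattice target: the augmented
# state (x, d) keeps moving in direction d while proposals are accepted and
# reverses d on rejection, suppressing diffusive back-and-forth behavior.
x_grid = np.arange(-50, 51)
log_pi = -0.5 * (x_grid / 10.0) ** 2       # discretized Gaussian target

def lifted_chain(n_steps):
    i, d = 50, 1                           # start at x = 0, moving right
    samples = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        j = i + d
        if 0 <= j < x_grid.size and np.log(rng.random()) < log_pi[j] - log_pi[i]:
            i = j                          # accepted: keep ballistic motion
        else:
            d = -d                         # rejected: reverse direction
        samples[t] = x_grid[i]
    return samples

s = lifted_chain(200000)
print(f"mean = {s.mean():.3f}, std = {s.std():.3f} (target std = 10)")
```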

  1. Commensurabilities between ETNOs: a Monte Carlo survey

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-07-01

    Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nine hypothesis; in particular, a number of objects may be trapped in the 5:3 and 3:1 mean motion resonances with a putative Planet Nine with semimajor axis ~700 au.
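
    A hedged sketch of such a test: sample the semimajor axes of a pair of objects within their quoted uncertainties and count how often the implied period ratio lands near a small-integer commensurability. The orbit values, uncertainties, and tolerance below are placeholders, not the ETNO solutions used in the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

# Placeholder orbit solutions: semimajor axis a and 1-sigma uncertainty [au]
a1, da1 = 226.0, 5.0                       # object 1
a2, da2 = 327.0, 8.0                       # object 2
ratios = [(3, 1), (5, 3), (2, 1), (5, 2), (3, 2)]

n = 100000
s1 = rng.normal(a1, da1, n)                # Monte Carlo draws within errors
s2 = rng.normal(a2, da2, n)
period_ratio = (s2 / s1) ** 1.5            # Kepler's third law, P ~ a^(3/2)

for p, q in ratios:
    frac = np.mean(np.abs(period_ratio - p / q) < 0.05)
    print(f"{p}:{q} within 0.05 in {100 * frac:.1f}% of draws")
```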

  2. Finding Planet Nine: a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-06-01

    Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30°, and an argument of perihelion of 150°. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study its visibility statistically in terms of various parameters, focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one of four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node, and two of them are compatible with an apsidal anti-alignment scenario. In addition, after studying the current statistics of ETNOs, a cautionary note on the robustness of the perihelia clustering is presented.

  3. The INTEGRAL scatterometer SPI

    NASA Technical Reports Server (NTRS)

    Mandrou, P.; Vedrenne, G.; Jean, P.; Kandel, B.; vonBallmoos, P.; Albernhe, F.; Lichti, G.; Schoenfelder, V.; Diehl, R.; Georgii, R.; Teegarden, B.; Mandrou, P.; Vedrenne, G.; Kirchner, T.; Durouchoux, P.; Cordier, B.; Diallo, N.; Sanchez, F.; Payne, B.; Leleux, P.; Caraveo, P.; Matteson, J.; Slassi-Sennon, S.; Lin, R. P.; Skinner, G.

    1997-01-01

    The INTErnational Gamma Ray Astrophysics Laboratory (INTEGRAL) mission's onboard spectrometer, the INTEGRAL spectrometer (SPI), is described. The SPI constitutes one of the four main mission instruments. It is optimized for detailed measurements of gamma ray lines and for the mapping of diffuse sources. It combines a coded aperture mask with an array of large volume, high purity germanium detectors. The detectors make precise measurements of the gamma ray energies over the 20 keV to 8 MeV range. The instrument's characteristics are described and the Monte Carlo simulation of its performance is outlined. It will be possible to study gamma ray emission from compact objects or line profiles with a high energy resolution and a high angular resolution.

  4. Exponential Monte Carlo Convergence on a Homogeneous Right Parallelepiped Using the Reduced Source Method with Legendre Expansion

    SciTech Connect

    Favorite, J.A.

    1999-09-01

    In previous work, exponential convergence of Monte Carlo solutions using the reduced source method with Legendre expansion has been achieved only in one-dimensional rod and slab geometries. In this paper, the method is applied to three-dimensional (right parallelepiped) problems, with the results providing evidence of success. As implemented in this paper, the method approximates an angular integral of the flux with a discrete-ordinates numerical quadrature. It is possible that this approximation introduces an inconsistency that must be addressed.

  5. Monte Carlo Simulation in the Optimization of a Free-Air Ionization Chamber for Dosimetric Control in Medical Digital Radiography

    NASA Astrophysics Data System (ADS)

    Leyva, A.; Piñera, I.; Montaño, L. M.; Abreu, Y.; Cruz, C. M.

    2008-08-01

    During the earliest tests of a free-air ionization chamber, a poor response to the X-rays emitted by several sources was observed. Monte Carlo simulation of X-ray transport in matter was therefore employed to evaluate the chamber's behavior as an X-ray detector. The dependence of the deposited photon energy on depth, and its integral over the whole active volume, were calculated. The results reveal that the designed device geometry is feasible and can be optimized.

  6. Extension of the fully coupled Monte Carlo/S sub N response matrix method to problems including upscatter and fission

    SciTech Connect

    Baker, R.S.; Filippone, W.F. . Dept. of Nuclear and Energy Engineering); Alcouffe, R.E. )

    1991-01-01

    The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S{sub N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S{sub N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The fully coupled Monte Carlo/S{sub N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S{sub N} calculation is to be performed. The Monte Carlo and S{sub N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S{sub N} is well suited for by itself. The fully coupled Monte Carlo/S{sub N} method has been implemented in the S{sub N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S{sub N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S{sub N} calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor of five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems. 8 refs., 8 figs., 1 tab.

  7. Monte Carlo techniques for analyzing deep penetration problems

    SciTech Connect

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs.
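
    A minimal sketch of two of the reviewed techniques, geometric splitting and Russian roulette, on a 1-D absorbing/scattering slab: particles split when they cross importance surfaces toward the deep boundary and are rouletted when they retreat, keeping the transmission tally unbiased. All cross sections and surface counts are illustrative.

```python
import math
import random

random.seed(5)

SLAB = 10.0        # slab thickness in mean free paths (illustrative)
N_SURF = 5         # equally spaced importance surfaces; importance doubles each
P_SCATTER = 0.3    # per-collision scattering probability (otherwise absorbed)

def transmission(n_source):
    score = 0.0
    bank = [(0.0, 1.0, 1.0, 0) for _ in range(n_source)]  # x, mu, weight, level
    while bank:
        x, mu, w, lvl = bank.pop()
        while True:
            x += mu * -math.log(random.random())          # sample free flight
            if x >= SLAB:                                 # transmitted: tally
                score += w
                break
            if x <= 0.0:                                  # leaked out the front
                break
            new_lvl = min(int(x / SLAB * N_SURF), N_SURF - 1)
            if new_lvl > lvl:                             # deeper: split
                copies = 2 ** (new_lvl - lvl)
                for _ in range(copies - 1):
                    bank.append((x, mu, w / copies, new_lvl))
                w /= copies
            elif new_lvl < lvl:                           # shallower: roulette
                p_survive = 2.0 ** (new_lvl - lvl)
                if random.random() < p_survive:
                    w /= p_survive                        # unbiased weight boost
                else:
                    break
            lvl = new_lvl
            if random.random() > P_SCATTER:               # collision: absorbed
                break
            mu = 2.0 * random.random() - 1.0              # isotropic scatter
    return score

n = 20000
print(f"transmission estimate = {transmission(n) / n:.3e}")
```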

  8. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    SciTech Connect

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.

  9. OBJECT KINETIC MONTE CARLO SIMULATIONS OF CASCADE ANNEALING IN TUNGSTEN

    SciTech Connect

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.

    2014-03-31

    The objective of this work is to study the annealing of primary cascade damage created by primary knock-on atoms (PKAs) of various energies, at various temperatures in bulk tungsten using the object kinetic Monte Carlo (OKMC) method.

  10. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Analysis and modeling of nuclear reactors can overload the memory of a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  11. COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT

    SciTech Connect

    W. R. MARTIN; F. B. BROWN

    2001-03-01

    Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an 'exact' method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.

  12. Enhancements in Continuous-Energy Monte Carlo Capabilities in SCALE

    SciTech Connect

    Bekar, Kursat B; Celik, Cihangir; Wiarda, Dorothea; Peplow, Douglas E.; Rearden, Bradley T; Dunn, Michael E

    2013-01-01

    Monte Carlo tools in SCALE are commonly used in criticality safety calculations as well as sensitivity and uncertainty analysis, depletion, and criticality alarm system analyses. Recent improvements in the continuous-energy data generated by the AMPX code system and significant advancements in the continuous-energy treatment in the KENO Monte Carlo eigenvalue codes facilitate the use of SCALE Monte Carlo codes to model geometrically complex systems with enhanced solution fidelity. The addition of continuous-energy treatment to the SCALE Monaco code, which can be used with automatic variance reduction in the hybrid MAVRIC sequence, provides significant enhancements, especially for criticality alarm system modeling. This paper describes some of the advancements in continuous-energy Monte Carlo codes within the SCALE code system.

  13. DPEMC: A Monte Carlo for double diffraction

    NASA Astrophysics Data System (ADS)

    Boonekamp, M.; Kúcs, T.

    2005-05-01

    We extend the POMWIG Monte Carlo generator developed by B. Cox and J. Forshaw to include new models of central production through inclusive and exclusive double Pomeron exchange in proton-proton collisions. Double photon exchange processes are described as well, both in proton-proton and heavy-ion collisions. In all contexts, various models have been implemented, allowing for comparisons and uncertainty evaluation and enabling detailed experimental simulations. Program summary: Title of program: DPEMC, version 2.4. Catalogue identifier: ADVF. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVF. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: any computer with the FORTRAN 77 compiler under the UNIX or Linux operating systems. Operating system: UNIX; Linux. Programming language used: FORTRAN 77. High-speed storage required: <25 MB. No. of lines in distributed program, including test data, etc.: 71 399. No. of bytes in distributed program, including test data, etc.: 639 950. Distribution format: tar.gz. Nature of the physical problem: Proton diffraction at hadron colliders can manifest itself in many forms, and a variety of models exist that attempt to describe it [A. Bialas, P.V. Landshoff, Phys. Lett. B 256 (1991) 540; A. Bialas, W. Szeremeta, Phys. Lett. B 296 (1992) 191; A. Bialas, R.A. Janik, Z. Phys. C 62 (1994) 487; M. Boonekamp, R. Peschanski, C. Royon, Phys. Rev. Lett. 87 (2001) 251806; Nucl. Phys. B 669 (2003) 277; R. Enberg, G. Ingelman, A. Kissavos, N. Timneanu, Phys. Rev. Lett. 89 (2002) 081801; R. Enberg, G. Ingelman, L. Motyka, Phys. Lett. B 524 (2002) 273; R. Enberg, G. Ingelman, N. Timneanu, Phys. Rev. D 67 (2003) 011301; B. Cox, J. Forshaw, Comput. Phys. Comm. 144 (2002) 104; B. Cox, J. Forshaw, B. Heinemann, Phys. Lett. B 540 (2002) 26; V. Khoze, A. Martin, M. Ryskin, Phys. Lett. B 401 (1997) 330; Eur. Phys. J. C 14 (2000) 525; Eur. Phys. J. C 19 (2001) 477; Erratum, Eur. Phys. J. C 20 (2001) 599; Eur

  14. Monte Carlo Hybrid Applied to Binary Stochastic Mixtures

    2008-08-11

    The purpose of this set of codes is to use an inexpensive, approximate deterministic flux distribution to generate weight windows, which will then be used to bound particle weights for the Monte Carlo code run. The process is not automated; the user must run the deterministic code and use the output file as a command-line argument for the Monte Carlo code. Two sets of text input files are included as test problems/templates.
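
    The workflow amounts to the few lines sketched below: take a coarse deterministic flux, set weight-window centers proportional to 1/flux, and let the Monte Carlo run split above the window and roulette below it. The exponential flux shape stands in for the deterministic output file, and the window width of 2x around the center is an assumption, not this package's convention.

```python
import numpy as np

# Stand-in for the deterministic code's output: a deep-penetration falloff
n_cells = 50
flux = np.exp(-np.linspace(0.0, 10.0, n_cells))

centers = flux[0] / flux                # normalize the source cell to weight 1
windows = np.column_stack([centers / 2.0, centers * 2.0])   # [lower, upper]

# In the Monte Carlo run: split a particle whose weight exceeds the upper
# bound of its cell's window, and roulette one that falls below the lower
# bound, so weights track 1/flux and deep-penetration variance stays bounded.
for cell in (0, 24, 49):
    lo, hi = windows[cell]
    print(f"cell {cell:2d}: window = [{lo:.3e}, {hi:.3e}]")
```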

  15. A Particle Population Control Method for Dynamic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Sweezy, Jeremy; Nolen, Steve; Adams, Terry; Zukaitis, Anthony

    2014-06-01

    A general particle population control method has been derived from splitting and Russian roulette for dynamic Monte Carlo particle transport. A well-known particle population control technique, the particle population comb, has been shown to be a special case of this general method. This general method has been incorporated in Los Alamos National Laboratory's Monte Carlo Application Toolkit (MCATK), and examples of its use are shown for both super-critical and sub-critical systems.
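
    A minimal sketch of the comb itself, in its usual formulation: lay M evenly spaced teeth (with one random offset) across the cumulative weight of the census population, and keep the particle under each tooth with weight total/M, conserving total weight exactly. This is a generic illustration, not the MCATK implementation.

```python
import random

random.seed(9)

def comb(particles, m):
    """particles: list of (state, weight); returns m particles of equal weight."""
    total = sum(w for _, w in particles)
    spacing = total / m
    offset = random.random() * spacing        # one random number for the comb
    out, cumulative, tooth = [], 0.0, 0
    for state, w in particles:
        cumulative += w
        # every tooth falling inside this particle's weight interval keeps it
        while tooth < m and offset + tooth * spacing < cumulative:
            out.append((state, spacing))      # survivor weight = total / m
            tooth += 1
    return out

census = [(i, random.random()) for i in range(100000)]
controlled = comb(census, 10000)
print(len(controlled), sum(w for _, w in controlled))  # total weight conserved
```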

  16. Shift: A Massively Parallel Monte Carlo Radiation Transport Package

    SciTech Connect

    Pandya, Tara M; Johnson, Seth R; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P

    2015-01-01

    This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.

  17. Monte Carlo methods and applications in nuclear physics

    SciTech Connect

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

  18. Development of Monte Carlo Capability for Orion Parachute Simulations

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. The Monte Carlo tool described in
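
    The dispersion machinery can be sketched independently of any particular simulation: each rule perturbs one nominal input, the dispersed set drives the core simulation, and the outputs are collected for statistics. The rule set and the closed-form rate-of-descent stand-in below are hypothetical, not the DSS/DSSA/DTVSim models.

```python
import random
import statistics

random.seed(2024)

# Nominal inputs and analyst-defined dispersion rules (all values invented)
nominal = {"mass_kg": 9500.0, "drag_area_m2": 1100.0, "wind_ms": 0.0}

dispersion_rules = {
    "mass_kg":      lambda v: random.gauss(v, 100.0),        # normal, 1-sigma
    "drag_area_m2": lambda v: v * random.uniform(0.95, 1.05),# uniform scale
    "wind_ms":      lambda v: random.gauss(v, 3.0),
}

def core_simulation(inputs):
    """Stand-in for the core simulation: steady-state rate of descent [m/s]."""
    rho = 1.225                        # sea-level air density [kg/m^3]
    return (2 * inputs["mass_kg"] * 9.81 / (rho * inputs["drag_area_m2"])) ** 0.5

results = []
for _ in range(5000):
    dispersed = {k: rule(nominal[k]) for k, rule in dispersion_rules.items()}
    results.append(core_simulation(dispersed))

print(f"rate of descent: mean = {statistics.mean(results):.2f} m/s, "
      f"p99 = {sorted(results)[int(0.99 * len(results))]:.2f} m/s")
```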

  19. de Finetti Priors using Markov chain Monte Carlo computations

    PubMed Central

    Bacallado, Sergio; Diaconis, Persi; Holmes, Susan

    2015-01-01

    Recent advances in Monte Carlo methods allow us to revisit work by de Finetti, who suggested the use of approximate exchangeability in the analyses of contingency tables. This paper gives examples of computational implementations using Metropolis-Hastings, Langevin and Hamiltonian Monte Carlo to compute posterior distributions for test statistics relevant for testing independence, reversible or three-way models for discrete exponential families using polynomial priors and Gröbner bases. PMID:26412947

  20. Monte Carlo study of microdosimetric diamond detectors

    NASA Astrophysics Data System (ADS)

    Solevi, Paola; Magrin, Giulio; Moro, Davide; Mayer, Ramona

    2015-09-01

    Ion-beam therapy provides a high dose conformity and increased radiobiological effectiveness with respect to conventional radiation-therapy. Strict constraints on the maximum uncertainty on the biological weighted dose and consequently on the biological weighting factor require the determination of the radiation quality, defined as the types and energy spectra of the radiation at a specific point. However the experimental determination of radiation quality, in particular for an internal target, is not simple and the features of ion interactions and treatment delivery require dedicated and optimized detectors. Recently chemical vapor deposition (CVD) diamond detectors have been suggested as ion-beam therapy microdosimeters. Diamond detectors can be manufactured with small cross sections and thin shapes, ideal to cope with the high fluence rate. However the sensitive volume of solid state detectors significantly deviates from conventional microdosimeters, with a diameter that can be up to 1000 times the height. This difference requires a redefinition of the concept of sensitive thickness and a deep study of the secondary to primary radiation, of the wall effects and of the impact of the orientation of the detector with respect to the radiation field. The present work intends to study through Monte Carlo simulations the impact of the detector geometry on the determination of radiation quality quantities, in particular on the relative contribution of primary and secondary radiation. The dependence of microdosimetric quantities such as the unrestricted linear energy L and the lineal energy y are investigated for different detector cross sections, by varying the particle type (carbon ions and protons) and its energy.

  1. Monte Carlo study of microdosimetric diamond detectors.

    PubMed

    Solevi, Paola; Magrin, Giulio; Moro, Davide; Mayer, Ramona

    2015-09-21

    Ion-beam therapy provides a high dose conformity and increased radiobiological effectiveness with respect to conventional radiation-therapy. Strict constraints on the maximum uncertainty on the biological weighted dose and consequently on the biological weighting factor require the determination of the radiation quality, defined as the types and energy spectra of the radiation at a specific point. However the experimental determination of radiation quality, in particular for an internal target, is not simple and the features of ion interactions and treatment delivery require dedicated and optimized detectors. Recently chemical vapor deposition (CVD) diamond detectors have been suggested as ion-beam therapy microdosimeters. Diamond detectors can be manufactured with small cross sections and thin shapes, ideal to cope with the high fluence rate. However the sensitive volume of solid state detectors significantly deviates from conventional microdosimeters, with a diameter that can be up to 1000 times the height. This difference requires a redefinition of the concept of sensitive thickness and a deep study of the secondary to primary radiation, of the wall effects and of the impact of the orientation of the detector with respect to the radiation field. The present work intends to study through Monte Carlo simulations the impact of the detector geometry on the determination of radiation quality quantities, in particular on the relative contribution of primary and secondary radiation. The dependence of microdosimetric quantities such as the unrestricted linear energy L and the lineal energy y are investigated for different detector cross sections, by varying the particle type (carbon ions and protons) and its energy. PMID:26309235

  2. Monte Carlo simulation of large electron fields.

    PubMed

    Faddegon, Bruce A; Perl, Joseph; Asai, Makoto

    2008-03-01

    Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different 'physics lists,' were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the six electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the build-up region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy.

  3. Monte Carlo simulation of large electron fields

    PubMed Central

    Faddegon, Bruce A; Perl, Joseph; Asai, Makoto

    2010-01-01

    Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different “physics lists,” were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the 6 electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the buildup region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy. PMID:18296775

  4. Monte Carlo simulations for spinodal decomposition

    SciTech Connect

    Sander, E.; Wanner, T.

    1999-06-01

    This paper addresses the phenomenon of spinodal decomposition for the Cahn-Hilliard equation. Namely, the authors are interested in why most solutions to the Cahn-Hilliard equation which start near a homogeneous equilibrium u_0 ≡ μ in the spinodal interval exhibit phase separation with a characteristic wavelength when exiting a ball of radius R in a Hilbert space centered at u_0. There are two mathematical explanations for spinodal decomposition, due to Grant and to Maier-Paape and Wanner. In this paper, the authors numerically compare these two mathematical approaches. In fact, they are able to synthesize the understanding they gain from the numerics with the approach of Maier-Paape and Wanner, leading to a better understanding of the underlying mechanism for this behavior. With this new approach, they can explain spinodal decomposition for a longer time and larger radius than either of the previous two approaches. A rigorous mathematical explanation is contained in a separate paper. The approach is to use Monte Carlo simulations to examine the dependence of R, the radius to which spinodal decomposition occurs, as a function of the parameter ε of the governing equation. The authors give a description of the dominating regions on the surface of the ball by estimating certain densities of the distributions of the exit points. They observe, and can show rigorously, that the behavior of most solutions originating near the equilibrium is determined completely by the linearization for an unexpectedly long time. They explain the mechanism for this unexpectedly linear behavior, and show that for some exceptional solutions this cannot be observed. They also describe the dynamics of these exceptional solutions.

  5. Monte-Carlo simulation of Callisto's exosphere

    NASA Astrophysics Data System (ADS)

    Vorburger, A.; Wurz, P.; Lammer, H.; Barabash, S.; Mousis, O.

    2015-12-01

    We model Callisto's exosphere based on its ice and non-ice surfaces using a Monte Carlo exosphere model. For the ice component we implement two putative compositions that have been computed from two possible extreme formation scenarios of the satellite. One composition represents the oxidizing state and is based on the assumption that the building blocks of Callisto were formed in the protosolar nebula, and the other represents the reducing state of the gas, based on the assumption that the satellite accreted from solids condensed in the jovian sub-nebula. For the non-ice component we implemented the compositions of typical CI as well as L type chondrites. Both chondrite types have been suggested to represent Callisto's non-ice composition best. As release processes we consider surface sublimation, ion sputtering and photon-stimulated desorption. Particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction, return to the surface, are ionized, or are fragmented. Our density profiles show that whereas the sublimated species dominate close to the surface on the sunlit side, their density profiles (with the exception of H and H2) decrease much more rapidly than those of the sputtered particles. The Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP), will investigate Callisto's exosphere during the JUICE mission. Our simulations show that NIM will be able to detect sublimated and sputtered particles from both the ice and non-ice surface. NIM's measured chemical composition will allow us to distinguish between different formation scenarios.

  6. Monte Carlo Volcano Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single-force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.

  7. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    David Ceperley

    2011-03-02

    CUDA GPU platform. We restructured the CPU algorithms to express additional parallelism, minimize GPU-CPU communication, and efficiently utilize the GPU memory hierarchy. Using mixed precision on GT200 GPUs and MPI for intercommunication and load balancing, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core Xeon CPUs alone, while reproducing the double-precision CPU results within statistical error. We developed an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and used it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We computed the static contribution to the free energy with the QMC method and obtained the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We computed the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy. We compared experimental and theoretical results on the momentum distribution and the quasiparticle renormalization factor in sodium. From an x-ray Compton-profile measurement of the valence-electron momentum density, we derived its discontinuity at the Fermi wavevector finding an accurate measure of the renormalization factor that we compared with quantum-Monte-Carlo and G0W0 calculations performed both on crystalline sodium and on the homogeneous electron gas. Our calculated results are in good agreement with the experiment. We have been studying the heat of formation for various Kubas complexes of molecular

  8. Perturbation Monte Carlo methods for tissue structure alterations.

    PubMed

    Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Spanier, Jerome

    2013-01-01

    This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers; whole nuclei, organelles such as lysosomes and mitochondria, and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied, the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ∼15-25% of the scattering parameters.
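
    For the already-established scattering-coefficient case, the reweighting can be sketched as follows: each baseline photon path with k scattering events and total path length L is rescored for a perturbed coefficient via the likelihood ratio w = (mu_s'/mu_s)^k * exp(-(mu_s' - mu_s) L). The 1-D geometry and coefficients below are invented for illustration; the paper's phase-function extension is not shown.

```python
import math
import random

random.seed(17)

MU_S, MU_A, SLAB = 5.0, 0.1, 1.0       # baseline coefficients [1/mm], depth [mm]

def baseline_photon():
    """Track one photon; return (transmitted?, n_scatters, path_length)."""
    x, mu, k, L = 0.0, 1.0, 0, 0.0
    while True:
        step = -math.log(random.random()) / (MU_S + MU_A)   # free flight
        x += mu * step
        L += step
        if x >= SLAB:
            return True, k, L          # transmitted through the slab
        if x < 0.0:
            return False, k, L         # backscattered out the front
        if random.random() < MU_A / (MU_S + MU_A):
            return False, k, L         # absorbed
        k += 1
        mu = 2.0 * random.random() - 1.0                    # isotropic scatter

# ONE baseline run, reused for every perturbed coefficient afterwards
n = 200000
paths = [baseline_photon() for _ in range(n)]
base = sum(1 for hit, _, _ in paths if hit) / n

mu_s_new = 5.5                         # +10% perturbation of mu_s
w = [(mu_s_new / MU_S) ** k * math.exp(-(mu_s_new - MU_S) * L)
     for hit, k, L in paths if hit]
print(f"baseline T = {base:.4f}, perturbed T = {sum(w) / n:.4f}")
```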

  9. Monte Carlo simulation of energy-dispersive x-ray fluorescence and applications

    NASA Astrophysics Data System (ADS)

    Li, Fusheng

    Four key components with regard to Monte Carlo Library Least Squares (MCLLS) have been developed by the author. These include: a comprehensive and accurate Monte Carlo simulation code - CEARXRF5 with Differential Operators (DO) and coincidence sampling, a Detector Response Function (DRF), an integrated Monte Carlo - Library Least-Squares (MCLLS) Graphical User Interface (GUI) visualization system (MCLLSPro) and a new reproducible and flexible benchmark experiment setup. All these developments or upgrades enable the MCLLS approach to be a useful and powerful tool for a tremendous variety of elemental analysis applications. CEARXRF, a comprehensive and accurate Monte Carlo code for simulating the total and individual library spectral responses of all elements, has recently been upgraded to version 5 by the author. The new version has several key improvements: an input file format fully compatible with MCNP5, a new efficient general geometry tracking code, versatile source definitions, various variance reduction techniques (e.g. weight window mesh and splitting, stratified sampling, etc.), a new cross section data storage and accessing method which improves the simulation speed by a factor of four together with new cross section data, upgraded differential operators (DO) calculation capability, and an updated coincidence sampling scheme which includes K-L and L-L coincidence X-rays, while keeping all the capabilities of the previous version. The new Differential Operators method is powerful for measurement sensitivity studies and system optimization. For our Monte Carlo EDXRF elemental analysis system, it becomes an important technique for quantifying the matrix effect in near real time when combined with the MCLLS approach. An integrated visualization GUI system has been developed by the author to perform elemental analysis using the iterated Library Least-Squares method for various samples when an initial guess is provided. This software was built on the Borland C++ Builder

  10. Verification and Validation of MERCURY: A Modern, Monte Carlo Particle Transport Code

    SciTech Connect

    Procassini, R J; Cullen, D E; Greenman, G M; Hagmann, C A

    2004-12-09

    Verification and Validation (V&V) is a critical phase in the development cycle of any scientific code. The aim of the V&V process is to determine whether or not the code fulfills and complies with the requirements that were defined prior to the start of the development process. While code V&V can take many forms, this paper concentrates on validation of the results obtained from a modern code against those produced by a validated, legacy code. In particular, the neutron transport capabilities of the modern Monte Carlo code MERCURY are validated against those in the legacy Monte Carlo code TART. The results from each code are compared for a series of basic transport and criticality calculations which are designed to check a variety of code modules. These include the definition of the problem geometry, particle tracking, collisional kinematics, sampling of secondary particle distributions, and nuclear data. The metrics that form the basis for comparison of the codes include both integral quantities and particle spectra. The use of integral results, such as eigenvalues obtained from criticality calculations, is shown to be necessary, but not sufficient, for a comprehensive validation of the code. This process has uncovered problems in both the transport code and the nuclear data processing codes which have since been rectified.

  11. Radiative transfer and spectroscopic databases: A line-sampling Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Galtier, Mathieu; Blanco, Stéphane; Dauchet, Jérémi; El Hafi, Mouna; Eymet, Vincent; Fournier, Richard; Roger, Maxime; Spiesser, Christophe; Terrée, Guillaume

    2016-03-01

    Dealing with molecular-state transitions for radiative transfer purposes involves two successive steps that both reach the complexity level at which physicists start thinking about statistical approaches: (1) constructing line-shaped absorption spectra as the result of very numerous state-transitions, (2) integrating over optical-path domains. For the first time, we show here how these steps can be addressed simultaneously using the null-collision concept. This opens the door to the design of Monte Carlo codes directly estimating radiative transfer observables from spectroscopic databases. The intermediate step of producing accurate high-resolution absorption spectra is no longer required. A Monte Carlo algorithm is proposed and applied to six one-dimensional test cases. It allows the computation of spectrally integrated intensities (over 25 cm-1 bands or the full IR range) in a few seconds, regardless of the retained database and line model. But free parameters need to be selected and they impact the convergence. A first possible selection is provided in full detail. We observe that this selection is highly satisfactory for quite distinct atmospheric and combustion configurations, but a more systematic exploration is still in progress.
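
    The null-collision idea can be sketched in isolation: sample flight distances against a constant majorant k_hat that bounds the true coefficient k(x), then accept each tentative collision with probability k(x)/k_hat and treat the remainder as null events. The sinusoidal profile below merely mimics spectral lines; the paper's coupling to spectroscopic databases is not reproduced.

```python
import math
import random

random.seed(23)

K_HAT = 2.0                                  # majorant extinction [1/m]

def k(x):
    """Spatially varying coefficient with k(x) <= K_HAT everywhere."""
    return 1.0 + math.sin(6.0 * x) ** 2      # peaks mimic spectral lines

def free_path():
    """Null-collision (Woodcock) sampling of one collision distance."""
    x = 0.0
    while True:
        x += -math.log(random.random()) / K_HAT   # tentative collision
        if random.random() < k(x) / K_HAT:        # real collision: accept
            return x
        # otherwise it was a null collision: keep flying, direction unchanged

n = 200000
mean_path = sum(free_path() for _ in range(n)) / n
print(f"mean free path = {mean_path:.4f} m")
```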

  14. Fast Monte Carlo, slow protein kinetics and perfect loop closure

    NASA Astrophysics Data System (ADS)

    Wedemeyer, William Joseph

    This thesis presents experimental studies of proteins and computational methods which may help in simulations of proteins. The experimental chapters focus on the folding and unfolding of bovine pancreatic ribonuclease A. Methods are developed for tracking the cis-trans isomerization of individual prolines under folding and unfolding conditions, and for identifying critical folding structures by assessing the effects of individual incorrect X-Pro isomers on the conformational folding. The major β-hairpin region is identified as more critical than the C-terminal hydrophobic core. Site-directed mutagenesis of three nearby tyrosines to phenylalanine indicates that tyrosyl hydrogen bonds are essential to rapid conformational folding. Another experimental chapter presents an analytic solution of the kinetics of competitive binding, which is applied to estimating the association and dissociation rate constants of hirudin and thrombin. An extension of this method is proposed to obtain kinetic rate constants for the conformational folding and unfolding of individual parts of a protein. The analytic solution is found to be roughly one-hundred-fold more efficient than the best numerical integrators. The theoretical chapters present methods potentially useful in protein simulations. The loop closure problem is solved geometrically, allowing the protein to be broken into segments which move quasi-independently. Two bootstrap Monte Carlo methods are developed for sampling functions that are characterized by high anisotropy, e.g. long, narrow valleys. Two chapters are devoted to smoothing methods; the first develops a method for exploiting smoothing to evaluate the energy in order N (not N^2) time, while the second examines the limitations of one smoothing method, the Diffusion Equation Method, and suggests improvements to its smoothing transformation and reversing procedure. One chapter develops a highly optimized simulation package for lattice heteropolymers by careful choice

  15. Reverse Monte Carlo ray-tracing for radiative heat transfer in combustion systems

    NASA Astrophysics Data System (ADS)

    Sun, Xiaojing

    Radiative heat transfer is a dominant heat transfer phenomenon in high-temperature systems. With the rapid development of massive supercomputers, the Monte Carlo ray tracing (MCRT) method is starting to see application in combustion systems. This research investigates whether Monte Carlo ray tracing can offer more accurate and efficient calculations than the discrete ordinates method (DOM). The Monte Carlo ray tracing method is a statistical method that traces the history of a bundle of rays. It is known for solving radiative heat transfer with almost no approximation, and it can handle nonisotropic scattering and nongray gas mixtures with relative ease compared to conventional methods such as DOM and the spherical harmonics method. There are two schemes in the Monte Carlo ray tracing method: forward and backward/reverse. Case studies and the governing equations demonstrate the advantages of the reverse Monte Carlo ray tracing (RMCRT) method. RMCRT can be easily implemented for domain-decomposition parallelism. In this dissertation, different efficiency improvement techniques for RMCRT are introduced and implemented: the random number generator, stratified sampling, ray-surface intersection calculation, Russian roulette, and importance sampling. There are two major modules in solving radiative heat transfer problems: the RMCRT RTE solver and the optical property models. RMCRT is first fully verified in gray, scattering, absorbing and emitting media with black/nonblack, diffuse/nondiffuse bounded surface problems. Sensitivity analysis is carried out with regard to the ray numbers, the mesh resolution of the computational domain, the optical thickness of the media, and the effects of variance reduction techniques (stratified sampling, Russian roulette). Results are compared with either analytical solutions or benchmark results. The efficiency (the product of error and computation time) of RMCRT has been compared to that of DOM, and the results suggest great potential for RMCRT's application
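
    One of the variance reduction techniques listed, Russian roulette, is simple enough to sketch: low-weight rays are killed probabilistically and survivors are reweighted so the estimator stays unbiased. A minimal Python illustration with invented thresholds and attenuation, unrelated to the RMCRT code itself:

      # Russian roulette: when a ray's weight drops below W_MIN, kill it with
      # probability 1 - P_SURVIVE, otherwise boost the weight by 1/P_SURVIVE.
      # The boost preserves the expected weight, keeping the tally unbiased.
      import random

      W_MIN, P_SURVIVE = 0.01, 0.5

      def roulette(weight):
          """Return the (possibly boosted) weight, or None if the ray dies."""
          if weight >= W_MIN:
              return weight
          if random.random() < P_SURVIVE:
              return weight / P_SURVIVE
          return None

      # Example: a ray attenuated each step; roulette ends it in finite time.
      w, steps = 1.0, 0
      while w is not None and steps < 10_000:
          w = roulette(w * 0.7)        # 0.7 = per-step attenuation (made up)
          steps += 1
      print("ray terminated after", steps, "steps")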

  16. Discrete Coherent State Path Integrals

    NASA Astrophysics Data System (ADS)

    Marchioro, Thomas L., II

    1990-01-01

    The quantum theory provides a fundamental understanding of the physical world; however, as the number of degrees of freedom rises, the information required to specify quantum wavefunctions grows geometrically. Because basis set expansions mirror this geometric growth, a strict practical limit on quantum mechanics as a numerical tool arises, specifically, three degrees of freedom or fewer. Recent progress has been made utilizing Feynman's Path Integral formalism to bypass this geometric growth and instead calculate time-dependent correlation functions directly. The solution of the Schrödinger equation is converted into a large dimensional (formally infinite) integration, which can then be attacked with Monte Carlo techniques. To date, work in this area has concentrated on developing sophisticated mathematical algorithms for evaluating the highly oscillatory integrands occurring in Feynman Path Integrals. In an alternative approach, this work demonstrates two formulations of quantum dynamics for which the number of mathematical operations does not scale geometrically. Both methods utilize the Coherent State basis of quantum mechanics. First, a localized coherent state basis set expansion and an approximate short time propagator are developed. Iterations of the short time propagator lead to the full quantum dynamics if the coherent state basis is sufficiently dense along the classical phase space path of the system. Second, the coherent state path integral is examined in detail. For a common class of Hamiltonians, H = p^2/2 + V(x), the path integral is reformulated from a phase space-like expression into one depending on (q, q̇). It is demonstrated that this new path integral expression contains localized damping terms which can serve as a statistical weight for Monte Carlo evaluation of the integral--a process which scales approximately linearly with the number of degrees of freedom. Corrections to the traditional coherent state path integral, inspired by a

  17. Finding organic vapors - a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Vuollekoski, Henri; Boy, Michael; Kerminen, Veli-Matti; Kulmala, Markku

    2010-05-01

    drawbacks in accuracy, the inability to find diurnal variation and the lack of size resolution. Here, we aim to shed some light onto the problem by applying an ad hoc Monte Carlo algorithm to a well established aerosol dynamical model, the University of Helsinki Multicomponent Aerosol model (UHMA). By performing a side-by-side comparison with measurement data within the algorithm, this approach has the significant advantage of decreasing the amount of manual labor. But more importantly, by basing the comparison on particle number size distribution data - a quantity that can be quite reliably measured - the accuracy of the results is good.

  18. Probability Forecasting Using Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Duncan, M.; Frisbee, J.; Wysack, J.

    2014-09-01

    Space Situational Awareness (SSA) is defined as the knowledge and characterization of all aspects of space. SSA is now a fundamental and critical component of space operations. Increased dependence on our space assets has in turn led to a greater need for accurate, near real-time knowledge of all space activities. With the growth of the orbital debris population, satellite operators are performing collision avoidance maneuvers more frequently. Frequent maneuver execution expends fuel and reduces the operational lifetime of the spacecraft. Thus new, more sophisticated collision threat characterization methods must be implemented. The collision probability metric is used operationally to quantify the collision risk. The collision probability is typically calculated days into the future, so that high-risk and potentially high-risk conjunction events are identified early enough to develop an appropriate course of action. As the time horizon to the conjunction event is reduced, the collision probability changes. A significant change in the collision probability will change the satellite mission stakeholder's course of action. So constructing a method for estimating how the collision probability will evolve improves operations by providing satellite operators with a new piece of information, namely an estimate or 'forecast' of how the risk will change as the time to the event is reduced. Collision probability forecasting is a predictive process where the future risk of a conjunction event is estimated. The method utilizes a Monte Carlo simulation that produces a likelihood distribution for a given collision threshold. Using known state and state uncertainty information, the simulation generates a set of possible trajectories for a given space object pair. Each new trajectory produces a unique event geometry at the time of close approach. Given state uncertainty information for both objects, a collision probability value can be computed for every trial. This yields a
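
    A minimal sketch of the Monte Carlo step described here: sample miss vectors at closest approach from an assumed Gaussian relative-state uncertainty and count the fraction of trials falling inside the combined hard-body radius. All geometry and covariances below are invented; operational tools propagate full trajectories rather than a static encounter plane:

      # Toy Monte Carlo collision probability: fraction of sampled miss
      # vectors whose magnitude is below the combined hard-body radius.
      import numpy as np

      rng = np.random.default_rng(2)

      mean_miss = np.array([120.0, 40.0])       # nominal miss vector [m]
      cov = np.array([[80.0**2, 0.0],           # encounter-plane covariance
                      [0.0,     60.0**2]])      # (hypothetical values)
      hard_body_radius = 20.0                   # combined object radius [m]

      n_trials = 1_000_000
      samples = rng.multivariate_normal(mean_miss, cov, size=n_trials)
      miss = np.linalg.norm(samples, axis=1)
      p_collision = np.mean(miss < hard_body_radius)
      print(f"collision probability ~ {p_collision:.2e}")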

  19. Coherent Scattering Imaging Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Hassan, Laila Abdulgalil Rafik

    Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing, and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal to noise ratio (SNR). Also, contrast increased as the source voltage increased. Increasing the grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to a noise level. The optimal source to sample distance was determined to be such that the source should be located at the focal distance of the grid. A carcinoma lump of 0.5x0.5x0.5 cm3 in size was detectable, which is reasonable considering the high noise due to the use of a relatively small number of incident photons for computational reasons. Further study is needed to examine the effects of breast density and breast thickness

  20. A detection method of vegetable oils in edible blended oil based on three-dimensional fluorescence spectroscopy technique.

    PubMed

    Xu, Jing; Liu, Xiao-Fei; Wang, Yu-Tian

    2016-12-01

    Edible blended vegetable oils are made from two or more refined oils. Blended oils can provide a wider range of essential fatty acids than single vegetable oils, which helps support good nutrition. The nutritional components of blended oils depend on the type and content of the vegetable oils used, and a new, more accurate method is proposed to identify and quantify the vegetable oils present using cluster analysis and a quasi-Monte Carlo integral. Three-dimensional fluorescence spectra were obtained at 250-400 nm (excitation) and 260-750 nm (emission). Mixtures of sunflower, soybean and peanut oils were used as typical examples to validate the effectiveness of the method. PMID:27374508
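
    The quasi-Monte Carlo integral mentioned here replaces pseudorandom points with a low-discrepancy sequence. A self-contained Python sketch using a 2-D Halton sequence follows; the integrand is an invented stand-in for an excitation-emission fluorescence surface, not the paper's spectra:

      # Quasi-Monte Carlo integration on the unit square with a Halton
      # sequence (bases 2 and 3), built from the van der Corput radical
      # inverse. Low-discrepancy points fill the square more evenly than
      # pseudorandom ones, typically reducing the integration error.
      import math

      def radical_inverse(i, base):
          """Van der Corput radical inverse of integer i in the given base."""
          f, r = 1.0, 0.0
          while i > 0:
              f /= base
              r += f * (i % base)
              i //= base
          return r

      def halton_2d(n):
          return [(radical_inverse(i, 2), radical_inverse(i, 3))
                  for i in range(1, n + 1)]

      def f(x, y):          # stand-in integrand on the unit square
          return math.exp(-4.0 * ((x - 0.4) ** 2 + (y - 0.6) ** 2))

      n = 4096
      qmc_estimate = sum(f(x, y) for x, y in halton_2d(n)) / n
      print("QMC estimate of the integral:", qmc_estimate)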

  1. Top Quark Mass Measurement in the Lepton + Jets Channel Using a Matrix Element Method and \\textit{in situ} Jet Energy Calibration

    SciTech Connect

    Aaltonen, T.; Alvarez Gonzalez, B.; Amerio, S.; Amidei, D.; Anastassov, A.; Annovi, A.; Antos, J.; Apollinari, G.; Appel, J.A.; Apresyan, A.; Arisawa, T.; /Waseda U. /Dubna, JINR

    2010-10-01

    A precision measurement of the top quark mass m{sub t} is obtained using a sample of t{bar t} events from p{bar p} collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration taking into account finite detector resolution and jet mass effects. The event likelihood is a function of m{sub t} and a parameter {Delta}{sub JES} used to calibrate the jet energy scale in situ. Using a total of 1087 events, a value of m{sub t} = 173.0 {+-} 1.2 GeV/c{sup 2} is measured.

  2. Influence of measurement geometry on the estimate of 131(I) activity in the thyroid: Monte Carlo simulation of a detector and a phantom.

    PubMed

    Ulanovsky, A V; Minenko, V F; Korneev, S V

    1997-01-01

    An approach for evaluating the influence of measurement geometry on estimates of 131(I) in the thyroid from measurements with survey meters was developed using Monte Carlo simulation of radiation transport in the human body and the radiation detector. The modified Monte Carlo code, EGS4, including a newly developed mathematical model of detector, thyroid gland, and neck, was used for the computations. The approach was tested by comparing calculated and measured differential and integral detector characteristics. This procedure was applied to estimate uncertainties in direct thyroid-measurement results due to geometrical errors.

  3. Influence of measurement geometry on the estimate of {sup 131}I activity in the thyroid: Monte Carlo simulation of a detector and a phantom

    SciTech Connect

    Ulanovsky, A.V.; Minenko, V.F.; Korneev, S.V.

    1997-01-01

    An approach for evaluating the influence of measurement geometry on estimates of {sup 131}I in the thyroid from measurements with survey meters was developed using Monte Carlo simulation of radiation transport in the human body and the radiation detector. The modified Monte Carlo code, EGS4, including a newly developed mathematical model of detector, thyroid gland, and neck, was used for the computations. The approach was tested by comparing calculated and measured differential and integral detector characteristics. This procedure was applied to estimate uncertainties in direct thyroid-measurement results due to geometrical errors. 14 refs., 11 figs., 4 tabs.

  4. Reliability Impact of Stockpile Aging: Stress Voiding

    SciTech Connect

    ROBINSON,DAVID G.

    1999-10-01

    The objective of this research is to statistically characterize the aging of integrated circuit interconnects. This report supersedes the stress void aging characterization presented in SAND99-0975, ''Reliability Degradation Due to Stockpile Aging,'' by the same author. The physics of stress voiding, before and after wafer processing, has recently been characterized by F. G. Yost in SAND99-0601, ''Stress Voiding during Wafer Processing''. The current effort extends this research to account for uncertainties in grain size, storage temperature, void spacing and initial residual stress, and their impact on interconnect failure after wafer processing. The sensitivity of the life estimates to these uncertainties is also investigated. Various methods for characterizing the probability of failure of a conductor line were investigated, including Latin hypercube sampling (LHS), quasi-Monte Carlo sampling (qMC), and various analytical methods such as the advanced mean value (AMV) method. The comparison was aided by the use of the Cassandra uncertainty analysis library. It was found that the only viable uncertainty analysis methods were those based on either LHS or quasi-Monte Carlo sampling. Analytical methods such as AMV could not be applied due to the nature of the stress voiding problem. The qMC method was chosen since it provided smaller estimation error for a given number of samples. The preliminary results indicate that the reliability of integrated circuits with respect to stress voiding is very sensitive to the underlying uncertainties associated with grain size and void spacing. In particular, accurate characterization of IC reliability depends heavily not only on the first and second moments of the uncertainty distribution, but more specifically on the unique form of the underlying distribution.
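
    The LHS-versus-qMC comparison can be sketched compactly, assuming SciPy's scipy.stats.qmc module (SciPy 1.7 or later); the two-input "response" below is a made-up stand-in for the stress-voiding model, not the Cassandra library or the report's physics:

      # Compare Latin hypercube and Sobol (quasi-Monte Carlo) sampling for
      # propagating two uncertain inputs through a smooth response function.
      import numpy as np
      from scipy.stats import qmc

      def response(u):
          # Hypothetical response of two uncertain inputs on [0, 1]^2,
          # e.g. scaled grain size and void spacing.
          return np.exp(-2.0 * u[:, 0]) * (1.0 + 0.5 * np.sin(6.0 * u[:, 1]))

      n = 1024
      lhs_pts = qmc.LatinHypercube(d=2, seed=0).random(n)
      sobol_pts = qmc.Sobol(d=2, scramble=True, seed=0).random_base2(m=10)

      print("LHS   mean response:", response(lhs_pts).mean())
      print("Sobol mean response:", response(sobol_pts).mean())
      # Over repeated runs the Sobol estimate typically shows the smaller
      # spread, consistent with the qMC choice reported above.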

  5. Frequency domain optical tomography using a Monte Carlo perturbation method

    NASA Astrophysics Data System (ADS)

    Yamamoto, Toshihiro; Sakamoto, Hiroki

    2016-04-01

    A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions, even for inverse problems that are ill-posed due to cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been applied mostly to the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods.
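
    The complex-valued particle weight idea can be illustrated with a toy 1-D frequency-domain transport problem: each photon carries w*exp(-i*omega*t), with t its accumulated time of flight, so the detector tally directly yields the amplitude and phase of the modulated signal. Everything below (geometry, coefficients, 1-D scattering) is invented and far simpler than the paper's 2-D heterogeneous algorithm:

      # Toy frequency-domain Monte Carlo: 1-D slab, isotropic scattering,
      # continuous absorption weighting, and a complex phase factor carried
      # by each transmitted photon.
      import numpy as np

      rng = np.random.default_rng(3)

      MU_S, MU_A = 1.0, 0.05        # scattering/absorption [1/mm] (made up)
      C = 0.214                     # light speed in tissue [mm/ps] (approx.)
      OMEGA = 2 * np.pi * 0.1       # 100 MHz modulation, in [1/ps]
      L = 10.0                      # slab thickness [mm]

      def transmit_one():
          x, mu, w, t = 0.0, 1.0, 1.0, 0.0   # position, direction, weight, time
          while True:
              s = rng.exponential(1.0 / MU_S)     # distance to next scattering
              if mu > 0 and x + s >= L:           # escapes through the far face
                  t += (L - x) / C
                  w *= np.exp(-MU_A * (L - x))
                  return w * np.exp(-1j * OMEGA * t)   # complex tally
              if mu < 0 and x - s <= 0.0:
                  return 0.0                      # back-scattered out, no tally
              x += mu * s
              t += s / C
              w *= np.exp(-MU_A * s)              # absorption weighting
              mu = 1.0 if rng.random() < 0.5 else -1.0   # isotropic in 1-D
              if w < 1e-6:
                  return 0.0                      # crude cutoff (no roulette)

      tally = sum(transmit_one() for _ in range(20_000)) / 20_000
      print("amplitude:", abs(tally), " phase [rad]:", np.angle(tally))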

  6. An unbiased Hessian representation for Monte Carlo PDFs

    NASA Astrophysics Data System (ADS)

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, José Ignacio; Rojo, Juan

    2015-08-01

    We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, when applied to a Hessian PDF set (MMHT14) which was transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather small set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available, together with (through LHAPDF6) a Hessian representation of the NNPDF3.0 set and the MC-H PDF set.

  7. Integrated Means Integrity

    ERIC Educational Resources Information Center

    Odegard, John D.

    1978-01-01

    Describes the operation of the Cessna Pilot Center (CPC) flight training systems. The program is based on a series of integrated activities involving stimulus, response, reinforcement and association components. Results show that the program can significantly reduce in-flight training time. (CP)

  8. Development of a method for calibrating in vivo measurement systems using magnetic resonance imaging and Monte Carlo computations

    SciTech Connect

    Mallett, M.W.; Poston, J.W.; Hickman, D.P.

    1995-06-01

    Research efforts towards developing a new method for calibrating in vivo measurement systems using magnetic resonance imaging (MRI) and Monte Carlo computations are discussed. The method employs the enhanced three-point Dixon technique for producing pure fat and pure water MR images of the human body. The MR images are used to define the geometry and composition of the scattering media for transport calculations using the general-purpose Monte Carlo code MCNP, Version 4. A sample case for developing the new method utilizing an adipose/muscle matrix is compared with laboratory measurements. Verification of the integrated MRI-MCNP method has been done for a specially designed phantom composed of fat, water, air, and a bone-substitute material. Implementation of the MRI-MCNP method is demonstrated for a low-energy, lung counting in vivo measurement system. Limitations and solutions regarding the presented method are discussed. 15 refs., 7 figs., 4 tabs.

  9. A Monte Carlo simulation based two-stage adaptive resonance theory mapping approach for offshore oil spill vulnerability index classification.

    PubMed

    Li, Pu; Chen, Bing; Li, Zelin; Zheng, Xiao; Wu, Hongjing; Jing, Liang; Lee, Kenneth

    2014-09-15

    In this paper, a Monte Carlo simulation based two-stage adaptive resonance theory mapping (MC-TSAM) model was developed to classify a given site into distinguished zones representing different levels of offshore Oil Spill Vulnerability Index (OSVI). It consisted of an adaptive resonance theory (ART) module, an ART Mapping module, and a centroid determination module. Monte Carlo simulation was integrated with the TSAM approach to address uncertainties that widely exist in site conditions. The applicability of the proposed model was validated by classifying a large coastal area, which was surrounded by potential oil spill sources, based on 12 features. Statistical analysis of the results indicated that the classification process was affected by multiple features instead of one single feature. The classification results also provided the least or desired number of zones which can sufficiently represent the levels of offshore OSVI in an area under uncertainty and complexity, saving time and budget in spill monitoring and response. PMID:25044043

  10. Program system for three-dimensional coupled Monte Carlo-deterministic shielding analysis with application to the accelerator-based IFMIF neutron source

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Fischer, U.

    2005-10-01

    A program system for three-dimensional coupled Monte Carlo-deterministic shielding analysis has been developed to solve problems with complex geometry and bulk shields by integrating the Monte Carlo transport code MCNP, the three-dimensional discrete ordinates code TORT, and a coupling interface program. A newly proposed mapping approach is implemented in the interface program to calculate the angular flux distribution from the scored Monte Carlo particle tracks and generate the boundary source file for use by TORT. Test calculations were performed with comparison to MCNP solutions. Satisfactory agreement was obtained between the results calculated by these two approaches. The program system has been chosen to treat the complicated shielding problem of the accelerator-based IFMIF neutron source. The successful application demonstrates that the coupling scheme implemented in the program system is a useful computational tool for the shielding analysis of complex and large nuclear facilities.

  11. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  12. Efficiency of Monte Carlo sampling in chaotic systems.

    PubMed

    Leitão, Jorge C; Lopes, J M Viana Parente; Altmann, Eduardo G

    2014-11-01

    In this paper we investigate how the complexity of chaotic phase spaces affects the efficiency of importance sampling Monte Carlo simulations. We focus on flat-histogram simulations of the distribution of the finite-time Lyapunov exponent in a simple chaotic system and obtain analytically that the computational effort (i) scales polynomially with the finite time, a tremendous improvement over the exponential scaling obtained in uniform sampling simulations, but (ii) the polynomial scaling is suboptimal, a phenomenon known as critical slowing down. We show that critical slowing down appears because of the limited possibilities to issue a local proposal in the Monte Carlo procedure when it is applied to chaotic systems. These results show how generic properties of chaotic systems limit the efficiency of Monte Carlo simulations.
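
    A flat-histogram simulation of the kind used here can be illustrated with the Wang-Landau recipe on a toy system whose density of states is known exactly (the number of heads among N coins, with density C(N, E)); the paper's application replaces the coin energy with the finite-time Lyapunov exponent. A minimal Python sketch, with the usual histogram-flatness check omitted for brevity:

      # Wang-Landau flat-histogram sampling of a toy density of states.
      import math, random

      N = 20
      state = [0] * N                  # coin string; E = number of heads
      ln_g = [0.0] * (N + 1)           # running estimate of ln(density of states)
      hist = [0] * (N + 1)
      ln_f = 1.0                       # modification factor

      E = sum(state)
      while ln_f > 1e-4:
          for _ in range(20_000):
              i = random.randrange(N)               # propose flipping one coin
              E_new = E + (1 - 2 * state[i])
              # accept with min(1, g(E)/g(E_new)): flattens the energy histogram
              if random.random() < math.exp(ln_g[E] - ln_g[E_new]):
                  state[i] ^= 1
                  E = E_new
              ln_g[E] += ln_f
              hist[E] += 1
          ln_f /= 2.0                  # tighten the refinement each stage

      # Deviations of the (shifted) estimate from the exact ln C(N, E):
      shift = ln_g[0] - math.log(math.comb(N, 0))
      print([round(ln_g[e] - shift - math.log(math.comb(N, e)), 2)
             for e in range(N + 1)])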

  13. Monte Carlo simulation in statistical physics: an introduction

    NASA Astrophysics Data System (ADS)

    Binder, K.; Heermann, D. W.

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics, chemistry and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. This fourth edition has been updated and a new chapter on Monte Carlo simulation of quantum-mechanical problems has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was the winner of the Berni J. Alder CECAM Award for Computational Physics 2001.

  14. Domain decomposition methods for a parallel Monte Carlo transport code

    SciTech Connect

    Alme, H J; Rodrigue, G H; Zimmerman, G B

    1999-01-27

    Achieving parallelism in simulations that use Monte Carlo transport methods presents interesting challenges. For problems that require domain decomposition, load balance can be harder to achieve. The Monte Carlo transport package may have to operate with other packages that have different optimal domain decompositions for a given problem. To examine some of these issues, we have developed a code that simulates the interaction of a laser with biological tissue; it uses a Monte Carlo method to simulate the laser and a finite element model to simulate the conduction of the temperature field in the tissue. We will present speedup and load balance results obtained for a suite of problems decomposed using a few domain decomposition algorithms we have developed.

  15. Monte Carlo tests of the ELIPGRID-PC algorithm

    SciTech Connect

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM{reg_sign} PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within {plus_minus}0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
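
    An independent toy version of the hot-spot calculation being validated here (this is not the ELIPGRID algorithm itself): drop a circular hot spot of radius r uniformly at random on a square sampling grid of spacing d and count how often at least one grid node lands inside it. For r ≤ d/2 the answer is known analytically (πr²/d²), which makes the sketch self-checking:

      # Monte Carlo estimate of the probability that a square grid of spacing
      # d detects a randomly located circular hot spot of radius r.
      import math, random

      def detect_prob(r, d, n_trials=200_000):
          hits = 0
          for _ in range(n_trials):
              # By symmetry only the offset within one grid cell matters.
              cx, cy = random.uniform(0, d), random.uniform(0, d)
              # The nearest grid node is one of the four cell corners.
              nearest = min(math.hypot(cx - gx, cy - gy)
                            for gx in (0.0, d) for gy in (0.0, d))
              hits += nearest <= r
          return hits / n_trials

      # For r <= d/2 the quarter-disks at the corners do not overlap, so the
      # exact detection probability is pi * r**2 / d**2.
      print(detect_prob(r=0.4, d=1.0), "vs analytic", math.pi * 0.4**2)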

  16. Phase coexistence in heterogeneous porous media: a new extension to Gibbs ensemble Monte Carlo simulation method.

    PubMed

    Puibasset, Joël

    2005-04-01

    The effect of confinement on the phase behavior of simple fluids is still an area of intensive research. In between experiment and theory, molecular simulation is a powerful tool to study the effect of confinement in realistic porous materials containing some disorder. Previous simulation works aiming at establishing the phase diagram of a confined Lennard-Jones-type fluid concentrated on simple pore geometries (slits or cylinders). The development of the Gibbs ensemble Monte Carlo technique by Panagiotopoulos [Mol. Phys. 61, 813 (1987)] greatly favored the study of such simple geometries for two reasons. First, the technique is very efficient for calculating the phase diagram, since each run (at a given temperature) converges directly to an equilibrium between a gaslike and a liquidlike phase. Second, due to the volume exchange procedure between the two phases, at least one invariant direction of space is required for applicability of this method, which is the case for slits or cylinders. Generally, the introduction of some disorder in such simple pores breaks the initial invariance in one of the space directions and prevents working in the Gibbs ensemble. The simulation techniques for such disordered systems are numerous (grand canonical Monte Carlo, molecular dynamics, histogram reweighting, N-P-T+test method, Gibbs-Duhem integration procedure, etc.). However, the Gibbs ensemble technique, which gives directly the coexistence between phases, was never generalized to such systems. In this work, we focus on two weakly disordered pores for which a modified Gibbs ensemble Monte Carlo technique can be applied. One of the pores is geometrically undulated, whereas the second is cylindrical but presents a chemical variation which gives rise to a modulation of the wall potential. In the first case almost no change in the phase diagram is observed, whereas in the second strong modifications are reported. PMID:15847492

  17. A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision

    NASA Technical Reports Server (NTRS)

    Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.

    1998-01-01

    We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty, assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (sigma) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the calculated 46% 1-sigma model uncertainty, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1 sigma but less than 2 sigma, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation

  18. An Extension of Implicit Monte Carlo Diffusion: Multigroup and The Difference Formulation

    SciTech Connect

    Cleveland, M A; Gentile, N; Palmer, T S

    2010-04-19

    Implicit Monte Carlo (IMC) and Implicit Monte Carlo Diffusion (IMD) are approaches to the numerical solution of the equations of radiative transfer. IMD was previously derived and numerically tested on grey, or frequency-integrated problems. In this research, we extend Implicit Monte Carlo Diffusion (IMD) to account for frequency dependence, and we implement the difference formulation as a source manipulation variance reduction technique. We derive the relevant probability distributions and present the frequency dependent IMD algorithm, with and without the difference formulation. The IMD code with and without the difference formulation was tested using both grey and frequency dependent benchmark problems. The Su and Olson semi-analytic Marshak wave benchmark was used to demonstrate the validity of the code for grey problems. The Su and Olson semi-analytic picket fence benchmark was used for the frequency dependent problems. The frequency dependent IMD algorithm reproduces the results of both Su and Olson benchmark problems. Frequency group refinement studies indicate that the computational cost of refining the group structure is likely less than that of group refinement in deterministic solutions of the radiation diffusion methods. Our results show that applying the difference formulation to the IMD algorithm can result in an overall increase in the figure of merit for frequency dependent problems. However, the creation of negatively weighted particles from the difference formulation can cause significant numerical instabilities in regions of the problem with sharp spatial gradients in the solution. An adaptive implementation of the difference formulation may be necessary to focus its use in regions that are at or near thermal equilibrium.

  19. Integrating Art.

    ERIC Educational Resources Information Center

    BCATA Journal for Art Teachers, 1991

    1991-01-01

    These articles focus on art as a component of interdisciplinary integration. (1) "Integrated Curriculum and the Visual Arts" (Anna Kindler) considers various aspects of integration and implications for art education. (2) "Integration: The New Literacy" (Tim Varro) illustrates how the use of technology can facilitate cross-curricular integration.…

  20. Macro Monte Carlo for dose calculation of proton beams

    NASA Astrophysics Data System (ADS)

    Fix, Michael K.; Frei, Daniel; Volken, Werner; Born, Ernst J.; Aebersold, Daniel M.; Manser, Peter

    2013-04-01

    Although the Monte Carlo (MC) method allows accurate dose calculation for proton radiotherapy, its usage is limited due to long computing time. In order to gain efficiency, a new macro MC (MMC) technique for proton dose calculations has been developed. The basic principle of the MMC transport is a local to global MC approach. The local simulations using GEANT4 consist of mono-energetic proton pencil beams impinging perpendicularly on slabs of different thicknesses and different materials (water, air, lung, adipose, muscle, spongiosa, cortical bone). During the local simulation multiple scattering, ionization as well as elastic and inelastic interactions have been taken into account and the physical characteristics such as lateral displacement, direction distributions and energy loss have been scored for primary and secondary particles. The scored data from appropriate slabs is then used for the stepwise transport of the protons in the MMC simulation while calculating the energy loss along the path between entrance and exit position. Additionally, based on local simulations the radiation transport of neutrons and the generated ions are included into the MMC simulations for the dose calculations. In order to validate the MMC transport, calculated dose distributions using the MMC transport and GEANT4 have been compared for different mono-energetic proton pencil beams impinging on different phantoms including homogeneous and inhomogeneous situations as well as on a patient CT scan. The agreement of calculated integral depth dose curves is better than 1% or 1 mm for all pencil beams and phantoms considered. For the dose profiles the agreement is within 1% or 1 mm in all phantoms for all energies and depths. The comparison of the dose distribution calculated using either GEANT4 or MMC in the patient also shows an agreement of within 1% or 1 mm. The efficiency of MMC is up to 200 times higher than for GEANT4. The very good level of agreement in the dose comparisons

  1. PEPSI — a Monte Carlo generator for polarized leptoproduction

    NASA Astrophysics Data System (ADS)

    Mankiewicz, L.; Schäfer, A.; Veltri, M.

    1992-09-01

    We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section at first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.

  2. Bold Diagrammatic Monte Carlo for Fermionic and Fermionized Systems

    NASA Astrophysics Data System (ADS)

    Svistunov, Boris

    2013-03-01

    In three different fermionic cases--the repulsive Hubbard model, resonant fermions, and fermionized spins 1/2 (on a triangular lattice)--we observe the phenomenon of sign blessing: the Feynman diagrammatic series features a finite convergence radius despite the factorial growth of the number of diagrams with diagram order. The bold diagrammatic Monte Carlo technique allows us to sample millions of skeleton Feynman diagrams. With the universal fermionization trick we can fermionize essentially any (bosonic, spin, mixed, etc.) lattice system. The combination of fermionization and bold diagrammatic Monte Carlo yields a universal first-principles approach to strongly correlated lattice systems, provided the sign blessing is a generic fermionic phenomenon. Supported by NSF and DARPA.

  3. Monte Carlo simulations of phosphate polyhedron connectivity in glasses

    SciTech Connect

    ALAM,TODD M.

    2000-01-01

    Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest neighbor connectivities between phosphate polyhedron for random, alternating and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments of phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.

  4. Monte Carlo Simulations of Phosphate Polyhedron Connectivity in Glasses

    SciTech Connect

    ALAM,TODD M.

    1999-12-21

    Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest neighbor connectivities between phosphate polyhedron for random, alternating and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments of phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.

  5. Mesh Optimization for Monte Carlo-Based Optical Tomography

    PubMed Central

    Edmans, Andrew; Intes, Xavier

    2015-01-01

    Mesh-based Monte Carlo techniques for optical imaging allow for accurate modeling of light propagation in complex biological tissues. Recently, they have been developed within an efficient computational framework to be used as a forward model in optical tomography. However, commonly employed adaptive mesh discretization techniques have not yet been implemented for Monte Carlo based tomography. Herein, we propose a methodology to optimize the mesh discretization and analytically rescale the associated Jacobian based on the characteristics of the forward model. We demonstrate that this method maintains the accuracy of the forward model even in the case of temporal data sets while allowing for significant coarsening or refinement of the mesh. PMID:26566523

  6. Collective translational and rotational Monte Carlo moves for attractive particles

    NASA Astrophysics Data System (ADS)

    Růžička, Štěpán; Allen, Michael P.

    2014-03-01

    Virtual move Monte Carlo is a Monte Carlo (MC) cluster algorithm forming clusters via local energy gradients and approximating the collective kinetic or dynamic motion of attractive colloidal particles. We carefully describe, analyze, and test the algorithm. To formally validate the algorithm through highlighting its symmetries, we present alternative and compact ways of selecting and accepting clusters which illustrate the formal use of abstract concepts in the design of biased MC techniques: the superdetailed balance and the early rejection scheme. A brief and comprehensive summary of the algorithms is presented, which makes them accessible without needing to understand the details of the derivation.

  7. Optix: A Monte Carlo scintillation light transport code

    NASA Astrophysics Data System (ADS)

    Safari, M. J.; Afarideh, H.; Ghal-Eh, N.; Davani, F. Abbasi

    2014-02-01

    The paper reports on the capabilities of the Monte Carlo scintillation light transport code Optix, which is an extended version of the previously introduced code Optics. Optix provides the user with a variety of both numerical and graphical outputs with a very simple and user-friendly input structure. A benchmarking strategy has been adopted, based on comparison with experimental results, semi-analytical solutions, and other Monte Carlo simulation codes, to verify various aspects of the developed code. In addition, some extensive comparisons have been made against the tracking abilities of the general-purpose MCNPX and FLUKA codes. The presented benchmark results for the Optix code exhibit promising agreement.

  8. Monte Carlo Form-Finding Method for Tensegrity Structures

    NASA Astrophysics Data System (ADS)

    Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping

    2010-05-01

    In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.

  9. Monte Carlo simulation of electrons in dense gases

    NASA Astrophysics Data System (ADS)

    Tattersall, Wade; Boyle, Greg; Cocks, Daniel; Buckman, Stephen; White, Ron

    2014-10-01

    We implement a Monte-Carlo simulation modelling the transport of electrons and positrons in dense gases and liquids, by using a dynamic structure factor that allows us to construct structure-modified effective cross sections. These account for the coherent effects caused by interactions with the relatively dense medium. The dynamic structure factor also allows us to model thermal gases in the same manner, without needing to directly sample the velocities of the neutral particles. We present the results of a series of Monte Carlo simulations that verify and apply this new technique, and make comparisons with macroscopic predictions and Boltzmann equation solutions. Financial support of the Australian Research Council.

  10. New Monte Carlo simulations of the LLNL pulsed-sphere experiments

    SciTech Connect

    Marchetti, A.A., LLNL

    1998-07-01

    From the late 1960s to about 1985, the Pulsed-Sphere Program at Lawrence Livermore National Laboratory (LLNL) was carried out to measure 14-MeV neutron leakage spectra from target spheres made out of various elements, compounds, and mixtures. Data from these experiments have been and continue to be fundamental in the evaluation of neutron Monte Carlo transport codes and cross section data libraries. In addition, the data provide important integral information for stockpile stewardship, fusion technology, neutron therapy, and other applications. Therefore, comparisons between computer Monte Carlo simulations and the results of these experiments are pivotal for the integral testing of processed nuclear data libraries and transport codes. Fortunately, a large subset of data from the pulsed-sphere program (some 70 experiments) is available as a computer file called disp93in. Furthermore, in the past few years, there has been a remarkable improvement in computer performance that allows for more realistic simulations by Monte Carlo codes such as TART 4. Previous TART simulations of the pulsed-sphere experiments were performed using simplified models with relatively small numbers of histories and very large solid angle detectors to offset the limitations in computer power. Also, not all the TART input files were created with the same level of detail. For example, some input files included the air around the sphere while others did not. These factors prompted a study to simulate in more detail all of the available pulsed-sphere experiments using the Monte Carlo transport code, TART, and the LLNL evaluated neutron data library, ENDL. The timing of this study is significant because many years have passed since those experiments were done, and only a few people who participated in them are still working at LLNL. Their help has been essential for an accurate documentation of the experiments. For the Stewardship Program it is important to preserve and make use of as much of the data as

  11. Analysis of Correlated Coupling of Monte Carlo Forward and Adjoint Histories

    SciTech Connect

    Ueki, Taro; Hoogenboom, J.E.; Kloosterman, J. L.

    2001-02-15

    In Monte Carlo correlated coupling, forward and adjoint particle histories are initiated in exactly opposite directions at an arbitrarily placed surface between a physical source and a physical detector. It is shown that this coupling calculation can become more efficient than standard forward calculations. In many cases, the basic form of correlated coupling is less efficient than standard forward calculations. This inherent inefficiency can be overcome by applying a black absorber perturbation to either the forward or the adjoint problem and by processing the product of batch averages as one statistical entity. The usage of the black absorber is based on the invariance of the response flow integral with a material perturbation in either the physical detector side volume in the forward problem or the physical source side volume in the adjoint problem. The batch-average product processing makes use of a quadratic increase of the nonzero coupled-score probability. All the developments have been done in such a way that improved efficiency schemes available in widely distributed Monte Carlo codes can be applied to both the forward and adjoint simulations. Also, the physical meaning of the black absorber perturbation is interpreted based on surface crossing and is numerically validated. In addition, the immediate reflection at the intermediate surface with a controlled direction change is investigated within the invariance framework. This approach can be advantageous for a void streaming problem.

  12. Monte Carlo Mean Field Treatment of Microbunching Instability in the FERMI@Elettra First Bunch Compressor

    SciTech Connect

    Bassi, G.; Ellison, J.A.; Heinemann, K.; Warnock, R.; /SLAC

    2009-05-07

    Bunch compressors, designed to increase the peak current, can lead to a microbunching instability with detrimental effects on the beam quality. This is a major concern for free electron lasers (FELs) where very bright electron beams are required, i.e. beams with low emittance and energy spread. In this paper, we apply our self-consistent, parallel solver to study the microbunching instability in the first bunch compressor system of FERMI{at}Elettra. Our basic model is a 2D Vlasov-Maxwell system. We treat the beam evolution through a bunch compressor using our Monte Carlo mean field approximation. We randomly generate N points from an initial phase space density. We then calculate the charge density using a smooth density estimation procedure, from statistics, based on Fourier series. The electric and magnetic fields are calculated from the smooth charge/current density using a novel field formula that avoids singularities by using the retarded time as a variable of integration. The points are then moved forward in small time steps using the beam frame equations of motion, with the fields frozen during a time step, and a new charge density is determined using our density estimation procedure. We try to choose N large enough so that the charge density is a good approximation to the density that would be obtained from solving the 2D Vlasov-Maxwell system exactly. We call this method the Monte Carlo Particle (MCP) method.
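
    The density-estimation step described here can be sketched in one dimension: empirical Fourier coefficients of the sampled points define a smooth, truncated-series density. The bunch profile and mode count below are invented; the paper's 2-D Vlasov-Maxwell treatment is far more elaborate:

      # Smooth density estimation from Monte Carlo samples via a truncated
      # Fourier series on [0, 2*pi): rho(x) = (1 + 2*sum_k Re[c_k e^{ikx}])/(2*pi)
      # with empirical coefficients c_k = mean(e^{-ik x_j}).
      import numpy as np

      rng = np.random.default_rng(4)

      N, K = 50_000, 16                 # samples, retained Fourier modes
      x = rng.normal(np.pi, 0.5, size=N) % (2 * np.pi)   # toy bunch profile

      k = np.arange(1, K + 1)
      c = np.exp(-1j * np.outer(k, x)).mean(axis=1)      # empirical coefficients

      grid = np.linspace(0, 2 * np.pi, 400)
      rho = (1.0 + 2.0 * np.real(c[:, None] * np.exp(1j * np.outer(k, grid)))
             .sum(axis=0)) / (2 * np.pi)
      print("smooth density peaks near pi:", grid[np.argmax(rho)])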

  13. GORRAM: Introducing accurate operational-speed radiative transfer Monte Carlo solvers

    NASA Astrophysics Data System (ADS)

    Buras-Schnell, Robert; Schnell, Franziska; Buras, Allan

    2016-06-01

    We present a new approach for solving the radiative transfer equation in horizontally homogeneous atmospheres. The motivation was to develop a fast yet accurate radiative transfer solver to be used in operational retrieval algorithms for next-generation meteorological satellites. The core component is the program GORRAM (Generator Of Really Rapid Accurate Monte-Carlo), which generates solvers individually optimized for the intended task. These solvers consist of a Monte Carlo model capable of path recycling and a representative set of photon paths. The latter is generated using the simulated annealing technique. GORRAM automatically takes advantage of limitations on the variability of the atmosphere. Due to this optimization the number of photon paths necessary for accurate results can be reduced by several orders of magnitude. For the shown example of a forward model intended for an aerosol satellite retrieval, comparison with an exact yet slow solver shows that a precision of better than 1% can be achieved with only 36 photons. The computational time is at least an order of magnitude faster than that of any other type of radiative transfer solver. Only the lookup-table approach often used in satellite retrieval is faster, but it suffers from limited accuracy. This makes GORRAM-generated solvers an eligible candidate as a forward model in operational-speed retrieval algorithms and data assimilation applications. GORRAM also has the potential to create fast solvers for other integrable equations.
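
    The simulated-annealing selection of representative paths can be sketched generically: choose m of M candidate paths so that the subset average reproduces the full-ensemble average over several atmospheric states. The "responses" below are random placeholders and the objective is invented; GORRAM's actual path model and cost function are more elaborate:

      # Generic simulated annealing for representative-subset selection.
      import numpy as np

      rng = np.random.default_rng(5)
      M, m, n_states = 2000, 36, 8
      responses = rng.lognormal(size=(M, n_states))   # per-path responses (toy)
      target = responses.mean(axis=0)                 # full-ensemble reference

      def cost(idx):
          # Worst-case error of the subset mean over all atmospheric states.
          return np.abs(responses[idx].mean(axis=0) - target).max()

      idx = rng.choice(M, size=m, replace=False)
      cur_cost = cost(idx)
      best, best_cost, T = idx.copy(), cur_cost, 1.0
      for step in range(20_000):
          T *= 0.9995                                 # geometric cooling schedule
          new = idx.copy()
          new[rng.integers(m)] = rng.integers(M)      # swap in one random path
          if len(set(new.tolist())) < m:
              continue                                # keep paths distinct
          new_cost = cost(new)
          # Metropolis acceptance: always downhill, sometimes uphill.
          if new_cost < cur_cost or rng.random() < np.exp((cur_cost - new_cost) / T):
              idx, cur_cost = new, new_cost
              if cur_cost < best_cost:
                  best, best_cost = idx.copy(), cur_cost
      print("worst-state error of", m, "representative paths:", best_cost)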

  14. A Monte Carlo-based model of gold nanoparticle radiosensitization accounting for increased radiobiological effectiveness.

    PubMed

    Lechtman, E; Mashouf, S; Chattopadhyay, N; Keller, B M; Lai, P; Cai, Z; Reilly, R M; Pignol, J-P

    2013-05-21

    Radiosensitization using gold nanoparticles (AuNPs) has been shown to vary widely with cell line, irradiation energy, AuNP size, concentration and intracellular localization. We developed a Monte Carlo-based AuNP radiosensitization predictive model (ARP), which takes into account the detailed energy deposition at the nano-scale. This model was compared to experimental cell survival and macroscopic dose enhancement predictions. PC-3 prostate cancer cell survival was characterized after irradiation using a 300 kVp photon source with and without AuNPs present in the cell culture media. Detailed Monte Carlo simulations were conducted, producing individual tracks of photoelectric products escaping AuNPs and energy deposition was scored in nano-scale voxels in a model cell nucleus. Cell survival in our predictive model was calculated by integrating the radiation induced lethal event density over the nucleus volume. Experimental AuNP radiosensitization was observed with a sensitizer enhancement ratio (SER) of 1.21 ± 0.13. SERs estimated using the ARP model and the macroscopic enhancement model were 1.20 ± 0.12 and 1.07 ± 0.10 respectively. In the hypothetical case of AuNPs localized within the nucleus, the ARP model predicted a SER of 1.29 ± 0.13, demonstrating the influence of AuNP intracellular localization on radiosensitization.

  15. Kinetic Monte Carlo Simulation of Oxygen and Cation Diffusion in Yttria-Stabilized Zirconia

    NASA Technical Reports Server (NTRS)

    Good, Brian

    2011-01-01

    Yttria-stabilized zirconia (YSZ) is of interest to the aerospace community, notably for its application as a thermal barrier coating for turbine engine components. In such an application, diffusion of both oxygen ions and cations is of concern. Oxygen diffusion can lead to deterioration of a coated part, and often necessitates an environmental barrier coating. Cation diffusion in YSZ is much slower than oxygen diffusion. However, such diffusion is a mechanism by which creep takes place, potentially affecting the mechanical integrity and phase stability of the coating. In other applications, the high oxygen diffusivity of YSZ is useful, and makes the material of interest for use as a solid-state electrolyte in fuel cells. The kinetic Monte Carlo (kMC) method offers a number of advantages compared with the more widely known molecular dynamics simulation method. In particular, kMC is much more efficient for the study of processes, such as diffusion, that involve infrequent events. We describe the results of kinetic Monte Carlo computer simulations of oxygen and cation diffusion in YSZ. Using diffusive energy barriers from ab initio calculations and from the literature, we present results on the temperature dependence of oxygen and cation diffusivity, and on the dependence of the diffusivities on yttria concentration and oxygen sublattice vacancy concentration. We also present results of the effect on diffusivity of oxygen vacancies in the vicinity of the barrier cations that determine the oxygen diffusion energy barriers.
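
    The kinetic Monte Carlo loop itself is compact: pick an event with probability proportional to its rate and advance the clock by an exponentially distributed residence time. A minimal sketch for vacancy hopping on a 1-D lattice follows, with invented barriers and temperature rather than the ab initio YSZ values used here:

      # Residence-time (BKL) kinetic Monte Carlo for 1-D thermally activated
      # hopping with direction-dependent barriers.
      import math, random

      KB = 8.617e-5                    # Boltzmann constant [eV/K]
      T = 1200.0                       # temperature [K] (hypothetical)
      BARRIERS = {"left": 0.9, "right": 1.1}   # hop barriers [eV] (made up)
      NU0 = 1e13                       # attempt frequency [1/s]

      rates = {d: NU0 * math.exp(-E / (KB * T)) for d, E in BARRIERS.items()}

      x, t = 0, 0.0
      for _ in range(100_000):
          total = sum(rates.values())
          # Pick an event with probability proportional to its rate.
          r = random.uniform(0.0, total)
          if r < rates["left"]:
              x -= 1
          else:
              x += 1
          # Advance the clock by an exponential residence time.
          t += random.expovariate(total)
      print(f"displacement {x} sites in {t:.3e} s  ->  D ~ x^2/(2t)")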

  16. Monte Carlo simulation of molecular flow in a neutral beam injector and comparison with experiment

    SciTech Connect

    Lillie, R.A.; Alsmiller, R.G. Jr.; Gabriel, T.A.; Santoro, R.T.; Schwenterly, S.W.

    1982-04-01

    Monte Carlo calculations have been performed to obtain estimates of the background gas pressure and molecular number density as a function of position in the PDX-prototype neutral beam injector, which has undergone testing at the Oak Ridge National Laboratory. Estimates of these quantities together with the transient and steady-state energy deposition and molecular capture rates on the cryopanels of the cryocondensation pumps and the molecular escape rate from the injector were obtained utilizing a detailed geometric model of the neutral beam injector. The molecular flow calculations were performed using an existing Monte Carlo radiation transport code, which was modified slightly to monitor the energy of the background gas molecules. The credibility of these calculations is demonstrated by the excellent agreement between the calculated and experimentally measured background gas pressure in front of the beamline calorimeter located in the downstream drift region of the injector. The usefulness of the calculational method as a design tool is illustrated by a comparison of the integrated beamline molecular density over the drift region of the injector for three modes of cryopump operation.

  17. Monte Carlo simulation of molecular flow in a neutral-beam injector and comparison with experiment

    SciTech Connect

    Lillie, R.A.; Gabriel, T.A.; Schwenterly, S.W.; Alsmiller, R.G. Jr.; Santoro, R.T.

    1981-09-01

    Monte Carlo calculations have been performed to obtain estimates of the background gas pressure and molecular number density as a function of position in the PDX-prototype neutral beam injector which has undergone testing at the Oak Ridge National Laboratory. Estimates of these quantities together with the transient and steady-state energy deposition and molecular capture rates on the cryopanels of the cryocondensation pumps and the molecular escape rate from the injector were obtained utilizing a detailed geometric model of the neutral beam injector. The molecular flow calculations were performed using an existing Monte Carlo radiation transport code which was modified slightly to monitor the energy of the background gas molecules. The credibility of these calculations is demonstrated by the excellent agreement between the calculated and experimentally measured background gas pressure in front of the beamline calorimeter located in the downstream drift region of the injector. The usefulness of the calculational method as a design tool is illustrated by a comparison of the integrated beamline molecular density over the drift region of the injector for three modes of cryopump operation.
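
    The two records above use a modified Monte Carlo radiation transport code to follow gas molecules in free-molecular flow. As a self-contained illustration of the underlying test-particle technique (not the injector geometry), the sketch below estimates the classic Clausing transmission probability of a cylindrical duct, with diffuse (cosine-law) re-emission at each wall collision:

        import math, random

        def cosine_dir(rng):
            """Direction from a diffusely emitting surface (Lambert cosine law),
            in a local frame whose +z axis is the surface normal."""
            phi = 2.0 * math.pi * rng.random()
            st = math.sqrt(rng.random())            # sin(theta) = sqrt(u) gives the cosine law
            ct = math.sqrt(1.0 - st * st)
            return st * math.cos(phi), st * math.sin(phi), ct

        def transmission_probability(length, radius=1.0, n=100_000, seed=0):
            """Free-molecular transmission of a cylindrical duct (Clausing problem),
            estimated with test particles re-emitted diffusely at each wall hit."""
            rng = random.Random(seed)
            passed = 0
            for _ in range(n):
                r = radius * math.sqrt(rng.random())     # uniform over the entrance disk
                a = 2.0 * math.pi * rng.random()
                x, y, z = r * math.cos(a), r * math.sin(a), 0.0
                u, v, w = cosine_dir(rng)                # entering along +z
                while True:
                    # distance to the wall x^2 + y^2 = radius^2
                    qa = u * u + v * v
                    qb = 2.0 * (x * u + y * v)
                    qc = x * x + y * y - radius * radius
                    disc = qb * qb - 4.0 * qa * qc
                    t_wall = (-qb + math.sqrt(disc)) / (2.0 * qa) if (qa > 0 and disc > 0) else math.inf
                    t_end = (length - z) / w if w > 0 else (-z / w if w < 0 else math.inf)
                    if t_end <= t_wall:                  # leaves through an end plane
                        passed += w > 0                  # far end: transmitted
                        break
                    # move to the wall and re-emit about the inward normal
                    x, y, z = x + t_wall * u, y + t_wall * v, z + t_wall * w
                    nx, ny = -x / radius, -y / radius
                    aa, bb, cc = cosine_dir(rng)
                    # local frame: tangents (0,0,1) and (ny,-nx,0), normal (nx,ny,0)
                    u, v, w = bb * ny + cc * nx, -bb * nx + cc * ny, aa
            return passed / n

        print(transmission_probability(1.0))   # L/R = 1: Clausing's value is ~0.672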

  18. Oversight and Development of a Community Monte Carlo Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Under this grant we have developed a Monte Carlo radiative transfer code that will act as the nucleus for the I3RC Community Monte Carlo Model. All code is written in ANSI-compliant Fortran-95. Many modules define a public type and procedures to manipulate it, but do not allow access to the type's internal components. This allows each module to do its own exhaustive error checking up-front, then proceed in a streamlined way. Many modules can read and write the state of their objects to persistent files. The code has been tested on a Macintosh running OS 10.2.4 and the Absoft Fortran compiler, and on Sun UltraSparcs running Solaris 5.8 and Forte V8 compilers. The code exposed bugs in the Intel Fortran Compiler (ifc) on the I3RC Linux host, and we are waiting for a resolution of these bugs before finishing the port. The code base is under CVS version control. It consists of the core code (nine modules providing the infrastructure), example integrators, and a suite of utilities and examples.

  19. Wavelet-Monte Carlo Hybrid System for HLW Nuclide Migration Modeling and Sensitivity and Uncertainty Analysis

    SciTech Connect

    Nasif, Hesham; Neyama, Atsushi

    2003-02-26

    This paper presents results of an uncertainty and sensitivity analysis for the performance of the different barriers of high-level radioactive waste repositories. SUA is a tool that performs uncertainty and sensitivity analysis on the output of the Wavelet Integrated Repository System model (WIRS), which is developed to solve a system of nonlinear partial differential equations arising from the model formulation of radionuclide transport through a repository. SUA performs sensitivity analysis (SA) and uncertainty analysis (UA) on a sample output from Monte Carlo simulation. The sample is generated by WIRS and contains the values of the maximum release rate in the form of time series, together with the values of the input variables for a set of different simulations (runs), which are realized by varying the model input parameters. The Monte Carlo sample is generated with SUA as a pure random sample or using the Latin hypercube sampling technique. Tchebycheff and Kolmogorov confidence bounds are computed on the maximum release rate for UA, and effective non-parametric statistics are used to rank the influence of the model input parameters for SA. Based on the results, we point out parameters that have primary influences on the performance of the engineered barrier system of a repository. The parameters found to be key contributors to the release rate are the selenium and cesium distribution coefficients in both the geosphere and the major water conducting fault (MWCF), the diffusion depth and the water flow rate in the excavation-disturbed zone (EDZ).

  20. Distributed processor Monte Carlo: MCNP results on a 16-node IBM cluster

    SciTech Connect

    McKinney, G.W.

    1993-01-01

    The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. Although there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's Law: S(f, P) = 1/(1 - f + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's Law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.
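
    The quoted Amdahl's-law bound is easy to evaluate numerically; a quick illustration:

        def amdahl_speedup(f, p):
            """Theoretical speedup S(f, P) = 1 / (1 - f + f/P) from the abstract."""
            return 1.0 / (1.0 - f + f / p)

        # Even with 95% of the task parallelized, speedup saturates well below P:
        for p in (4, 16, 64):
            print(p, round(amdahl_speedup(0.95, p), 2))   # 3.48, 9.14, 15.42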

  1. Distributed processor Monte Carlo: MCNP results on a 16-node IBM cluster

    SciTech Connect

    McKinney, G.W.

    1993-05-01

    The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. Although there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's Law: S(f, P) = 1/(1 - f + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's Law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.

  2. Lattice gauge theories and Monte Carlo algorithms

    SciTech Connect

    Creutz, M.

    1988-10-01

    Lattice gauge theory has become the primary tool for non-perturbative calculations in quantum field theory. These lectures review some of the foundations of this subject. The first lecture reviews the basic definition of the theory in terms of invariant integrals over group elements on lattice bonds. The lattice represents an ultraviolet cutoff, and renormalization group arguments show how the bare coupling must be varied to obtain the continuum limit. Expansions in the inverse of the coupling constant demonstrate quark confinement in the strong coupling limit. The second lecture turns to numerical simulation, which has become an important approach to calculating hadronic properties. Here I discuss the basic algorithms for obtaining appropriately weighted gauge field configurations. The third lecture turns to algorithms for treating fermionic fields, which still require considerably more computer time than needed for purely bosonic simulations. Some particularly promising recent approaches are based on global accept-reject steps and should display a rather favorable dependence of computer time on the system volume. 34 refs.

  3. Observations on variational and projector Monte Carlo methods.

    PubMed

    Umrigar, C J

    2015-10-28

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.

  4. Observations on variational and projector Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Umrigar, C. J.

    2015-10-01

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.

  5. Reagents for Electrophilic Amination: A Quantum Monte Carlo Study

    SciTech Connect

    Amador-Bedolla, Carlos; Salomon-Ferrer, Romelia; Lester Jr., William A.; Vazquez-Martinez, Jose A.; Aspuru-Guzik, Alan

    2006-11-01

    Electrophilic amination is an appealing synthetic strategy to construct carbon-nitrogen bonds. We explore the use of the quantum Monte Carlo method and a proposed variant of the electron-pair localization function--the electron-pair localization function density--as a measure of the nucleophilicity of nitrogen lone pairs as a possible screening procedure for electrophilic reagents.

  6. Error estimations and their biases in Monte Carlo eigenvalue calculations

    SciTech Connect

    Ueki, Taro; Mori, Takamasa; Nakagawa, Masayuki

    1997-01-01

    In the Monte Carlo eigenvalue calculation of neutron transport, the eigenvalue is calculated as the average of multiplication factors from cycles, which are called the cycle k_eff's. Biases in the estimators of the variance and intercycle covariances in Monte Carlo eigenvalue calculations are analyzed. The relations among the real and apparent values of variances and intercycle covariances are derived, where real refers to a true value that is calculated from independently repeated Monte Carlo runs and apparent refers to the expected value of estimates from a single Monte Carlo run. Next, iterative methods based on the foregoing relations are proposed to estimate the standard deviation of the eigenvalue. The methods work well for the cases in which the ratios of the real to apparent values of variances are between 1.4 and 3.1. Even in the case where the foregoing ratio is >5, >70% of the standard deviation estimates fall within 40% of the true value.
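
    The distinction drawn here between apparent and real variances arises because successive cycle k_eff's are positively correlated, so the naive single-run estimator is biased low. The sketch below shows one generic mitigation, batch means; it is illustrative only and is not the iterative method proposed by the authors.

        import numpy as np

        def keff_std_batch(cycle_keff, batch_size=20):
            """Std. dev. of the mean k_eff from correlated cycle values, via batch
            means: averaging within batches dilutes the intercycle correlation."""
            k = np.asarray(cycle_keff, dtype=float)
            n_batches = len(k) // batch_size
            batches = k[:n_batches * batch_size].reshape(n_batches, batch_size).mean(axis=1)
            return batches.std(ddof=1) / np.sqrt(n_batches)

        # The naive estimate, k.std(ddof=1) / sqrt(len(k)), is the "apparent"
        # value; with positive intercycle correlation it underestimates the error.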

  7. Monte Carlo study of TLD measurements in air cavities.

    PubMed

    Haraldsson, Pia; Knöös, Tommy; Nyström, Håkan; Engström, Per

    2003-09-21

    Thermoluminescent dosimeters (TLDs) are used for verification of the delivered dose during IMRT treatment of head and neck carcinomas. The TLDs are put into a plastic tube, which is placed in the nasal cavities through the treated volume. In this study, the dose distribution to a phantom having a cylindrical air cavity containing a tube was calculated by Monte Carlo methods and the results were compared with data from a treatment planning system (TPS) to evaluate the accuracy of the TLD measurements. The phantom was defined in the DOSXYZnrc Monte Carlo code and calculations were performed with 6 MV fields, with the TLD tube placed at different positions within the cylindrical air cavity. A similar phantom was defined in the pencil beam based TPS. Differences between the Monte Carlo and the TPS calculations of the absorbed dose to the TLD tube were found to be small for an open symmetrical field. For a half-beam field through the air cavity, there was a larger discrepancy. Furthermore, dose profiles through the cylindrical air cavity show, as expected, that the treatment planning system overestimates the absorbed dose in the air cavity. This study shows that when using an open symmetrical field, Monte Carlo calculations of absorbed doses to a TLD tube in a cylindrical air cavity give results comparable to a pencil beam based treatment planning system.

  8. Calibration and Monte Carlo modelling of neutron long counters

    NASA Astrophysics Data System (ADS)

    Tagziria, Hamid; Thomas, David J.

    2000-10-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivity of the Monte Carlo calculations for the efficiency of the De Pangher long counter to perturbations in density and cross-section of the polyethylene used in the construction has been investigated.

  9. Bayesian Monte Carlo Method for Nuclear Data Evaluation

    SciTech Connect

    Koning, A.J.

    2015-01-15

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment-based weight.

  10. Exact Dynamics via Poisson Process: a unifying Monte Carlo paradigm

    NASA Astrophysics Data System (ADS)

    Gubernatis, James

    2014-03-01

    A common computational task is solving a set of ordinary differential equations (o.d.e.'s). A little-known theorem says that the solution of any set of o.d.e.'s is given exactly by the expectation value, over a set of arbitrary Poisson processes, of a particular function of the elements of the matrix that defines the o.d.e.'s. The theorem thus provides a new starting point to develop real- and imaginary-time continuous-time solvers for quantum Monte Carlo algorithms, and several simple observations enable various quantum Monte Carlo techniques and variance reduction methods to transfer to a new context. I will state the theorem, note a transformation to a very simple computational scheme, and illustrate the use of some techniques from the directed-loop algorithm in the context of the wavefunction Monte Carlo method that is used to solve the Lindblad master equation for the dynamics of open quantum systems. I will end by noting that as the theorem does not depend on the o.d.e.'s coming from quantum mechanics, it also enables the transfer of continuous-time methods from quantum Monte Carlo to the simulation of various classical equations of motion heretofore only solved deterministically.

  11. Monte Carlo shipping cask calculations using an automated biasing procedure

    SciTech Connect

    Tang, J.S.; Hoffman, T.J.; Childs, R.L.; Parks, C.V.

    1983-01-01

    This paper describes an automated biasing procedure for Monte Carlo shipping cask calculations within the SCALE system - a modular code system for Standardized Computer Analysis for Licensing Evaluation. The SCALE system was conceived and funded by the US Nuclear Regulatory Commission to satisfy a strong need for performing standardized criticality, shielding, and heat transfer analyses of nuclear systems.

  12. A Variational Monte Carlo Approach to Atomic Structure

    ERIC Educational Resources Information Center

    Davis, Stephen L.

    2007-01-01

    The practicality and usefulness of variational Monte Carlo calculations of atomic structure are demonstrated. The approach is found to succeed in quantitatively illustrating electron shielding, effective nuclear charge, the l-dependence of the orbital energies, singlet-triplet energy splitting, and ionization energy trends in atomic structure theory.

  13. Monte Carlo method for magnetic impurities in metals

    NASA Technical Reports Server (NTRS)

    Hirsch, J. E.; Fye, R. M.

    1986-01-01

    The paper discusses a Monte Carlo algorithm to study properties of dilute magnetic alloys; the method can treat a small number of magnetic impurities interacting with the conduction electrons in a metal. Results for the susceptibility of a single Anderson impurity in the symmetric case show the expected universal behavior at low temperatures. Some results for two Anderson impurities are also discussed.

  14. Microbial contamination in poultry chillers estimated by Monte Carlo simulations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The risk of microbial contamination during poultry processing may be reduced by the operating characteristics of the chiller. The performance of air chillers and immersion chillers were compared in terms of pre-chill and post-chill contamination using Monte Carlo simulations. Three parameters were u...

  15. A Monte Carlo Approach for Adaptive Testing with Content Constraints

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

    2008-01-01

    This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…

  16. Diffuse photon density wave measurements and Monte Carlo simulations.

    PubMed

    Kuzmin, Vladimir L; Neidrauer, Michael T; Diaz, David; Zubkov, Leonid A

    2015-10-01

    Diffuse photon density wave (DPDW) methodology is widely used in a number of biomedical applications. Here, we present results of Monte Carlo simulations that employ an effective numerical procedure based upon a description of radiative transfer in terms of the Bethe–Salpeter equation. A multifrequency noncontact DPDW system was used to measure aqueous solutions of intralipid at a wide range of source–detector separation distances, at which the diffusion approximation of the radiative transfer equation is generally considered to be invalid. We find that the signal-to-noise ratio is larger for the considered algorithm in comparison with the conventional Monte Carlo approach. Experimental data are compared to the Monte Carlo simulations using several values of scattering anisotropy and to the diffusion approximation. Both the Monte Carlo simulations and diffusion approximation were in very good agreement with the experimental data for a wide range of source–detector separations. In addition, measurements with different wavelengths were performed to estimate the size and scattering anisotropy of scatterers.

  17. Observations on variational and projector Monte Carlo methods

    SciTech Connect

    Umrigar, C. J.

    2015-10-28

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.

  18. Monte Carlo Simulations of Light Propagation in Apples

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports on the investigation of light propagation in fresh apples in the visible and short-wave near-infrared region using Monte Carlo simulations. Optical properties of ‘Golden Delicious’ apples were determined over the spectral range of 500-1100 nm using a hyperspectral imaging method, ...

  19. Exploring Mass Perception with Markov Chain Monte Carlo

    ERIC Educational Resources Information Center

    Cohen, Andrew L.; Ross, Michael G.

    2009-01-01

    Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…

  20. 75 FR 53332 - San Carlos Irrigation Project, Arizona

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-31

    ... additional avenues for water conservation. The proposed action includes the reconstruction and lining of... SCIDD and the Central Arizona Water Conservation District to allow delivery of CAP water to... of San Carlos Irrigation Project (SCIP) water delivery facilities near the communities of Casa...

  1. Monte Carlo Capabilities of the SCALE Code System

    NASA Astrophysics Data System (ADS)

    Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.

    2014-06-01

    SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  2. Monte Carlo radiation transport: A revolution in science

    SciTech Connect

    Hendricks, J.

    1993-04-01

    When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science.

  3. Monte Carlo event generators for hadron-hadron collisions

    SciTech Connect

    Knowles, I.G.; Protopopescu, S.D.

    1993-06-01

    A brief review of Monte Carlo event generators for simulating hadron-hadron collisions is presented. Particular emphasis is placed on comparisons of the approaches used to describe physics elements and identifying their relative merits and weaknesses. This review summarizes a more detailed report.

  4. Monte Carlo capabilities of the SCALE code system

    SciTech Connect

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; Bekar, Kursat B.; Wiarda, Dorothea; Celik, Cihangir; Perfetti, Christopher M.; Ibrahim, Ahmad M.; Hart, S. W. D.; Dunn, Michael E.; Marshall, William J.

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  5. Monte Carlo capabilities of the SCALE code system

    DOE PAGESBeta

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; Bekar, Kursat B.; Wiarda, Dorothea; Celik, Cihangir; Perfetti, Christopher M.; Ibrahim, Ahmad M.; Hart, S. W. D.; Dunn, Michael E.; et al

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  6. A Conversation with Native Flutist R. Carlos Nakai.

    ERIC Educational Resources Information Center

    Simonelli, Richard

    1992-01-01

    R. Carlos Nakai discusses his personal development as a musician, his interest in keeping the Native flute tradition alive in a modern way, his ethic of service, the purpose of higher education for Indian students, the relation of education to life, and the role of Indian people in the sciences. (SV)

  7. Parallel Monte Carlo simulation of multilattice thin film growth

    NASA Astrophysics Data System (ADS)

    Shu, J. W.; Lu, Qin; Wong, Wai-on; Huang, Han-chen

    2001-07-01

    This paper describes a new parallel algorithm for the multi-lattice Monte Carlo atomistic simulator for thin film deposition (ADEPT), implemented on a parallel computer using the PVM (Parallel Virtual Machine) message passing library. This parallel algorithm is based on domain decomposition with overlapping and asynchronous communication. Multiple lattices are represented by a single reference lattice through one-to-one mappings, with resulting computational demands being comparable to those in the single-lattice Monte Carlo model. Asynchronous communication and domain overlapping techniques are used to reduce the waiting time and communication time among parallel processors. Results show that the algorithm is highly efficient with a large number of processors. The algorithm was implemented on a parallel machine with 50 processors, and it is suitable for parallel Monte Carlo simulation of thin film growth on either a distributed-memory parallel computer or a shared-memory machine with message passing libraries. In this paper, the significant communication time in parallel MC simulation of thin film growth is effectively reduced by adopting domain decomposition with overlapping between sub-domains and asynchronous communication among processors. The overhead of communication does not increase appreciably, and speedup shows an ascending tendency as the number of processors increases. A near-linear increase in computing speed was achieved as the number of processors increased, and there is no theoretical limit on the number of processors to be used. The techniques developed in this work are also suitable for the implementation of the Monte Carlo code on other parallel systems.

  8. MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

    EPA Science Inventory

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...

  9. Bayesian methods, maximum entropy, and quantum Monte Carlo

    SciTech Connect

    Gubernatis, J.E.; Silver, R.N.; Jarrell, M.

    1991-01-01

    We heuristically discuss the application of the method of maximum entropy to the extraction of dynamical information from imaginary-time, quantum Monte Carlo data. The discussion emphasizes the utility of a Bayesian approach to statistical inference and the importance of statistically well-characterized data. 14 refs.

  10. Testing Dependent Correlations with Nonoverlapping Variables: A Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Silver, N. Clayton; Hittner, James B.; May, Kim

    2004-01-01

    The authors conducted a Monte Carlo simulation of 4 test statistics for comparing dependent correlations with no variables in common. Empirical Type I error rates and power estimates were determined for K. Pearson and L. N. G. Filon's (1898) z, O. J. Dunn and V. A. Clark's (1969) z, J. H. Steiger's (1980) original modification of Dunn and Clark's…

  11. A Monte Carlo solution of heat conduction and Poisson equations

    SciTech Connect

    Grigoriu, M.

    2000-02-01

    A Monte Carlo method is developed for solving the heat conduction, Poisson, and Laplace equations. The method is based on properties of Brownian motion and Ito processes, the Ito formula for differentiable functions of these processes, and the similarities between the generator of Ito processes and the differential operators of these equations. The proposed method is similar to current Monte Carlo solutions, such as the fixed random walk, exodus, and floating walk methods, in the sense that it is local, that is, it determines the solution directly at a single point or a small set of points of the domain of definition of the heat conduction equation. However, the proposed and the current Monte Carlo solutions are based on different theoretical considerations. The proposed Monte Carlo method has some attractive features. The method does not require discretizing the domain of definition of the differential equation, can be applied to domains of any dimension and geometry, works for both Dirichlet and Neumann boundary conditions, and provides simple solutions for the steady-state and transient heat equations. Several examples are presented to illustrate the application of the proposed method and demonstrate its accuracy.
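
    As a concrete example of the local character of such solutions, the following sketch estimates the solution of Laplace's equation at a single point of the unit square by the classical fixed-random-walk method (one of the existing methods the abstract mentions, not the authors' Ito-process construction); the boundary function is illustrative.

        import random

        def laplace_at_point(x0, y0, boundary, n_grid=50, n_walks=20_000):
            """Estimate u(x0, y0) for Laplace's equation on the unit square with
            Dirichlet data: u(P) = E[boundary value at the walk's exit point]."""
            h = 1.0 / n_grid
            i0, j0 = round(x0 / h), round(y0 / h)
            total = 0.0
            for _ in range(n_walks):
                i, j = i0, j0
                while 0 < i < n_grid and 0 < j < n_grid:
                    di, dj = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
                    i, j = i + di, j + dj
                total += boundary(i * h, j * h)
            return total / n_walks

        # Example: u = x*y is harmonic, so the estimate should approach x0*y0.
        print(laplace_at_point(0.5, 0.5, lambda x, y: x * y))   # ~0.25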

  12. Does standard Monte Carlo give justice to instantons?

    NASA Astrophysics Data System (ADS)

    Fucito, F.; Solomon, S.

    1984-01-01

    The results of the standard local Monte Carlo are changed by offering instantons as candidates in the Metropolis procedure. We also define an O(3) topological charge with no contribution from planar dislocations. The RG behavior is still not recovered.

  13. Using MathCad to Evaluate Exact Integral Formulations of Spacecraft Orbital Heats for Primitive Surfaces at Any Orientation

    NASA Technical Reports Server (NTRS)

    Pinckney, John

    2010-01-01

    With the advent of high-speed computing, Monte Carlo ray tracing techniques have become the preferred method for evaluating spacecraft orbital heats. Monte Carlo has its greatest advantage where there are many interacting surfaces. However, Monte Carlo programs are specialized programs that suffer from some inaccuracy, long calculation times and high purchase cost. A general orbital heating integral is presented here that is accurate, fast and runs on MathCad, a generally available engineering mathematics program. The integral is easy to read, understand and alter. The integral can be applied to unshaded primitive surfaces at any orientation. The method is limited to direct heating calculations. This integral formulation can be used for quick orbit evaluations and spot checking Monte Carlo results.

  14. Monte Carlo implementation of Schiff's approximation for estimating radiative properties of homogeneous, simple-shaped and optically soft particles: Application to photosynthetic micro-organisms

    NASA Astrophysics Data System (ADS)

    Charon, Julien; Blanco, Stéphane; Cornet, Jean-François; Dauchet, Jérémi; El Hafi, Mouna; Fournier, Richard; Abboud, Mira Kaissar; Weitz, Sebastian

    2016-03-01

    In the present paper, Schiff's approximation is applied to the study of light scattering by large and optically-soft axisymmetric particles, with special attention to cylindrical and spheroidal photosynthetic micro-organisms. This approximation is similar to the anomalous diffraction approximation but includes a description of phase functions. Resulting formulations for the radiative properties are multidimensional integrals, the numerical resolution of which requires close attention. It is here argued that strong benefits can be expected from a statistical resolution by the Monte Carlo method. But designing such efficient Monte Carlo algorithms requires the development of non-standard algorithmic tricks using careful mathematical analysis of the integral formulations: the codes that we develop (and make available) include an original treatment of the nonlinearity in the differential scattering cross-section (squared modulus of the scattering amplitude) thanks to a double sampling procedure. This approach makes it possible to take advantage of recent methodological advances in the field of Monte Carlo methods, illustrated here by the estimation of sensitivities to parameters. Comparison with reference solutions provided by the T-Matrix method is presented whenever possible. Required geometric calculations are closely similar to those used in standard Monte Carlo codes for geometric optics by the computer-graphics community, i.e. calculation of intersections between rays and surfaces, which opens interesting perspectives for the treatment of particles with complex shapes.
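
    The "double sampling" treatment mentioned for the squared modulus can be stated generically: since |E[f(X)]|^2 = E[f(X) conj(f(X'))] for independent X and X', an unbiased Monte Carlo estimate averages products over independent pairs, whereas squaring a single sample mean would be biased. A minimal sketch of the idea (not the authors' code, which they make available separately):

        import cmath, math, random

        def mc_squared_modulus(f, sampler, n=100_000, seed=0):
            """Unbiased estimate of |E[f(X)]|^2: average f(X)*conj(f(X')) over
            independent pairs (X, X')."""
            rng = random.Random(seed)
            total = 0.0
            for _ in range(n):
                x, x2 = sampler(rng), sampler(rng)      # two independent draws
                total += (f(x) * f(x2).conjugate()).real
            return total / n

        # Check: f(x) = exp(i*x) with X uniform on [0, pi] gives E[f] = 2i/pi,
        # so |E[f]|^2 = 4/pi^2 ≈ 0.405.
        print(mc_squared_modulus(lambda x: cmath.exp(1j * x),
                                 lambda rng: rng.uniform(0.0, math.pi)))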

  15. Fast Monte Carlo for radiation therapy: the PEREGRINE Project

    SciTech Connect

    Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.

    1997-11-11

    The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient-specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently used algorithms reveals significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.

  16. Accurate characterization of Monte Carlo calculated electron beams for radiotherapy.

    PubMed

    Ma, C M; Faddegon, B A; Rogers, D W; Mackie, T R

    1997-03-01

    Monte Carlo studies of dose distributions in patients treated with radiotherapy electron beams would benefit from generalized models of clinical beams if such models introduce little error into the dose calculations. Methodology is presented for the design of beam models, including their evaluation in terms of how well they preserve the character of the clinical beam, and the effect of the beam models on the accuracy of dose distributions calculated with Monte Carlo. This methodology has been used to design beam models for electron beams from two linear accelerators, with either a scanned beam or a scattered beam. Monte Carlo simulations of the accelerator heads are done in which a record is kept of the particle phase-space, including the charge, energy, direction, and position of every particle that emerges from the treatment head, along with a tag regarding the details of the particle history. The character of the simulated beams is studied in detail and used to design various beam models from a simple point source to a sophisticated multiple-source model which treats particles from different parts of a linear accelerator as from different sub-sources. Dose distributions calculated using both the phase-space data and the multiple-source model agree within 2%, demonstrating that the model is adequate for the purpose of Monte Carlo treatment planning for the beams studied. Benefits of the beam models over phase-space data for dose calculation are shown to include shorter computation time in the treatment head simulation and a smaller disk space requirement, both of which impact on the clinical utility of Monte Carlo treatment planning.

  17. Reconstruction of Human Monte Carlo Geometry from Segmented Images

    NASA Astrophysics Data System (ADS)

    Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican

    2014-06-01

    Human computational phantoms have been used extensively for scientific experimental analysis and experimental simulation. This article presents a method for reconstructing human geometry from a series of segmented images of a Chinese visible human dataset. The phantom geometry can describe the detailed structure of an organ and can be converted into input files for Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN, with about 28.8 billion voxels, has been established by the FDS Team. For convenient processing, different organs in the images were segmented with different RGB colors and the voxels were assigned positions within the dataset. For refinement, the positions were first sampled. Although the large numbers of voxels inside an organ are three-dimensionally adjacent, there was no thorough merging method to reduce the cell count needed to describe an organ. In this study, the voxels on the organ surface were taken into account in the merging, which produces fewer cells per organ. At the same time, an index-based sorting algorithm was put forward to speed up the merging. Finally, Rad-HUMAN, which includes a total of 46 organs and tissues, was described by cuboids in the Monte Carlo geometry for the simulation. The Monte Carlo geometry was constructed directly from the segmented images and the voxels were merged exhaustively. Each organ geometry model was constructed without ambiguity or self-crossing; its geometry information represents the accurate appearance and precise interior structure of the organs. The constructed geometry, largely retaining the original shape of the organs, can easily be written to the input files of different Monte Carlo codes such as MCNP. Its generality and high performance were experimentally verified.

  18. A self-consistent method for the generation of configuration interaction coefficients using variational Monte Carlo

    NASA Astrophysics Data System (ADS)

    Riley, Kevin E.; Anderson, James B.

    We have developed a new method for calculating configuration interaction coefficients for trial wavefunctions used in quantum Monte Carlo calculations of molecular structure. These numerical calculations can be carried out with optimized Jastrow functions included in the wavefunction. These calculations produce coefficients different from those obtained through methods using analytical integration without the Jastrow functions and lead to more accurate trial wavefunctions. We tested the method on the beryllium atom and found that the VMC energy obtained with improved coefficients (-14.6615 hartrees) was 0.9 millihartrees lower than the energy obtained using coefficients from analytical calculations (-14.6606 hartrees). This energy difference corresponds to about 1% of the correlation energy.
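
    For orientation, the sketch below is a bare-bones variational Monte Carlo calculation of the kind underlying such work: Metropolis sampling of |psi|^2 and averaging of the local energy, here for the hydrogen atom with trial function exp(-alpha*r) in atomic units. It illustrates only the VMC machinery, not the authors' scheme for optimizing CI coefficients.

        import math, random

        def vmc_hydrogen(alpha, n_steps=200_000, step=0.5, seed=0):
            """Bare-bones VMC for the hydrogen atom (atomic units) with trial
            psi = exp(-alpha*r); local energy E_L = -alpha^2/2 + (alpha - 1)/r."""
            rng = random.Random(seed)
            pos = [0.5, 0.5, 0.5]
            r = math.sqrt(sum(c * c for c in pos))
            e_sum = 0.0
            for _ in range(n_steps):
                trial = [c + step * (rng.random() - 0.5) for c in pos]
                r_new = math.sqrt(sum(c * c for c in trial))
                # Metropolis: accept with min(1, |psi_new/psi_old|^2)
                if rng.random() < math.exp(-2.0 * alpha * (r_new - r)):
                    pos, r = trial, r_new
                e_sum += -0.5 * alpha * alpha + (alpha - 1.0) / r
            return e_sum / n_steps

        print(vmc_hydrogen(1.0))   # exact trial function: -0.5 hartree, zero variance

    With alpha = 1 the trial function is exact, so the local energy is constant at -0.5 hartree and the variance vanishes; detuning alpha raises the average energy, which is what variational optimization exploits.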

  19. Spheroid Formation of Hepatocarcinoma Cells in Microwells: Experiments and Monte Carlo Simulations

    PubMed Central

    Tabaei, Seyed R.; Park, Jae Hyeok; Na, Kyuhwan; Chung, Seok; Zhdanov, Vladimir P.

    2016-01-01

    The formation of spherical aggregates during the growth of cell population has long been observed under various conditions. We observed the formation of such aggregates during proliferation of Huh-7.5 cells, a human hepatocarcinoma cell line, in a microfabricated low-adhesion microwell system (SpheroFilm; formed of mass-producible silicone elastomer) on the length scales up to 500 μm. The cell proliferation was also tracked with immunofluorescence staining of F-actin and cell proliferation marker Ki-67. Meanwhile, our complementary 3D Monte Carlo simulations, taking cell diffusion and division, cell-cell and cell-scaffold adhesion, and gravity into account, illustrate the role of these factors in the formation of spheroids. Taken together, our experimental and simulation results provide an integrative view of the process of spheroid formation for Huh-7.5 cells.

  20. A method for reducing the largest relative errors in Monte Carlo iterated-fission-source calculations

    SciTech Connect

    Hunter, J. L.; Sutton, T. M.

    2013-07-01

    In Monte Carlo iterated-fission-source calculations relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective.

  1. Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo

    DOE PAGESBeta

    Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.

    2014-10-01

    We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and that the high-pressure melt curve is well described by a Lindemann behavior up to at least 80 GPa, a finding in stark contrast to DAC data.

  2. Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoyao; Hall, Randall W.; Löffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana

    2016-01-01

    The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.

  3. Spheroid Formation of Hepatocarcinoma Cells in Microwells: Experiments and Monte Carlo Simulations.

    PubMed

    Wang, Yan; Kim, Myung Hee; Tabaei, Seyed R; Park, Jae Hyeok; Na, Kyuhwan; Chung, Seok; Zhdanov, Vladimir P; Cho, Nam-Joon

    2016-01-01

    The formation of spherical aggregates during the growth of cell population has long been observed under various conditions. We observed the formation of such aggregates during proliferation of Huh-7.5 cells, a human hepatocarcinoma cell line, in a microfabricated low-adhesion microwell system (SpheroFilm; formed of mass-producible silicone elastomer) on the length scales up to 500 μm. The cell proliferation was also tracked with immunofluorescence staining of F-actin and cell proliferation marker Ki-67. Meanwhile, our complementary 3D Monte Carlo simulations, taking cell diffusion and division, cell-cell and cell-scaffold adhesion, and gravity into account, illustrate the role of these factors in the formation of spheroids. Taken together, our experimental and simulation results provide an integrative view of the process of spheroid formation for Huh-7.5 cells.

  4. Quantum dynamics at finite temperature: Time-dependent quantum Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Christov, Ivan P.

    2016-08-01

    In this work we investigate the ground state and the dissipative quantum dynamics of interacting charged particles in an external potential at finite temperature. The recently devised time-dependent quantum Monte Carlo (TDQMC) method allows a self-consistent treatment of the system of particles together with bath oscillators: first in imaginary-time propagation of Schrödinger-type equations, where both the system and the bath converge to their finite-temperature ground state, and then in a real-time calculation, where the dissipative dynamics is demonstrated. In that context the application of TDQMC appears to be a promising alternative to path-integral related techniques, where real-time propagation can be a challenge.

  5. GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA

    NASA Astrophysics Data System (ADS)

    Spiechowicz, J.; Kostur, M.; Machura, L.

    2015-06-01

    This work presents an updated and extended guide to properly accelerating the Monte Carlo integration of stochastic differential equations with the commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of scientific computing on graphics cards and demonstrate them with two models of the well-known phenomenon of noise-induced transport of Brownian motors in periodic structures. As a source of fluctuations in the considered systems we selected the three most commonly occurring noises: Gaussian white noise, white Poissonian noise and the dichotomous process, also known as a random telegraph signal. A detailed discussion of various aspects of the applied numerical schemes is also presented. The measured speedup can be of the astonishing order of about 3000 when compared to a typical CPU. This number significantly expands the range of problems solvable by use of stochastic simulations, allowing even interactive research in some cases.
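
    The core numerical kernel being accelerated here is the stochastic integration itself. A minimal CPU-side sketch (NumPy rather than CUDA; the potential and parameters are illustrative, not the paper's models) of Euler-Maruyama integration for an overdamped Brownian particle in a tilted periodic potential driven by Gaussian white noise:

        import numpy as np

        def brownian_motor_drift(n_paths=10_000, n_steps=5_000, dt=1e-3,
                                 force=0.3, temperature=0.1, seed=0):
            """Euler-Maruyama for dx = (-V'(x) + F) dt + sqrt(2T) dW with the
            tilted periodic potential V(x) = -cos(x) (overdamped, scaled units).
            Returns the ensemble-averaged drift velocity."""
            rng = np.random.default_rng(seed)
            x = np.zeros(n_paths)
            sigma = np.sqrt(2.0 * temperature * dt)
            for _ in range(n_steps):
                x += (-np.sin(x) + force) * dt + sigma * rng.standard_normal(n_paths)
            return x.mean() / (n_steps * dt)

        print(brownian_motor_drift())

    Propagating all trajectories as one vectorized array mirrors the one-thread-per-trajectory parallelism exploited on the GPU.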

  6. Topics in structural dynamics: Nonlinear unsteady transonic flows and Monte Carlo methods in acoustics

    NASA Technical Reports Server (NTRS)

    Haviland, J. K.

    1974-01-01

    The results of two unrelated studies are reported. The first was an investigation of the formulation of the equations for non-uniform unsteady flows, by perturbation of an irrotational flow to obtain the linear Green's equation. The resulting integral equation was found to contain a kernel which could be expressed as the solution of the adjoint flow equation, a linear equation for small perturbations, but with non-constant coefficients determined by the steady flow conditions. It is believed that the non-uniform flow effects may prove important in transonic flutter, and that in such cases the use of doublet-type solutions of the wave equation would prove erroneous. The second task covered an initial investigation into the use of the Monte Carlo method for the solution of acoustical field problems. Computed results are given for a rectangular room problem, and for a problem involving a circular duct with a source located at the closed end.

  7. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    EPA Science Inventory

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  8. Speeding Up and Quantifying Approximation Error in Continuum Quantum Monte Carlo Solid-State Calculations

    NASA Astrophysics Data System (ADS)

    Parker, William David

    Quantum theory has successfully explained the mechanics of much of the microscopic world. However, Schrödinger's equation is difficult to solve for many-particle systems. Mean-field theories such as Hartree-Fock and density functional theory account for much of the total energy of electronic systems but fail on the crucial correlation energy that predicts solid cohesion and material properties. Monte Carlo methods solve differential and integral equations with error independent of the number of dimensions in the problem. Variational Monte Carlo (VMC) applies the variational principle to optimize the wave function used in the Monte Carlo integration of Schrödinger's time-independent equation. Diffusion Monte Carlo (DMC) represents the wave function by electron configurations diffusing stochastically in imaginary time to the ground state. Approximations in VMC and DMC make the problem tractable but introduce error in parameter-controlled and uncontrolled ways. The many-electron wave function is built from single-particle orbitals, combined in a functional form to account for electron exchange and correlation. Plane waves are a convenient basis for the orbitals. However, plane-wave orbitals grow in evaluation cost with basis-set completeness and system size. To speed up the calculation, polynomials approximate the plane-wave sum. Four polynomial methods were tested: Lagrange interpolation, pp-spline interpolation, B-spline interpolation, and B-spline approximation. All of them increase speed by a factor of order the number of particles. B-spline approximation most consistently maintains accuracy in the seven systems tested. However, the polynomials increase the memory needed by a factor of two to eight. B-spline approximation with a separate approximation for the Laplacian of the orbitals increases the memory by a factor of four over plane waves. Polynomial-based orbitals enable larger calculations and careful examination of error introduced by
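
    To make the polynomial-for-plane-wave substitution concrete, here is a 1-D toy (not the thesis's 3-D B-spline machinery): a smooth "orbital" defined as a plane-wave sum is tabulated once on a grid and thereafter evaluated through a cubic spline, so the per-evaluation cost no longer grows with the basis-set size. The coefficients and grid are invented for illustration.

      import numpy as np
      from scipy.interpolate import CubicSpline

      rng = np.random.default_rng(0)
      n_pw = 200
      G = np.arange(n_pw)                       # plane-wave "frequencies"
      c = rng.normal(size=n_pw) / (1.0 + G**2)  # smooth, decaying coefficients

      def orbital_pw(x):                        # cost grows with n_pw
          return np.cos(np.outer(x, G)) @ c

      grid = np.linspace(0.0, 2.0 * np.pi, 1024)
      spline = CubicSpline(grid, orbital_pw(grid))   # one-time tabulation

      x = rng.uniform(0.0, 2.0 * np.pi, 10000)
      print("max interpolation error:",
            np.max(np.abs(spline(x) - orbital_pw(x))))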

  9. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and the efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces the computation time wasted on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the

  10. Quantitative Molecular Thermochemistry Based on Path Integrals

    SciTech Connect

    Glaesemann, K R; Fried, L E

    2005-03-14

    The calculation of thermochemical data requires accurate molecular energies and heat capacities. Traditional methods rely upon the standard harmonic normal mode analysis to calculate the vibrational and rotational contributions. We utilize path integral Monte Carlo (PIMC) to go beyond the harmonic analysis and calculate the vibrational and rotational contributions to ab initio energies. This is an application and extension of a method previously developed in our group.
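
    The following minimal sketch shows the path-integral idea on a 1-D harmonic oscillator (hbar = m = omega = 1) with the primitive energy estimator; it is far from the electronic-structure PIMC used in the paper, and all parameters are illustrative.

      import numpy as np

      # Primitive-estimator PIMC for a 1-D harmonic oscillator.
      # Exact energy at inverse temperature beta: 0.5 / tanh(beta / 2).
      rng = np.random.default_rng(0)
      beta, P = 2.0, 32                 # inverse temperature, number of beads
      tau = beta / P                    # imaginary-time slice
      x = np.zeros(P)                   # ring-polymer bead positions

      def potential(q):
          return 0.5 * q**2

      n_sweeps, step = 20000, 0.5
      energies = []
      for sweep in range(n_sweeps):
          for k in range(P):            # single-bead Metropolis moves
              xk_new = x[k] + rng.uniform(-step, step)
              prev, nxt = x[k - 1], x[(k + 1) % P]
              # change in the primitive action (kinetic springs + potential)
              dS = (((xk_new - prev)**2 + (xk_new - nxt)**2
                     - (x[k] - prev)**2 - (x[k] - nxt)**2) / (2.0 * tau)
                    + tau * (potential(xk_new) - potential(x[k])))
              if rng.random() < np.exp(-dS):
                  x[k] = xk_new
          if sweep > 2000:              # primitive (thermodynamic) estimator
              spring = np.sum((np.roll(x, -1) - x)**2)
              energies.append(P / (2.0 * beta) - spring / (2.0 * tau**2 * P)
                              + potential(x).mean())

      print("PIMC energy:", np.mean(energies),
            " exact:", 0.5 / np.tanh(beta / 2.0))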

  11. Verification of SMART Neutronics Design Methodology by the MCNAP Monte Carlo Code

    SciTech Connect

    Jong Sung Chung; Kyung Jin Shim; Chang Hyo Kim; Chungchan Lee; Sung Quun Zee

    2000-11-12

    SMART is a small advanced integral pressurized water reactor (PWR) of 330 MW(thermal) designed for both electricity generation and seawater desalinization. The CASMO-3/MASTER nuclear analysis system, the design-basis code system for Korean PWR plants, has been employed for the SMART core nuclear design and analysis because the fuel assembly (FA) characteristics and the reactor operating conditions in temperature and pressure are similar to those of PWR plants. However, the SMART FAs are highly poisoned, with more than 20 Al2O3-B4C plus additional Gd2O3/UO2 BPRs per FA. The reactor is operated with control rods inserted. Therefore, the flux and power distributions may become more distorted than those of commercial PWR plants. In addition, SMART must produce power from room temperature up to the hot-power operating condition because it employs nuclear heating from room temperature. This demands reliable predictions of core criticality, shutdown margin, control rod worth, power distributions, and reactivity coefficients at both room temperature and hot operating conditions, yet no such data are available to verify the CASMO-3/MASTER (hereafter MASTER) code system. In the absence of experimental verification data for the SMART neutronics design, the Monte Carlo depletion analysis program MCNAP is adopted as a near-term alternative for qualifying MASTER neutronics design calculations. MCNAP is a personal-computer-based continuous-energy Monte Carlo neutronics analysis program written in the C++ language. We established its qualification by presenting its prediction accuracy on measurements of the Venus critical facilities, core neutronics analysis of a PWR plant in operation, and depletion characteristics of integral burnable absorber FAs of current PWRs. Here, we present a comparison of MASTER and MCNAP neutronics design calculations for SMART and establish the qualification of the MASTER system.

  12. Monte Carlo uncertainty assessment of ultrasonic beam parameters from immersion transducers used for non-destructive testing.

    PubMed

    Alvarenga, A V; Silva, C E R; Costa-Félix, R P B

    2016-07-01

    The uncertainty of ultrasonic beam parameters from non-destructive testing immersion probes was evaluated using the Guide to the expression of uncertainty in measurement (GUM) uncertainty framework and Monte Carlo Method simulation. Parameters such as focal distance, focal length, focal width, and beam divergence were determined according to EN 12668-2. The typical system configuration used during map acquisition comprises a personal computer connected to an oscilloscope, a signal generator, axis movement controllers, and a water bath. The positioning system moves the transducer (or hydrophone) within the water bath. To integrate all system components, a program was developed to control all axes, acquire waterborne signals, and calculate the parameters essential for assessing and calibrating US transducers. All parameters except beam divergence were calculated directly from the raster scans of axial and transversal beam profiles; hence, the positioning system resolution and the step size are the principal sources of uncertainty. Monte Carlo Method simulations were performed by another program that generates pseudo-random samples from the distributions of the quantities involved. In all cases, statistically significant differences were found between the Monte Carlo and GUM methods.
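
    A minimal example of the Monte Carlo side of such a comparison, in the spirit of GUM Supplement 1: inputs are sampled from assigned distributions, pushed through a measurement model, and summarized by a coverage interval. The model and distributions below are invented stand-ins, not the actual EN 12668-2 quantities.

      import numpy as np

      # Monte Carlo propagation of measurement uncertainty.
      rng = np.random.default_rng(1)
      M = 10**6                                 # number of Monte Carlo trials

      a = rng.normal(100.0, 0.5, M)             # input 1: normal, u = 0.5
      b = rng.uniform(9.9, 10.1, M)             # input 2: rectangular (e.g. step size)
      y = a * np.sqrt(1.0 + (b / a)**2)         # measurement model (illustrative)

      y_mean, u_y = y.mean(), y.std(ddof=1)
      lo, hi = np.percentile(y, [2.5, 97.5])    # 95 % coverage interval
      print(f"y = {y_mean:.3f}, u(y) = {u_y:.3f}, 95 % CI = [{lo:.3f}, {hi:.3f}]")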

  13. Implementation of hybrid variance reduction methods in a multigroup Monte Carlo code for deep shielding problems

    SciTech Connect

    Somasundaram, E.; Palmer, T. S.

    2013-07-01

    In this paper, we present the work that has been done to implement variance reduction techniques in Tortilla, a three-dimensional, multigroup Monte Carlo code that works within the framework of the commercial deterministic code Attila. This project aims to develop an integrated hybrid code that seamlessly takes advantage of both deterministic and Monte Carlo methods for deep-shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross-section library, and source definitions. Tortilla can also read importance functions (such as the adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to apply variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods use the results of an adjoint deterministic calculation to bias the particle transport through techniques such as source biasing, survival biasing, transport biasing, and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here. (authors)
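
    As a toy illustration of the CADIS idea (not Tortilla's implementation), the sketch below samples source particles from a biased distribution proportional to the physical source times an assumed adjoint importance, and compensates with starting weights so the estimate stays unbiased; with the exact adjoint, every history carries the same score. The source and adjoint profiles are made-up stand-ins for Attila-computed quantities.

      import numpy as np

      rng = np.random.default_rng(2)
      x = np.linspace(0.0, 10.0, 100)
      q = np.exp(-x)                     # physical source density (assumed)
      phi_adj = np.exp(0.5 * x)          # adjoint flux ~ importance to detector
      q /= q.sum()

      q_hat = q * phi_adj
      q_hat /= q_hat.sum()               # biased source PDF
      w = q / q_hat                      # per-cell starting weights

      idx = rng.choice(len(x), size=100000, p=q_hat)
      score = w[idx] * phi_adj[idx]      # toy detector response
      # With the exact adjoint, w * phi_adj is constant: zero variance.
      print("biased estimate:", score.mean(),
            " analog value:", (q * phi_adj).sum())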

  14. An Investigation of the Performance of the Unified Monte Carlo Method of Neutron Cross Section Data Evaluation

    SciTech Connect

    Capote, Roberto; Smith, Donald L.

    2008-12-15

    The Unified Monte Carlo method (UMC) has been suggested to avoid certain limitations and approximations inherent in the well-known Generalized Least Squares (GLS) method of nuclear data evaluation. This contribution reports on an investigation of the performance of the UMC method in comparison with the GLS method. This is accomplished by applying both methods to simple examples with few input values, selected to explore various features of the evaluation process that impact the quality of an evaluation. Among the issues explored are: i) convergence of UMC results with the number of Monte Carlo histories and the ranges of sampled values; ii) a comparison of Monte Carlo sampling using the Metropolis scheme and a brute-force approach; iii) the effects of large data discrepancies; iv) the effects of large data uncertainties; v) the effects of strong or weak model or experimental data correlations; and vi) the impact of ratio data and integral data. Comparisons are also made of the evaluated results for these examples when the input values are first transformed to comparable logarithmic values prior to performing the evaluation. Some general conclusions applicable to more realistic evaluation exercises are offered.
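
    The Metropolis-versus-brute-force comparison in item ii) can be pictured with a toy "posterior" over one parameter; both samplers below should agree on the posterior mean. Everything here is an invented stand-in for the actual evaluation problem.

      import numpy as np

      rng = np.random.default_rng(11)

      def log_post(x):                    # toy evaluation posterior
          return -0.5 * ((x - 1.0) / 0.2)**2

      # Brute force: broad uniform samples weighted by the likelihood
      xs = rng.uniform(-5.0, 5.0, 200000)
      w = np.exp(log_post(xs))
      w /= w.sum()
      print("brute-force mean:", np.sum(w * xs))

      # Metropolis random walk targeting the same posterior
      x, chain = 0.0, []
      for _ in range(200000):
          prop = x + rng.normal(0.0, 0.5)
          if np.log(rng.random()) < log_post(prop) - log_post(x):
              x = prop
          chain.append(x)
      print("Metropolis mean:", np.mean(chain[20000:]))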

  15. Parallel Monte Carlo Particle Transport and the Quality of Random Number Generators: How Good is Good Enough?

    SciTech Connect

    Procassini, R J; Beck, B R

    2004-12-07

    It might be assumed that use of a "high-quality" random number generator (RNG), producing a sequence of "pseudo random" numbers with a "long" repetition period, is crucial for producing unbiased results in Monte Carlo particle transport simulations. While several theoretical and empirical tests have been devised to check the quality (randomness and period) of an RNG, for many applications it is not clear what level of RNG quality is required to produce unbiased results. This paper explores the issue of RNG quality in the context of parallel, Monte Carlo transport simulations in order to determine how "good" is "good enough". This study employs the MERCURY Monte Carlo code, which incorporates the CNPRNG library for the generation of pseudo-random numbers via linear congruential generator (LCG) algorithms. The paper outlines the usage of random numbers during parallel MERCURY simulations, and then describes the source and criticality transport simulations which comprise the empirical basis of this study. A series of calculations for each test problem in which the quality of the RNG (period of the LCG) is varied provides the empirical basis for determining the minimum repetition period which may be employed without producing a bias in the mean integrated results.
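
    For reference, a linear congruential generator is only a few lines. The sketch below uses textbook multiplier and increment values (not CNPRNG's actual parameters) and deliberately shortens the modulus so the finite period is quick to observe.

      # state' = (a * state + c) mod m; the period is at most m.
      def lcg(seed, a=1664525, c=1013904223, m=2**32):
          state = seed
          while True:
              state = (a * state + c) % m
              yield state / m                  # uniform in [0, 1)

      seen = set()
      for i, u in enumerate(lcg(seed=12345, m=2**16)):
          if u in seen:                        # period reached: values recycle
              print("sequence repeats after", i, "draws")
              break
          seen.add(u)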

  16. Improving dynamical lattice QCD simulations through integrator tuning using Poisson brackets and a force-gradient integrator

    SciTech Connect

    Clark, M. A.; Joo, Balint; Kennedy, A. D.; Silva, P. J.

    2011-10-01

    We show how the integrators used for the molecular dynamics step of the Hybrid Monte Carlo algorithm can be further improved. These integrators not only approximately conserve some Hamiltonian H but exactly conserve a nearby shadow Hamiltonian H-tilde. This property allows for a new tuning method for the molecular dynamics integrator and also for a new class of integrators (force-gradient integrators) which is expected to significantly reduce the computational cost of future large-scale gauge field ensemble generation.
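
    The molecular dynamics step in question is typically a leapfrog integration of Hamilton's equations followed by a Metropolis test. A minimal sketch on a toy Gaussian action, with invented step sizes, shows the small, bounded energy error that the shadow-Hamiltonian argument explains.

      import numpy as np

      def grad_S(q):
          return q                               # dS/dq for S(q) = q^2 / 2

      def leapfrog(q, p, eps, n_steps):
          p = p - 0.5 * eps * grad_S(q)          # half-step momentum kick
          for _ in range(n_steps - 1):
              q = q + eps * p                    # full-step drift
              p = p - eps * grad_S(q)            # full-step kick
          q = q + eps * p
          p = p - 0.5 * eps * grad_S(q)          # final half kick
          return q, p

      rng = np.random.default_rng(3)
      q = 0.0
      for _ in range(5):
          p = rng.normal()
          H0 = 0.5 * p**2 + 0.5 * q**2
          q_new, p_new = leapfrog(q, p, eps=0.2, n_steps=25)
          H1 = 0.5 * p_new**2 + 0.5 * q_new**2
          if rng.random() < np.exp(H0 - H1):     # Metropolis accept/reject
              q = q_new
          print(f"dH = {H1 - H0:+.2e}")          # stays small: shadow Hamiltonian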

  17. On the forward-backward-in-time approach for Monte Carlo solution of Parker's transport equation: One-dimensional case

    NASA Astrophysics Data System (ADS)

    Bobik, P.; Boschini, M. J.; Della Torre, S.; Gervasi, M.; Grandi, D.; La Vacca, G.; Pensotti, S.; Putis, M.; Rancoita, P. G.; Rozza, D.; Tacconi, M.; Zannoni, M.

    2016-05-01

    The propagation of cosmic rays inside the heliosphere is well described by a transport equation introduced by Parker in 1965. Several approaches have been followed in the past to solve this equation. Recently, a Monte Carlo approach has become widely used by virtue of its advantages over other numerical methods. In this approach the transport equation is associated with a fully equivalent set of stochastic differential equations (SDEs). This set is used to describe the stochastic path of a quasi-particle from a source, e.g., the interstellar space, to a specific target, e.g., a detector at Earth. We present a comparison of forward-in-time and backward-in-time methods for solving the cosmic-ray transport equation in the heliosphere. The Parker equation and the related sets of SDEs in their several formulations are treated in this paper. For the sake of clarity, this work is focused on the one-dimensional solutions. Results were compared with an alternative numerical solution, namely the Crank-Nicolson method, specifically developed for the case under study. The methods presented are fully consistent with each other for energies greater than 400 MeV. The comparison between the stochastic integrations and Crank-Nicolson allows us to estimate the systematic uncertainties of the Monte Carlo methods. The forward-in-time stochastic integration method showed a systematic uncertainty of <5%, while the backward-in-time method showed a systematic uncertainty of <1% in the studied energy range.

  18. Fast Perturbation Monte Carlo simulation for heterogeneous medium and its utilization in functional near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Song, Y. M.; Li, J. W.; Cai, F. H.

    2016-01-01

    In near-infrared spectroscopy, a fiber optic probe is usually applied to deliver light into the bio-sample and detect the spatially and temporally resolved optical signal re-emitted from the turbid medium. In this point-source-point-detector measurement system, the seed-based perturbation Monte Carlo (pMC) method is an effective model for performing the forward simulation. In our study, integrating parallel computing on graphics processing units (GPU) into the existing seed-based pMC method substantially accelerates the original simulation. The GPU-based seed pMC provides an excellent solution for the application of fiber optic probes in both homogeneous and heterogeneous turbid media.

  19. Variational Monte Carlo study of a chiral spin liquid in the extended Heisenberg model on the kagome lattice

    NASA Astrophysics Data System (ADS)

    Hu, Wen-Jun; Zhu, Wei; Zhang, Yi; Gong, Shoushu; Becca, Federico; Sheng, D. N.

    2015-01-01

    We investigate the extended Heisenberg model on the kagome lattice by using Gutzwiller projected fermionic states and the variational Monte Carlo technique. In particular, when both second- and third-neighbor superexchanges are considered, we find that a gapped spin liquid described by nontrivial magnetic fluxes and long-range chiral-chiral correlations is energetically favored compared to the gapless U(1) Dirac state. Furthermore, the topological Chern number, obtained by integrating the Berry curvature, and the degeneracy of the ground state, by constructing linearly independent states, lead us to identify this flux state as the chiral spin liquid with a C = 1/2 fractionalized Chern number.

  20. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    NASA Astrophysics Data System (ADS)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.

    2016-02-01

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  1. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  2. Monte Carlo modeling of ultrasound probes for image guided radiotherapy

    SciTech Connect

    Bazalova-Carter, Magdalena; Schlosser, Jeffrey; Chen, Josephine; Hristov, Dimitre

    2015-10-15

    Purpose: To build Monte Carlo (MC) models of two ultrasound (US) probes and to quantify the effect of beam attenuation due to the US probes for radiation therapy delivered under real-time US image guidance. Methods: MC models of two Philips US probes, an X6-1 matrix-array transducer and a C5-2 curved-array transducer, were built based on their megavoltage (MV) CT images acquired in a Tomotherapy machine with a 3.5 MV beam in the EGSnrc, BEAMnrc, and DOSXYZnrc codes. Mass densities in the probes were assigned based on an electron density calibration phantom consisting of cylinders with mass densities between 0.2 and 8.0 g/cm3. Beam attenuation due to the US probes in horizontal (for both probes) and vertical (for the X6-1 probe) orientation was measured in a solid water phantom for 6 and 15 MV (15 × 15) cm2 beams with a 2D ionization chamber array and radiographic films at 5 cm depth. The MC models of the US probes were validated by comparison of the measured dose distributions and dose distributions predicted by MC. Attenuation of depth dose in the (15 × 15) cm2 beams and small circular beams due to the presence of the probes was assessed by means of MC simulations. Results: The 3.5 MV CT number to mass density calibration curve was found to be linear with R2 > 0.99. The maximum mass densities in the X6-1 and C5-2 probes were found to be 4.8 and 5.2 g/cm3, respectively. Dose profile differences between MC simulations and measurements of less than 3% for US probes in horizontal orientation were found, with the exception of the penumbra region. The largest 6% dose difference was observed in dose profiles of the X6-1 probe placed in vertical orientation, which was attributed to inadequate modeling of the probe cable. Gamma analysis of the simulated and measured doses showed that over 96% of measurement points passed the 3%/3 mm criteria for both probes placed in horizontal orientation and for the X6-1 probe in vertical orientation. The

  3. Monte Carlo modeling of ultrasound probes for image guided radiotherapy

    PubMed Central

    Bazalova-Carter, Magdalena; Schlosser, Jeffrey; Chen, Josephine; Hristov, Dimitre

    2015-01-01

    Purpose: To build Monte Carlo (MC) models of two ultrasound (US) probes and to quantify the effect of beam attenuation due to the US probes for radiation therapy delivered under real-time US image guidance. Methods: MC models of two Philips US probes, an X6-1 matrix-array transducer and a C5-2 curved-array transducer, were built based on their megavoltage (MV) CT images acquired in a Tomotherapy machine with a 3.5 MV beam in the EGSnrc, BEAMnrc, and DOSXYZnrc codes. Mass densities in the probes were assigned based on an electron density calibration phantom consisting of cylinders with mass densities between 0.2 and 8.0 g/cm3. Beam attenuation due to the US probes in horizontal (for both probes) and vertical (for the X6-1 probe) orientation was measured in a solid water phantom for 6 and 15 MV (15 × 15) cm2 beams with a 2D ionization chamber array and radiographic films at 5 cm depth. The MC models of the US probes were validated by comparison of the measured dose distributions and dose distributions predicted by MC. Attenuation of depth dose in the (15 × 15) cm2 beams and small circular beams due to the presence of the probes was assessed by means of MC simulations. Results: The 3.5 MV CT number to mass density calibration curve was found to be linear with R2 > 0.99. The maximum mass densities in the X6-1 and C5-2 probes were found to be 4.8 and 5.2 g/cm3, respectively. Dose profile differences between MC simulations and measurements of less than 3% for US probes in horizontal orientation were found, with the exception of the penumbra region. The largest 6% dose difference was observed in dose profiles of the X6-1 probe placed in vertical orientation, which was attributed to inadequate modeling of the probe cable. Gamma analysis of the simulated and measured doses showed that over 96% of measurement points passed the 3%/3 mm criteria for both probes placed in horizontal orientation and for the X6-1 probe in vertical orientation. The X6-1 probe in vertical

  4. Efficient, Automated Monte Carlo Methods for Radiation Transport.

    PubMed

    Kong, Rong; Ambrose, Martin; Spanier, Jerome

    2008-11-20

    Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. If still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872

  5. Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.

    PubMed

    Leigh, Jessica W; Bryant, David

    2015-09-01

    Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which the Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments, rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters gets too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology. PMID:26012871
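
    The key idea, sampling parameter values rather than sweeping a grid, can be stated in a few lines; the "experiment" below is a placeholder function and the budget is arbitrary.

      import numpy as np

      # With d parameters at k levels each, a full grid costs k**d runs;
      # random sampling keeps the budget fixed regardless of d.
      rng = np.random.default_rng(4)

      def run_experiment(params):        # placeholder for a costly simulation
          return np.sum(np.sin(params))

      d, budget = 10, 200                # 5 levels each would already need
                                         # 5**10 ~ 9.8e6 grid runs
      samples = rng.uniform(0.0, 1.0, size=(budget, d))   # sampled design
      results = np.array([run_experiment(p) for p in samples])
      print("mean response over sampled designs:", results.mean())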

  6. Sign problem and Monte Carlo calculations beyond Lefschetz thimbles

    DOE PAGES

    Alexandru, Andrei; Basar, Gokce; Bedaque, Paulo F.; Ridgway, Gregory W.; Warrington, Neill C.

    2016-05-10

    We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action (“Lefschetz thimble”). We describe a family of such manifolds that interpolate between the tangent space at one critical point (where the sign problem is milder compared to the real plane but in some cases still severe) and the union of relevant thimbles (where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling). We exemplify this approach using a simple 0+1 dimensional fermion model previously used in sign problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.

  7. Advanced interacting sequential Monte Carlo sampling for inverse scattering

    NASA Astrophysics Data System (ADS)

    Giraud, F.; Minvielle, P.; Del Moral, P.

    2013-09-01

    The following electromagnetism (EM) inverse problem is addressed. It consists of estimating the local radioelectric properties of materials covering an object from global EM scattering measurements at various incidences and wave frequencies. This large-scale, ill-posed inverse problem is explored by intensive exploitation of an efficient 2D Maxwell solver distributed on high performance computing machines. Applied to a large training data set, a statistical analysis reduces the problem to a simpler probabilistic metamodel, on which Bayesian inference can be performed. Considering the radioelectric properties as a hidden dynamic stochastic process that evolves with frequency, it is shown how advanced Markov chain Monte Carlo methods, known as sequential Monte Carlo or interacting particle methods, can take advantage of this structure and provide local EM property estimates.
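
    A stripped-down sequential Monte Carlo (particle filter) loop of the propagate-weight-resample kind referred to above, on an invented 1-D state that evolves across "frequencies"; it only shows the mechanics, not the paper's EM metamodel.

      import numpy as np

      rng = np.random.default_rng(10)
      n_particles, n_freqs = 2000, 20

      truth, obs = 0.0, []
      for _ in range(n_freqs):                 # simulate truth + noisy data
          truth += rng.normal(0.0, 0.1)
          obs.append(truth + rng.normal(0.0, 0.3))

      particles = rng.normal(0.0, 1.0, n_particles)
      for y in obs:
          particles += rng.normal(0.0, 0.1, n_particles)     # propagate
          w = np.exp(-0.5 * ((y - particles) / 0.3)**2)      # weight by likelihood
          w /= w.sum()
          idx = rng.choice(n_particles, n_particles, p=w)    # resample
          particles = particles[idx]

      print("final estimate:", particles.mean(), " truth:", truth)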

  8. Mesh-based weight window approach for Monte Carlo simulation

    SciTech Connect

    Liu, L.; Gardner, R.P.

    1997-12-01

    The Monte Carlo method has been increasingly used to solve particle transport problems. Statistical fluctuation from random sampling is the major factor limiting its application. To obtain the desired precision, variance reduction techniques are indispensable for most practical problems. Among the various variance reduction techniques, the weight window method has proved to be one of the most general, powerful, and robust. The method is implemented in the current MCNP code. An importance map is estimated during a regular Monte Carlo run, and the map is then used in the subsequent run for splitting and Russian roulette games. The major drawback of this weight window method is its lack of user-friendliness. It normally requires users to divide large geometric cells into smaller ones by introducing additional surfaces to ensure an acceptable spatial resolution of the importance map. In this paper, we present a new weight window approach to overcome this drawback.
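
    The weight-window game itself is simple to state: split particles whose weight exceeds the upper bound, play Russian roulette on those below the lower bound, and leave the rest alone. The sketch below uses invented bounds; in MCNP they would come from the importance map. Both branches preserve the expected weight, so the tally stays unbiased.

      import numpy as np

      rng = np.random.default_rng(5)

      def apply_weight_window(weight, w_low, w_up, w_survive):
          if weight > w_up:                    # split into lower-weight copies
              n = int(np.ceil(weight / w_survive))
              return [weight / n] * n
          if weight < w_low:                   # Russian roulette
              if rng.random() < weight / w_survive:
                  return [w_survive]           # survives with boosted weight
              return []                        # killed
          return [weight]                      # inside the window: unchanged

      # Example: a batch of particle weights passing through one mesh cell
      for w in [5.0, 0.01, 0.6, 2.5]:
          print(w, "->", apply_weight_window(w, w_low=0.25, w_up=2.0,
                                             w_survive=1.0))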

  9. Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.

    PubMed

    Leigh, Jessica W; Bryant, David

    2015-09-01

    Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which the Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments, rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters gets too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology.

  10. Estimation of beryllium ground state energy by Monte Carlo simulation

    SciTech Connect

    Kabir, K. M. Ariful; Halder, Amal

    2015-05-15

    Quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schrödinger equation for atoms, molecules, solids, and a variety of model systems. Using the variational Monte Carlo method we have calculated the ground state energy of the beryllium atom. Our calculation is based on a modified four-parameter trial wave function, which leads to a good result compared with the few-parameter trial wave functions presented before. Based on random numbers we can generate a large sample of electron locations to estimate the ground state energy of beryllium. Our calculation gives a good estimate of the ground state energy of the beryllium atom compared with the corresponding exact data.
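
    For a self-contained picture of variational Monte Carlo, the sketch below treats hydrogen rather than beryllium, since its one-parameter trial wave function psi = exp(-a*r) has a closed-form local energy; a = 1 recovers the exact ground state energy of -0.5 hartree. All step sizes and sample counts are illustrative.

      import numpy as np

      # Local energy (atomic units): E_L = -a**2/2 + (a - 1)/r.
      rng = np.random.default_rng(6)

      def vmc_energy(a, n_steps=100000, step=0.5, burn_in=5000):
          pos = np.array([1.0, 0.0, 0.0])
          energies = []
          for i in range(n_steps):
              trial = pos + rng.uniform(-step, step, 3)
              r_old, r_new = np.linalg.norm(pos), np.linalg.norm(trial)
              # Metropolis test on |psi|^2 = exp(-2*a*r)
              if rng.random() < np.exp(-2.0 * a * (r_new - r_old)):
                  pos, r_old = trial, r_new
              if i >= burn_in:
                  energies.append(-0.5 * a**2 + (a - 1.0) / r_old)
          return np.mean(energies)

      print("E(a=0.8) =", vmc_energy(0.8))     # variational: lies above -0.5
      print("E(a=1.0) =", vmc_energy(1.0))     # exact ground state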

  11. Visibility assessment: Monte Carlo characterization of temporal variability.

    SciTech Connect

    Laulainen, N.; Shannon, J.; Trexler, E. C., Jr.

    1997-12-12

    Current techniques for assessing the benefits of certain anthropogenic emission reductions are largely influenced by limitations in emissions data and atmospheric modeling capability, and by the highly variable nature of meteorology. These data and modeling limitations are likely to continue for the foreseeable future, during which time important strategic decisions need to be made. Statistical atmospheric quality data and apportionment techniques are used in Monte Carlo models to offset serious shortfalls in emissions, entrainment, topography, statistical meteorology data, and atmospheric modeling. This paper describes the evolution of Department of Energy (DOE) Monte Carlo-based assessment models and the development of statistical inputs. A companion paper describes the techniques used to develop the apportionment factors used in the assessment models.

  12. Bayesian Monte Carlo method for nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Koning, A. J.

    2015-12-01

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which makes it possible to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect with the various schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an EXFOR-based weight.
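
    The weighting step can be sketched in miniature: draw parameters from a prior, compute a model prediction, and weight each sample by the likelihood of an experimental point. The "model" and datum below are invented stand-ins for TALYS and EXFOR.

      import numpy as np

      rng = np.random.default_rng(7)
      n_samples = 50000

      theta = rng.normal(1.0, 0.3, n_samples)   # prior over a model parameter
      model = 10.0 * theta                      # toy model prediction
      data, sigma = 10.5, 0.8                   # one experimental point + error

      weights = np.exp(-0.5 * ((model - data) / sigma)**2)
      weights /= weights.sum()                  # likelihood-based weights

      post_mean = np.sum(weights * theta)
      post_var = np.sum(weights * (theta - post_mean)**2)
      print(f"posterior: {post_mean:.3f} +/- {np.sqrt(post_var):.3f}")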

  13. Nuclear pairing within a configuration-space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Lingle, Mark; Volya, Alexander

    2015-06-01

    Pairing correlations in nuclei play a decisive role in determining nuclear drip lines, binding energies, and many collective properties. In this work a new configuration-space Monte Carlo (CSMC) method for treating nuclear pairing correlations is developed, implemented, and demonstrated. In CSMC the Hamiltonian matrix is stochastically generated in a Krylov subspace, resulting in a Monte Carlo version of Lanczos-like diagonalization. The advantages of this approach over other techniques are discussed; the absence of the fermionic sign problem, the probabilistic interpretation of quantum-mechanical amplitudes, and the ability to handle truly large-scale problems with defined precision and error control are noteworthy merits of CSMC. The features of our CSMC approach are shown using models and realistic examples. Special attention is given to difficult limits: situations with nonconstant pairing strengths, cases with nearly degenerate excited states, limits when pairing correlations in finite systems are weak, and problems where the relevant configuration space is large.

  14. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wu, Keyi; Li, Jinglai

    2016-09-01

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over standard Monte Carlo methods.

  15. Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds

    NASA Astrophysics Data System (ADS)

    Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.

    2012-11-01

    A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength-infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper-atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on the fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.

  16. Monte Carlo Methods in ICF (LIRPP Vol. 13)

    NASA Astrophysics Data System (ADS)

    Zimmerman, George B.

    2016-10-01

    Monte Carlo methods appropriate for simulating the transport of x-rays, neutrons, ions, and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but its efficiency can be improved by angularly biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  17. Monte Carlo Simulations of Arterial Imaging with Optical Coherence Tomography

    SciTech Connect

    Amendt, P.; Estabrook, K.; Everett, M.; London, R.A.; Maitland, D.; Zimmerman, G.; Colston, B.; da Silva, L.; Sathyam, U.

    2000-02-01

    The laser-tissue interaction code LATIS [London et al., Appl. Optics 36, 9068 (1998)] is used to analyze photon scattering histories representative of optical coherence tomography (OCT) experiments performed at Lawrence Livermore National Laboratory. Monte Carlo photonics with Henyey-Greenstein anisotropic scattering is implemented and used to simulate signal discrimination of intravascular structure. An analytic model is developed and used to obtain a scaling-law relation for optimization of the OCT signal and to validate the Monte Carlo photonics. The appropriateness of the Henyey-Greenstein phase function is studied by direct comparison with more detailed Mie scattering theory using an ensemble of spherical dielectric scatterers. Modest differences are found between the two prescriptions for describing photon angular scattering in tissue. In particular, the Mie scattering phase functions provide less overall reflectance signal but more signal contrast compared to the Henyey-Greenstein formulation.
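
    Sampling the Henyey-Greenstein phase function is a standard inverse-CDF calculation; the sketch below draws scattering-angle cosines for a tissue-like anisotropy g and checks that their mean reproduces g (a defining property of the HG function). The parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(8)

      def sample_hg_costheta(g, n):
          xi = rng.random(n)
          if abs(g) < 1e-6:
              return 2.0 * xi - 1.0            # isotropic limit
          s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
          return (1.0 + g * g - s * s) / (2.0 * g)

      mu = sample_hg_costheta(g=0.9, n=10**6)  # strongly forward-peaked
      print("sampled <cos(theta)> =", mu.mean(), "(should be close to g)")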

  18. Monte Carlo Simulations on a 9-node PC Cluster

    NASA Astrophysics Data System (ADS)

    Gouriou, J.

    Monte Carlo simulation methods are frequently used in the fields of medical physics, dosimetry, and metrology of ionising radiation. Nevertheless, the main drawback of this technique is that it is computationally slow, because the statistical uncertainty of the result improves only as the square root of the computational time. We present a method which reduces the effective running time by a factor of 10 to 20. In practice, the aim was to reduce the calculation time of the LNHB metrological applications from several weeks to a few days. This approach uses a PC cluster running the Linux operating system and the PVM parallel library (version 3.4). The Monte Carlo codes EGS4, MCNP, and PENELOPE have been implemented on this platform, the last two adapted to run under the PVM environment. The maximum observed speedup ranges from a factor of 13 to 18, depending on the code and the problem simulated.

  19. Analytical band Monte Carlo analysis of electron transport in silicene

    NASA Astrophysics Data System (ADS)

    Yeoh, K. H.; Ong, D. S.; Ooi, C. H. Raymond; Yong, T. K.; Lim, S. K.

    2016-06-01

    An analytical band Monte Carlo (AMC) model with linear energy band dispersion has been developed to study electron transport in suspended silicene and in silicene on an aluminium oxide (Al2O3) substrate. We calibrated our model against full band Monte Carlo (FMC) results by matching the velocity-field curve. Using this model, we find that the collective effects of charge impurity scattering and surface optical phonon scattering can degrade the electron mobility to about 400 cm2 V-1 s-1, beyond which it is less sensitive to changes in the substrate charge impurity density and surface optical phonons. We also find that the further reduction of mobility to ~100 cm2 V-1 s-1 demonstrated experimentally by Tao et al (2015 Nat. Nanotechnol. 10 227) can only be explained by the renormalization of the Fermi velocity due to interaction with the Al2O3 substrate.

  20. Computer Monte Carlo simulation in quantitative resource estimation

    USGS Publications Warehouse

    Root, D.H.; Menzie, W.D.; Scott, W.A.

    1992-01-01

    The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
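
    Step (3) is easy to sketch: draw a number of deposits, draw correlated grades and tonnages for each, and accumulate contained metal. The Poisson mean, lognormal parameters, and grade-tonnage correlation below are invented for illustration, not the Alaska or Seward Peninsula estimates.

      import numpy as np

      rng = np.random.default_rng(9)
      n_trials = 20000
      totals = np.zeros(n_trials)

      for i in range(n_trials):
          n_dep = rng.poisson(3.0)             # number of deposits this trial
          if n_dep == 0:
              continue                         # zero deposits -> zero metal
          # correlated lognormal tonnage (Mt) and grade (%): larger deposits
          # tend to be lower grade (negative correlation in log space)
          z = rng.multivariate_normal([2.0, -1.0],
                                      [[1.0, -0.4], [-0.4, 0.25]], n_dep)
          tonnage, grade = np.exp(z[:, 0]), np.exp(z[:, 1])
          totals[i] = np.sum(tonnage * grade / 100.0)   # contained metal, Mt

      print("10th/50th/90th percentiles of contained metal (Mt):",
            np.percentile(totals, [10, 50, 90]))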