Note: This page contains sample records for the topic monte carlo stochastic from Science.gov.
While these samples are representative of the content of Science.gov,
they are neither comprehensive nor the most current set.
We encourage you to perform a real-time search of Science.gov
to obtain the most current and comprehensive results. Last update: November 12, 2013.

Quantum Monte Carlo (QMC) is an extremely powerful method for treating many-body systems. Usually quantum Monte Carlo has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment such as a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from a grid-based finite-difference method for every walker at every step is infeasible. We demonstrate an alternative approach that solves the Poisson equation by a classical Monte Carlo method within the overall quantum Monte Carlo scheme. We have developed a modified "Walk On Spheres" algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations typical of quantum Monte Carlo algorithms. This stochastically obtained potential can be easily incorporated within popular quantum Monte Carlo techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor.
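The Walk on Spheres idea can be sketched for the simplest setting, the Laplace equation on the unit disk; the paper's modified Green's-function version for Poisson sources is more involved, and the function names and tolerances below are illustrative only:

```python
import math
import random

def walk_on_spheres(x, y, g, eps=1e-3, rng=random):
    """Estimate u(x, y) for Laplace's equation on the unit disk with
    boundary data g: jump to a uniform point on the largest circle
    inside the domain until within eps of the boundary."""
    while True:
        r = 1.0 - math.hypot(x, y)       # distance to the disk boundary
        if r < eps:
            return g(x, y)               # score the boundary value
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)

def wos_estimate(x, y, g, n=20000, seed=1):
    rng = random.Random(seed)
    return sum(walk_on_spheres(x, y, g, rng=rng) for _ in range(n)) / n
```

For the harmonic function u(x, y) = x, i.e., boundary data g(x, y) = x, the estimate at (0.3, 0) converges to 0.3, since the walk exits on the boundary with the harmonic measure of the starting point.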

A method of stochastically modeling groundwater travel times is described. This method is based on a Monte Carlo technique, which is used to generate a suite of random spatial fields. These fields are subsequently input to the groundwater flow and groundwater travel time equations. Uncertain inputs to these equations can be: (1) transmissivity (or hydraulic conductivity); (2) effective thickness (or effective porosity);

A multilevel Monte Carlo (MLMC) method for mean-square stable stochastic differential equations with multiple scales is proposed. For such problems, which we call stiff, the performance of MLMC methods based on classical explicit integrators deteriorates because the time-step restriction needed to resolve the fastest scales prevents exploiting all the levels of the MLMC approach. We show that by switching to explicit stabilized stochastic methods and balancing the stabilization procedure simultaneously with the hierarchical sampling strategy of MLMC methods, the computational cost for stiff systems is significantly reduced, while the algorithm remains fully explicit and easy to implement. Numerical experiments on linear and nonlinear stochastic differential equations and on a stochastic partial differential equation illustrate the performance of the stabilized MLMC method and corroborate our theoretical findings.
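The multilevel idea itself (not the paper's stabilized integrators) can be sketched with classic Giles-style MLMC: couple a fine and a coarse Euler-Maruyama path level by level, and spend most samples on the cheap coarse levels. Geometric Brownian motion with illustrative parameters serves as the test equation, since E[X_T] = X0·exp(aT) is known:

```python
import math
import random

def mlmc_level(l, n, a=0.05, b=0.2, T=1.0, x0=1.0, rng=random):
    """Mean level-l correction for E[X_T] of GBM dX = aX dt + bX dW,
    coupling an Euler path with 2**l steps to one with 2**(l-1) steps."""
    nf = 2 ** l
    hf = T / nf
    total = 0.0
    for _ in range(n):
        xf = xc = x0
        dwc = 0.0
        for i in range(nf):
            dw = rng.gauss(0.0, math.sqrt(hf))
            xf += a * xf * hf + b * xf * dw
            dwc += dw
            if l > 0 and (i + 1) % 2 == 0:   # coarse step sums two fine increments
                xc += a * xc * (2.0 * hf) + b * xc * dwc
                dwc = 0.0
        total += xf - (xc if l > 0 else 0.0)
    return total / n

def mlmc_estimate(levels=4, n0=20000, seed=7):
    rng = random.Random(seed)
    # geometrically fewer samples on the finer, lower-variance correction levels
    return sum(mlmc_level(l, max(n0 // 4 ** l, 200), rng=rng) for l in range(levels))
```

The level sums telescope to the fine-level expectation, which is where the stiff time-step restriction discussed in the abstract would bite if the finest level had to resolve a fast scale.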

In the recently proposed full configuration interaction quantum Monte Carlo (FCIQMC) [1,2], the ground state is projected out stochastically, using a population of walkers, each of which represents a basis state in the Hilbert space spanned by Slater determinants. The infamous fermion sign problem manifests itself in the fact that walkers of either sign can be spawned on a given determinant. We propose an improvement on this method in the form of a hybrid stochastic/deterministic technique, which we expect will improve the efficiency of the algorithm by ameliorating the sign problem. We test the method on atoms and molecules, e.g., carbon, the carbon dimer, the N2 molecule, and stretched N2. [1] Fermion Monte Carlo without fixed nodes: a game of life, death and annihilation in Slater determinant space. George Booth, Alex Thom, Ali Alavi. J. Chem. Phys. 131, 050106 (2009). [2] Survival of the fittest: accelerating convergence in full configuration-interaction quantum Monte Carlo. Deidre Cleland, George Booth, and Ali Alavi. J. Chem. Phys. 132, 041103 (2010).

Holmes, Adam; Petruzielo, Frank; Khadilkar, Mihir; Changlani, Hitesh; Nightingale, M. P.; Umrigar, C. J.

This paper presents an accurate and efficient approach to optimizing radiation transport simulations in a stochastic medium of high heterogeneity, such as Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest-neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of stochastic media packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next-sphere boundary search in the radiation transport procedure. To investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented in an in-house continuous-energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with MCNP benchmark calculations for the same problem indicates that the new methods achieve considerably higher computational efficiency. (authors)
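The Random Sequential Addition step, minus the paper's nearest-neighbor acceleration, can be sketched as naive rejection packing of equal disks in a unit square; the radius, counts, and O(n) overlap check here are illustrative stand-ins for the optimized 3-D sphere version:

```python
import random

def rsa_disks(radius, n_target, max_tries=200000, seed=4):
    """Random Sequential Addition of equal disks in the unit square,
    with a naive O(n) overlap check per trial insertion."""
    rng = random.Random(seed)
    centers = []
    tries = 0
    while len(centers) < n_target and tries < max_tries:
        tries += 1
        x = rng.uniform(radius, 1.0 - radius)
        y = rng.uniform(radius, 1.0 - radius)
        if all((x - a) ** 2 + (y - b) ** 2 >= (2.0 * radius) ** 2
               for a, b in centers):
            centers.append((x, y))
    return centers
```

The paper's contribution is precisely replacing the all-pairs overlap test with a fast nearest-neighbor search, which matters once the packing fraction approaches the RSA jamming limit.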

Liang, C.; Ji, W. [Dept. of Mechanical, Aerospace and Nuclear Engineering, Rensselaer Polytechnic Inst., 110 8th street, Troy, NY (United States)

A model of charged particles in turbulent clumps has been investigated using Monte Carlo simulations. In the white noise case, the results of the present simulations on diffusion confirm the scaling property proposed by one of the authors (M.S.). In the colored noise case, an efficient method to generate correlated fields is presented, and thereby the role of colored noise

Tissue-weighting factors used in the calculation of the effective dose have undergone revision in the light of new data from the atomic bomb survivors. A Monte Carlo simulation was designed to evaluate the magnitude of stochastic errors in the derived factors. Results demonstrate substantial variability in the suggested factors. 19 refs., 2 figs., 4 tabs.

Leslie, W.D. [St. Boniface General Hospital, Winnipeg (Canada)

A method employing decomposition techniques and Monte Carlo sampling (importance sampling) to solve stochastic linear programs is described and applied to capacity-expansion planning problems of electric utilities. The author considers uncertain availability of generators and transmission lines and uncertain demand. Numerical results are presented.

We study the applicability of Proper Orthogonal Decomposition (POD) techniques to reduce the computational burden associated with Monte Carlo (MC) iterations, which are typically used in the solution of the stochastic groundwater flow problem. We consider a two-dimensional saturated flow scenario, depicting steady-state groundwater flow around a pumping well within an aquifer characterized by deterministic distributions of transmissivity and boundary

The stochastic volatility (SV) model is one of the volatility models that infer the latent volatility of asset returns. Bayesian inference for the SV model is performed with the hybrid Monte Carlo (HMC) algorithm, which is superior to other Markov chain Monte Carlo methods in sampling volatility variables. We perform HMC simulations of the SV model for two liquid stock returns traded on the Tokyo Stock Exchange and measure the volatilities of those stock returns. We then calculate the accuracy of the volatility measurement using the realized volatility as a proxy for the true volatility, and compare the SV model with the GARCH model, another common volatility model. Using the accuracy calculated with the realized volatility, we find that empirically the SV model performs better than the GARCH model.
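Hybrid (Hamiltonian) Monte Carlo combines leapfrog integration of fictitious dynamics with a Metropolis accept step. A minimal one-dimensional sketch follows; the SV model itself targets a high-dimensional latent volatility vector, which is omitted here, and the step size and trajectory length are illustrative choices:

```python
import math
import random

def hmc(logp, grad, x0=0.0, n=5000, eps=0.2, n_leap=10, seed=9):
    """Hamiltonian Monte Carlo for a 1-D target density exp(logp)."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        p = rng.gauss(0.0, 1.0)          # resample momentum
        xn, pn = x, p
        pn += 0.5 * eps * grad(xn)       # leapfrog: half, fulls, half
        for _ in range(n_leap - 1):
            xn += eps * pn
            pn += eps * grad(xn)
        xn += eps * pn
        pn += 0.5 * eps * grad(xn)
        # Metropolis accept on the Hamiltonian H = -logp(x) + p^2/2
        dh = (logp(xn) - 0.5 * pn * pn) - (logp(x) - 0.5 * p * p)
        if dh >= 0 or rng.random() < math.exp(dh):
            x = xn
        out.append(x)
    return out
```

On a standard normal target (logp(x) = -x²/2, grad(x) = -x) the chain reproduces mean 0 and variance 1; the gradient-guided trajectories are what give HMC its edge over random-walk samplers for correlated volatility variables.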

In this paper, we explore the impact of several sources of uncertainty on the assessment of energy and climate policies when one uses, in a harmonized way, stochastic programming in a large-scale bottom-up (BU) model and Monte Carlo simulation in a large-scale top-down (TD) model. The BU model we use is the TIMES Integrated Assessment Model, which is run in

Frédéric Babonneau; Alain Haurie; Richard Loulou; Marc Vielle

Two Monte Carlo algorithms originally proposed by Zimmerman and by Zimmerman and Adams for particle transport through a binary stochastic mixture are numerically compared using a standard set of planar geometry benchmark problems. In addition to previously published comparisons of the ensemble-averaged probabilities of reflection and transmission, we include comparisons of detailed ensemble-averaged total and material scalar flux distributions. Because not all of the benchmark scalar flux distribution data used to produce plots in previous publications remains available, we have independently regenerated the benchmark solutions, including scalar flux distributions. Both Monte Carlo transport algorithms robustly produce physically realistic scalar flux distributions for the transport problems examined. The first algorithm reproduces the standard Levermore-Pomraning model results for the probabilities of reflection and transmission. The second algorithm generally produces significantly more accurate probabilities of reflection and transmission and also significantly more accurate total and material scalar flux distributions.

The evolution of two-dimensional drop distributions is simulated in this study using a Monte Carlo method. The stochastic algorithm of Gillespie (1976) for chemical reactions, in the formulation proposed by Laurenzi et al. (2002), was used to simulate the kinetic behavior of the drop population. Within this framework, species are defined as droplets of specific size and aerosol composition. The performance of the algorithm was checked by comparing the numerical solutions with the analytical solutions found by Lushnikov (1975). Very good agreement was observed between the Monte Carlo simulations and the analytical solution. Simulation results are presented for bivariate constant and hydrodynamic kernels. The algorithm can be easily extended to incorporate various properties of clouds, such as several crystal habits, different types of soluble CCN, particle charging, and drop breakup.

Analytical solutions of nonlinear and higher-dimensional stochastically driven oscillators are rarely possible, and this leaves the direct Monte Carlo simulation of the governing stochastic differential equations (SDEs) as the only tool to obtain the required numerical solution. Engineers, in particular, are mostly interested in weak numerical solutions, which provide a faster and simpler computational framework to obtain the statistical expectations (moments) of the response functions. The numerical integration tools considered in this study are weak versions of the stochastic Euler and stochastic Newmark methods. A well-known limitation of a Monte Carlo approach, however, is the requirement of a large ensemble size in order to arrive at convergent estimates of the statistical quantities of interest. Presently, a simple form of variance reduction strategy is proposed such that the ensemble size may be significantly reduced without affecting the accuracy of the predicted expectations of any function of the response vector process, provided that the function can be adequately represented through a power-series approximation. The basis of the variance reduction strategy is to appropriately augment the governing system equations and then weakly replace the stochastic forcing function (which is typically a filtered white noise process) with a set of variance-reduced functions. In the process, the additional computational cost due to system augmentation is far smaller than the advantages accrued from a drastically reduced ensemble size. Indeed, we show that the proposed method appears to work satisfactorily even in the special case of an ensemble size of just 1. The variance reduction scheme is first illustrated through applications to a nonlinear Duffing equation driven by additive and multiplicative white noise processes, a problem for which exact stationary solutions are known. This is followed up with applications of the strategy to a few higher-dimensional systems, i.e., 2- and 3-dof nonlinear oscillators under additive white noises.
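The paper's augmentation-based scheme is specialized, but the underlying goal, fewer sample paths for the same moment accuracy, can be illustrated with the textbook antithetic-variates trick on a weak Euler-Maruyama simulation. An Ornstein-Uhlenbeck process stands in for the Duffing oscillator here (not the authors' method), because E[X_T + X_T³] is known in closed form:

```python
import math
import random

def euler_path(h, dws, x0=1.0):
    """Weak Euler-Maruyama path for the OU process dX = -X dt + dW."""
    x = x0
    for dw in dws:
        x += -x * h + dw
    return x

def ou_moment(n_pairs=4000, t_end=1.0, n_steps=50, seed=3):
    """Estimate E[X_T + X_T^3]: plain samples vs. antithetic pair averages."""
    rng = random.Random(seed)
    h = t_end / n_steps
    f = lambda x: x + x ** 3
    plain, anti = [], []
    for _ in range(n_pairs):
        dws = [rng.gauss(0.0, math.sqrt(h)) for _ in range(n_steps)]
        y1 = f(euler_path(h, dws))
        y2 = f(euler_path(h, [-dw for dw in dws]))  # mirrored driving noise
        plain.append(y1)
        anti.append(0.5 * (y1 + y2))  # same expectation, reduced variance
    return plain, anti
```

Because the functional is monotone in the driving noise, each mirrored pair is negatively correlated, so the pair-averaged estimator has strictly lower variance at the same expectation, the same accounting that lets the paper shrink its ensemble size.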

Monte Carlo algorithms are developed to calculate the ensemble-average particle leakage through the boundaries of a 2-D binary stochastic material. The mixture is specified within a rectangular area and consists of a fixed number of disks of constant radi...

We developed a 2D stochastic Monte Carlo model for cosmic ray propagation in the heliosphere. The model solves numerically the transport equation of particles in the heliosphere, including the major processes affecting heliospheric particle transport: diffusion, convection, adiabatic energy losses, and drift of particles. We evaluated the modulated flux at several distances from the Sun (i.e., at planetary distances) and

P. Bobik; M. J. Boschini; M. Gervasi; D. Grandi; P. G. Rancoita

Particle transport through binary stochastic mixtures has received considerable research attention in the last two decades. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that should be more accurate as a result of improved local material realization modeling. Zimmerman and Adams numerically confirmed these aspects of the Monte Carlo algorithms by comparing the reflection and transmission values computed using these algorithms to a standard suite of planar geometry binary stochastic mixture benchmark transport solutions. The benchmark transport problems are driven by an isotropic angular flux incident on one boundary of a binary Markovian statistical planar geometry medium. In a recent paper, we extended the benchmark comparisons of these Monte Carlo algorithms to include the scalar flux distributions produced. This comparison is important because, as demonstrated, an approximate model that gives accurate reflection and transmission probabilities can produce unphysical scalar flux distributions. Brantley and Palmer recently investigated the accuracy of the Levermore-Pomraning model using a new interior source binary stochastic medium benchmark problem suite. In this paper, we further investigate the accuracy of the Monte Carlo algorithms proposed by Zimmerman and Adams by comparing to the benchmark results from the interior source binary stochastic medium benchmark suite, including scalar flux distributions. Because the interior source scalar flux distributions are of an inherently different character than the distributions obtained for the incident angular flux benchmark problems, the present benchmark comparison extends the domain of problems for which the accuracy of these Monte Carlo algorithms has been investigated.

The evolution of two-dimensional drop distributions is simulated in this study using a Monte Carlo method. The stochastic algorithm of Gillespie (1976) for chemical reactions, in the formulation proposed by Laurenzi et al. (2002), was used to simulate the kinetic behavior of the drop population. Within this framework, species are defined as droplets of specific size and aerosol composition. The performance of the algorithm was checked by comparison with the analytical solutions found by Lushnikov (1975) and Golovin (1963) and with finite-difference solutions of the two-component kinetic collection equation obtained for the Golovin (sum) and hydrodynamic kernels. Very good agreement was observed between the Monte Carlo simulations and the analytical and numerical solutions. A simulation for realistic initial conditions is presented for the hydrodynamic kernel. As expected, the aerosol mass is shifted from small to large particles due to the collection process. This algorithm could be extended to incorporate various properties of clouds, such as several crystal habits, different types of soluble CCN, particle charging, and drop breakup.
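A direct Gillespie-style stochastic simulation of pairwise coalescence can be sketched with a constant kernel; the naive O(n²) rate enumeration below is an illustrative stand-in for Laurenzi et al.'s efficient bookkeeping, and the kernel and masses are made up:

```python
import random

def gillespie_coalescence(masses, t_end, kernel=lambda x, y: 1.0, seed=0):
    """Direct Gillespie SSA for pairwise drop coalescence: draw an exponential
    waiting time from the total rate, then pick the colliding pair with
    probability proportional to its kernel value."""
    rng = random.Random(seed)
    masses = list(masses)
    t = 0.0
    while len(masses) > 1:
        n = len(masses)
        rates = [(i, j, kernel(masses[i], masses[j]))
                 for i in range(n) for j in range(i + 1, n)]
        total = sum(r for _, _, r in rates)
        t += rng.expovariate(total)
        if t > t_end:
            break
        u = rng.uniform(0.0, total)
        acc = 0.0
        for i, j, r in rates:
            acc += r
            if u <= acc:
                masses[i] += masses[j]   # coalesce: mass is conserved
                del masses[j]
                break
    return masses
```

Each event removes one droplet and conserves total mass exactly, which is the invariant any collection-equation solver must respect.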

Various Monte Carlo programs, developed either by small groups or widely available, have been used to simulate decays of radioactive chains, from the original parent nucleus to the final stable isotopes. These chains include uranium, thorium, radon, and others, and generally have long-lived parent nuclei. Generating decays within these chains requires a certain amount of computing overhead related to simulating unnecessary decays, time-ordering the final results in post-processing, or both. We present a combined analytic/stochastic algorithm for creating a time-ordered set of decays with position and time correlations, starting from an arbitrary source age. The simulation costs are thus greatly reduced, while at the same time chronological post-processing is avoided. We discuss optimization methods within the approach to minimize calculation time, and the extension of the algorithm to include various source types.
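The time-ordering claim holds because each daughter's decay time is its parent's time plus an exponential holding time, so a single atom's event list is monotone by construction. A minimal sketch, with a made-up chain of half-lives:

```python
import math
import random

def decay_chain_times(half_lives, start_time=0.0, rng=random):
    """Sample one atom's successive decay times down a chain: each daughter
    decays an Exp(ln 2 / t_half)-distributed holding time after its parent,
    so the returned list is time-ordered by construction."""
    t, times = start_time, []
    for t_half in half_lives:
        t += rng.expovariate(math.log(2.0) / t_half)
        times.append(t)
    return times
```

Merging such per-atom streams keeps the output sorted without any chronological post-processing, which is the cost saving the abstract describes.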

This paper examines the power of Markov chain Monte Carlo methods to tackle the 'inverse' problem of stochastic population modelling: namely, given a partial series of event-time observations, believed to be governed by a known process, what range of model parameters might plausibly explain it? This problem is first introduced in the simple context of an immigration-death process, in which only deaths are recorded, and is then extended through the introduction of birth, standard and power-law logistic growth, and an 'odd-even effects' quantum optics model. The results show that simple Metropolis-Hastings samplers can be applied to provide useful information on models containing a high degree of complexity. Specific problems highlighted include the potentially poor mixing qualities of simple Metropolis-Hastings samplers, and the fact that heavily non-symmetric full likelihood surfaces may inflict substantial bias on their associated marginal distributions.
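A Metropolis-Hastings sampler of this kind can be sketched for the simplest rate-inference problem: exponential event waiting times with unknown rate θ and an Exp(1) prior, where the Gamma posterior provides an analytic check. This is a stand-in for the paper's immigration-death likelihood, which is more involved; all parameter values are illustrative:

```python
import math
import random

def mh_rate_posterior(data, n_iter=20000, step=0.3, seed=5):
    """Random-walk Metropolis on log(theta) for i.i.d. Exp(theta) data
    with an Exp(1) prior; the posterior is Gamma(n + 1, 1 + sum(data))."""
    rng = random.Random(seed)
    n, s = len(data), sum(data)
    def log_post(th):
        return n * math.log(th) - th * (1.0 + s)   # up to a constant
    th, samples = 1.0, []
    for _ in range(n_iter):
        prop = th * math.exp(rng.gauss(0.0, step))
        # acceptance ratio includes the Jacobian of the log-scale proposal
        lr = log_post(prop) - log_post(th) + math.log(prop / th)
        if lr >= 0 or rng.random() < math.exp(lr):
            th = prop
        samples.append(th)
    return samples
```

Discarding a burn-in and averaging the remaining chain recovers the Gamma posterior mean; the step size `step` is exactly the tuning knob whose poor choice produces the slow mixing the paper warns about.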

The dynamics of the O-H...O bond proton glass of the type M1-x(NW4)xW2AO4 (M = Rb or K, W = H or D, A = P or As) has been simulated using the Monte Carlo stochastic-dynamics method, which allows one to simulate real-time dynamics. The simulation is based both on the microscopic interactions of protons and on the interaction with an external static electric field. The polarization decay and the response to a step field have been compared with the Kohlrausch-Williams-Watts stretched exponential form and with the predictions of a microscopic "bound charge carrier" model. Studying the proton dynamics by a field-cooling simulation has revealed nonergodic behavior at low temperatures.

Reduced order modeling is often employed to decrease the computational cost of numerical solutions of parametric partial differential equations. Reduced basis, balanced truncation, and projection methods are among the most studied techniques to achieve model reduction. We study the applicability of snapshot-based Proper Orthogonal Decomposition (POD) to Monte Carlo (MC) simulations applied to the solution of the stochastic groundwater flow problem. POD model reduction is obtained by projecting the model equations onto a space generated by a small number of basis functions (principal components). These are obtained by exploring the solution (probability) space with snapshots, i.e., system states obtained by solving the original process-based equations. The reduced model is then employed to complete the ensemble by adding multiple realizations. We apply this technique to a two-dimensional simulation of steady-state saturated groundwater flow and explore the sensitivity of the method to the number of snapshots and associated principal components in terms of accuracy and efficiency of the overall MC procedure. In our preliminary results, we distinguish the problem of heterogeneous recharge, in which the stochastic term is confined to the forcing function (additive stochasticity), from the case of heterogeneous hydraulic conductivity, in which the stochastic term is multiplicative. In the first scenario, the linearity of the problem is fully exploited and the POD approach yields accurate and efficient realizations, leading to a substantial speed-up of the MC method. The second scenario poses a significant challenge, as the adoption of a few snapshots based on the full model does not provide enough variability in the reduced-order replicates, thus leading to poor convergence of the MC method. We find that increasing the number of snapshots improves the convergence of MC, but only for large integral scales of the log-conductivity field. The technique is then extended to take full advantage of the solution of moment differential equations of groundwater flow.

We study the applicability of Proper Orthogonal Decomposition (POD) techniques to reduce the computational burden associated with Monte Carlo (MC) iterations, which are typically used in the solution of the stochastic groundwater flow problem. We consider a two-dimensional saturated flow scenario, depicting steady-state groundwater flow around a pumping well within an aquifer characterized by deterministic distributions of transmissivity and boundary conditions and subject to uncertain and spatially distributed areal recharge. The latter is modeled as a stochastic random process which is fully characterized by its mean and variogram. Key moments and the complete probability distribution of state variables, i.e., hydraulic head and flux, are typically computed by means of computationally intensive MC simulations. Our strategy is to substitute a reduced model in place of the full groundwater model in the MC iterations, thus achieving computational savings while keeping the accuracy of the calculated statistics under control. To this aim, a reduced model is developed by employing POD to project the model equations onto the space generated by a small number of (randomly selected) full model MC realizations (snapshots). The reduced model is then used to generate the remaining subset of the ensemble. Our preliminary results show that the eigenvalues generated by principal component analysis of the snapshot set tend to zero as the dimension of the snapshot set increases. This suggests that a sufficiently large number of snapshots can control the accuracy of the reduced model. Our tests show that POD-based MC allows us to obtain accurate estimates of the probability distribution and leading moments of hydraulic heads by means of only a few full model MC runs: in our cases, only 5% of the MC ensemble size (200 snapshots instead of a 5000-realization ensemble) allows reliable characterization of the hydraulic head probability distribution. This results in considerable CPU time savings in the presence of moderate to large spatial variability of groundwater recharge.
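The snapshot-based POD construction can be sketched in its simplest form: build the small snapshot correlation matrix, extract its dominant eigenvector (here by plain power iteration, a single mode only, with no deflation), and lift it back to a spatial basis function. This is a pure-Python illustration, not the groundwater-model implementation:

```python
import math

def pod_basis(snapshots, iters=200):
    """Method of snapshots: dominant POD mode via power iteration on the
    m-by-m snapshot correlation matrix, lifted back to physical space."""
    m, n = len(snapshots), len(snapshots[0])
    C = [[sum(snapshots[i][k] * snapshots[j][k] for k in range(n)) / m
          for j in range(m)] for i in range(m)]
    a = [1.0] * m
    for _ in range(iters):
        a = [sum(C[i][j] * a[j] for j in range(m)) for i in range(m)]
        norm = math.sqrt(sum(x * x for x in a))
        a = [x / norm for x in a]
    phi = [sum(a[i] * snapshots[i][k] for i in range(m)) for k in range(n)]
    norm = math.sqrt(sum(x * x for x in phi))
    return [x / norm for x in phi]
```

Working with the m-by-m correlation matrix instead of the n-by-n spatial covariance is exactly what makes the approach affordable when the snapshot count m is a small fraction of the ensemble size.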

We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we are able to obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as a by-product the distribution function of the regularization parameter. Our algorithm thus avoids the usual ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation. PMID:20866348

This study develops Bayesian methods for estimating the parameters of a stochastic switching regression model. Markov chain Monte Carlo methods, data augmentation, and Gibbs sampling are used to facilitate estimation of the posterior means. The main feature of these methods is that the posterior means are estimated by the ergodic averages of samples drawn from conditional distributions, which are relatively simple in form and more

Higher-level decisions for AiTR (aided target recognition) networks have so far been made in our community in an ad hoc fashion. Higher-level decisions in this context do not involve target recognition performance per se, but other inherent output measures of performance, e.g., expected response time or the long-term electronic memory required to achieve a tolerable level of image losses. Those measures usually require knowledge of the steady-state, stochastic behavior of the entire network, which in practice is mathematically intractable. Decisions requiring those and similar output measures will become very important as AiTR networks are permanently deployed to the field. To address this concern, I propose to model AiTR systems as an open stochastic-process network and to conduct Monte Carlo simulations based on this model to estimate steady-state performance. To illustrate this method, I modeled a familiar operational scenario and an existing baseline AiTR system as proposed. Details of the stochastic model and its corresponding Monte Carlo simulation results are discussed in the paper.

Monte Carlo (MC) simulation of most spatially distributed systems is plagued by several problems, namely, execution of one process at a time, large separation of time scales of various processes, and large length scales. Recently, a coarse-grained Monte Carlo (CGMC) method was introduced that can capture large length scales at reasonable computational times. An inherent assumption in this CGMC method revolves around a mean-field closure invoked in each coarse cell that is inaccurate for short-ranged interactions. Two new approaches are explored to improve upon this closure. The first employs the local quasichemical approximation, which is applicable to first nearest-neighbor interactions. The second, termed the multiscale CGMC method, employs singular perturbation ideas on multiple grids to capture the entire cluster probability distribution function via short microscopic MC simulations on small, fine-grid lattices by taking advantage of the time scale separation of multiple processes. Computational strategies for coupling the fast process at small length scales (fine grid) with the slow processes at large length scales (coarse grid) are discussed. Finally, the binomial τ-leap method is combined with the multiscale CGMC method to execute multiple processes over the entire lattice and provide additional computational acceleration. Numerical simulations demonstrate that in the presence of fast diffusion and slow adsorption and desorption processes the two new approaches provide more accurate solutions in comparison to the previously introduced CGMC method.

This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
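The random-sampling core of such a course is usually introduced with the canonical first transport example: analog Monte Carlo transmission through a purely absorbing slab, where the exact answer exp(-Σt·d) is available for checking. A minimal sketch (not taken from the lecture notes):

```python
import math
import random

def slab_transmission(sigma_t, thickness, n=100000, seed=11):
    """Analog Monte Carlo: a normally incident particle crosses a purely
    absorbing slab iff its sampled free-flight length exceeds the slab
    thickness, so the estimate converges to exp(-sigma_t * thickness)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.expovariate(sigma_t) > thickness)
    return hits / n
```

Sampling the free-flight length from an exponential with parameter Σt is the same inverse-CDF trick that underlies the collision-distance sampling covered in the notes.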

The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the ...

The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

A novel methodology that combines recent advances in computational statistics and reduced-order modeling is presented to explore the application of Bayesian statistical inference to a stochastic inverse problem in radiative heat transfer. The underlying objective of this work is to reveal the potential of using statistical approaches, mainly Bayesian computational statistics and spatial statistics, to solve data-driven stochastic

We investigate the folding dynamics of the plant-seed protein crambin in a liquid environment, which is usually water with a certain viscosity. Taking the viscosity into account necessitates a stochastic approach. This can be summarized by a 2D Langevin equation, even though the simulation is still carried out in 3D. Solution of the Langevin equation is the basic task in order to proceed with a molecular dynamics simulation, which is accompanied by a delicate Monte Carlo technique. The potential wells, used to engineer the energy space assuming the interaction of the monomers constituting the protein chain, are simply modeled by a combination of two parabolas. This combination approximates the real physical interactions, which are given by the well-known Lennard-Jones potential. Contributions to the total potential from torsion, bending, and distance-dependent potentials are good to the fourth nearest neighbor. The final image is in very good geometric agreement with the real shape of the protein chain, which can be obtained from the protein data bank. The quantitative measure of this agreement is the similarity parameter with the native structure, which is found to be 0.91 < 1 for the best sample. The folding time can be determined from the Debye relaxation process. We apply two regimes and calculate the folding time corresponding to the elastic domain mode, which yields 5.2 ps for the same sample.

We propose a modified power method for computing the subdominant eigenvalue λ2 of a matrix or continuous operator. While useful both deterministically and stochastically, we focus on defining simple Monte Carlo methods for its application. The methods presented use random walkers of mixed signs to represent the subdominant eigenfunction. Accordingly, the methods must cancel these signs properly in order to sample this eigenfunction faithfully. We present a simple procedure to solve this sign problem and then test our Monte Carlo methods by computing λ2 of various Markov chain transition matrices. As |λ2| of this matrix controls the rate at which Monte Carlo sampling relaxes to a stationary condition, its computation also enabled us to compare efficiencies of several Monte Carlo algorithms as applied to two quite different types of problems. We first computed λ2 for several one- and two-dimensional Ising models, which have a discrete phase space, and compared the relative efficiencies of the Metropolis and heat-bath algorithms as functions of temperature and applied magnetic field. Next, we computed λ2 for a model of an interacting gas trapped by a harmonic potential, which has a multidimensional continuous phase space, and studied the efficiency of the Metropolis algorithm as a function of temperature and the maximum allowable step size Δ. Based on the λ2 criterion, we found for the Ising models that small lattices appear to give an adequate picture of comparative efficiency and that the heat-bath algorithm is more efficient than the Metropolis algorithm only at low temperatures, where both algorithms are inefficient. For the harmonic trap problem, we found that the traditional rule of thumb of adjusting Δ so that the Metropolis acceptance rate is around 50% is often suboptimal. In general, as a function of temperature or Δ, λ2 for this model displayed trends defining optimal efficiency that the acceptance ratio does not.
The cases studied also suggested that Monte Carlo simulations for a continuum model are likely more efficient than those for a discretized version of the model.
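The quantity at stake can be illustrated with a minimal deterministic sketch, assuming a tiny symmetric transition matrix invented for illustration: power iteration finds the dominant eigenpair, and deflating it exposes λ2. This is not the authors' mixed-sign walker method, which avoids forming and deflating the matrix explicitly.

```python
import math

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, v, steps=200):
    """Return (eigenvalue, eigenvector) estimates for the dominant pair of A."""
    for _ in range(steps):
        w = mat_vec(A, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(x * y for x, y in zip(v, mat_vec(A, v)))  # Rayleigh quotient
    return lam, v

# Symmetric (hence doubly stochastic) transition matrix with eigenvalues 1.0 and 0.8.
A = [[0.9, 0.1],
     [0.1, 0.9]]

lam1, v1 = power_iteration(A, [1.0, 0.0])

# Deflate the dominant pair, then power-iterate again to expose lambda_2.
B = [[A[i][j] - lam1 * v1[i] * v1[j] for j in range(2)] for i in range(2)]
lam2, _ = power_iteration(B, [1.0, 0.0])
print(lam1, abs(lam2))
```

Here |λ2| = 0.8 is exactly the per-step relaxation factor of the chain toward its stationary distribution, which is why λ2 serves as the efficiency criterion in the abstract.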

A stochastic model of the resistive switching mechanism in bipolar metal-oxide-based resistive random access memory (RRAM) is presented. The distribution of electron occupation probabilities obtained is in agreement with previous work. In particular, a low occupation region is formed near the cathode. Our simulations of the temperature dependence of the electron occupation probability near the anode and the cathode

Alexander Makarov; Viktor Sverdlov; Siegfried Selberherr

This paper is intended to be a tutorial on multigrid Monte Carlo techniques, illustrated with two examples. Path-integral quantum Monte Carlo is seen to take only a finite amount of computer time even as the paths are discretized on infinitesimally small scales. A method for eliminating critical slowing down completely, even for models with discrete degrees of freedom (as in Potts models) or discrete excitations (such as isolated vortices in the XY model), is presented. 11 refs., 1 fig.

In the spirit of Gillespie's stochastic approach, we have formulated a theory to explore the advancement of interfacial enzyme kinetics at the single-enzyme level, which is ultimately utilized to obtain the ensemble-averaged macroscopic feature, lag-burst kinetics. We provide a theory of the transition from lag-phase to burst-phase kinetics by considering the gradual development of electrostatic interaction between the positively charged enzyme and the negatively charged product molecules deposited on the phospholipid surface. It is shown that the different diffusion time scales of the enzyme over the fluid and product regions are responsible for the memory effect in the correlation of successive turnover events of the hopping mode in the single-trajectory analysis, which in turn is reflected in the non-Gaussian distribution of turnover times in the macroscopic lag-phase kinetics, unlike the burst phase.

Background: Although many infections that are transmissible from person to person are acquired through direct contact between individuals, a minority, notably pulmonary tuberculosis (TB), measles and influenza, are known to be spread by the airborne route. Airborne infections pose a particular threat to susceptible individuals whenever they are placed together with the index case in confined spaces. With this in mind, waiting areas of healthcare facilities present a particular challenge, since large numbers of people, some of whom may have underlying conditions which predispose them to infection, congregate in such spaces and can be exposed to an individual who may be shedding potentially pathogenic microorganisms. It is therefore important to understand the risks posed by infectious individuals in waiting areas, so that interventions can be developed to minimise the spread of airborne infections. Method: A stochastic Monte Carlo model was constructed to analyse the transmission of airborne infection in a hypothetical 132 m³ hospital waiting area in which occupancy levels, waiting times and ventilation rate can all be varied. In the model, the Gammaitoni-Nucci equation was utilized to predict the probability of susceptible individuals becoming infected. The model was used to assess the risk of transmission of three infectious diseases: TB, influenza and measles. In order to allow for stochasticity, a random number generator was applied to the variables in the model, and a total of 10,000 individual simulations were undertaken. The mean quanta production rates used in the study were 12.7, 100 and 570 per hour for TB, influenza and measles, respectively. Results: The results of the study revealed the mean probability of acquiring a TB infection during a 30-minute stay in the waiting area to be negligible (i.e. 0.0034), while that for influenza was an order of magnitude higher at 0.0262. By comparison, the mean probability of acquiring a measles infection during the same period was 0.1349. If the duration of the stay was increased to 60 minutes, these values increased to 0.0087, 0.0662 and 0.3094, respectively. Conclusion: Under normal circumstances the risk of acquiring a TB infection during a visit to a hospital waiting area is minimal. Likewise the risks associated with the transmission of influenza, although an order of magnitude greater than those for TB, are relatively small. By comparison, the risks associated with measles are high. While the installation of air disinfection may be beneficial, when seeking to prevent the transmission of airborne viral infection it is important to first minimize waiting times and the number of susceptible individuals present before turning to expensive technological solutions.
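The Gammaitoni-Nucci calculation at the heart of such a model can be sketched as follows. The room volume and TB quanta rate are taken from the abstract; the breathing rate, ventilation rate, and waiting-time distribution are invented assumptions for illustration, not the study's actual inputs.

```python
import math, random

def gammaitoni_nucci(I, q, p, Q, V, t):
    """Probability that a susceptible exposed for t hours becomes infected,
    with I infectors emitting q quanta/h, breathing rate p m^3/h, ventilation
    Q m^3/h, room volume V m^3, and zero airborne quanta at t = 0."""
    N = Q / V                                   # air changes per hour
    inhaled = (I * q * p / Q) * (t + (math.exp(-N * t) - 1.0) / N)
    return 1.0 - math.exp(-inhaled)

rng = random.Random(1)
V = 132.0        # waiting-area volume from the abstract (m^3)
q_tb = 12.7      # mean TB quanta production rate from the abstract (quanta/h)
p = 0.48         # assumed breathing rate (m^3/h)
Q = 6.0 * V      # assumed ventilation: 6 air changes per hour

# Monte Carlo over stochastic waiting times, assumed uniform on 15-45 minutes.
risks = [gammaitoni_nucci(1, q_tb, p, Q, V, rng.uniform(0.25, 0.75))
         for _ in range(10000)]
mean_risk = sum(risks) / len(risks)
print(f"mean TB risk per visit: {mean_risk:.4f}")
```

With these assumed inputs the mean risk comes out at the same order of magnitude as the abstract's TB figure; the study itself additionally randomizes occupancy and ventilation.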

A recently developed algorithm for simulating statistical systems is discussed. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high speed simulation of non-equilibrium phenomena. 8 refs., 2 figs.
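An algorithm interpolating between deterministic dynamics and canonical Monte Carlo can be illustrated with a microcanonical "demon" sketch for a 1D Ising chain; this is an assumed minimal model for illustration, not necessarily the abstract's algorithm. A demon with a nonnegative energy reservoir mediates spin flips, so system-plus-demon energy is exactly conserved and no random-number-driven accept/reject against a Boltzmann factor is needed.

```python
import random

def demon_ising_1d(n=100, demon_energy=10, sweeps=200, seed=0):
    """Microcanonical 'demon' dynamics for a 1D Ising chain (J = 1, periodic
    boundaries): a flip is performed whenever the demon's nonnegative energy
    reservoir can absorb or supply the energy change, so the total
    system + demon energy is exactly conserved."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]

    def energy():
        return -sum(spins[i] * spins[(i + 1) % n] for i in range(n))

    total0 = energy() + demon_energy
    for _ in range(sweeps * n):
        i = rng.randrange(n)
        # Energy change of flipping spin i against its two neighbours.
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
        if demon_energy - dE >= 0:      # demon pays for (or pockets) dE
            spins[i] = -spins[i]
            demon_energy -= dE
    return energy() + demon_energy, total0

final_total, initial_total = demon_ising_1d()
print(final_total == initial_total)
```

Randomness here only selects which spin to try, which is one way such schemes become insensitive to random number quality; the demon's energy histogram plays the role of a thermometer.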

Every neutrino experiment requires a Monte Carlo event generator for various purposes. Historically, each series of experiments developed their own code, tuned to their needs. Modern experiments would benefit from a universal code (e.g. PYTHIA) which would allow more direct comparison between experiments. GENIE attempts to be that code. This paper compares the most commonly used codes and provides some details of GENIE.

Dytman, Steven [Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260 (United States)

Background: The eikonal approximation is commonly used to calculate heavy-ion elastic scattering. However, the full evaluation has only been done (without the use of Monte Carlo techniques or additional approximations) for α-α scattering. Purpose: Develop, improve, and test the Monte Carlo eikonal method for elastic scattering over a wide range of nuclei, energies, and angles. Method: Monte Carlo evaluation is used to calculate heavy-ion elastic scattering for heavy nuclei, including the center-of-mass correction introduced in this paper and the Coulomb interaction in terms of a partial-wave expansion. A technique for the efficient expansion of the Glauber amplitude in partial waves is developed. Results: Angular distributions are presented for a number of nuclear pairs over a wide energy range using nucleon-nucleon scattering parameters taken from phase-shift analyses and densities from independent sources. We present the first calculations of the Glauber amplitude, without further approximation and with realistic densities, for nuclei heavier than helium. These densities respect the center-of-mass constraints. The Coulomb interaction is included in these calculations. Conclusion: The center-of-mass and Coulomb corrections are essential. Angular distributions can be predicted only up to certain critical angles, which vary with the nuclear pairs and the energy, but we point out that all critical angles correspond to a momentum transfer near 1 fm^-1.

A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n·e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static α is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.

Prior work demonstrated the importance of nuclear scattering to fusion-product energy deposition in hot plasmas. This suggests careful examination of nuclear physics details in burning plasma simulations. An existing Monte Carlo fast-ion transport code is being expanded to be a test bed for this examination. An initial extension, the energy deposition of fast alpha particles in a hot deuterium plasma, is reported. The deposition times and deposition ranges are modified by allowing nuclear scattering. Up to 10% of the initial alpha particle energy is carried to greater ranges and times by the more mobile recoil deuterons. 4 refs., 5 figs., 2 tabs.

Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light significantly degrades the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probes accurately and reliably.

Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge

A Monte Carlo code is developed to study the action of particles in Boron Neutron Capture Therapy (BNCT). Our aim is to calculate the probability of dissipating a lethal dose in cell nuclei. Cytoplasmic and nuclear membranes are modeled as non-concentric ellipsoids, and all geometrical parameters may be adjusted to fit actual configurations. The reactions 10B(n,α)7Li and 14N(n,p)14C create heavy ions which slow down, losing their energy. Their trajectories can be simulated taking into account path-length straggling. The contribution of each reaction to the deposited dose in different cellular compartments can be studied and analysed for any distribution of 10B.

MCMini is a proof of concept that demonstrates the possibility of Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

The stochastic-gauge representation is a method of mapping the equation of motion for the quantum-mechanical density operator onto a set of equivalent stochastic differential equations. One of the stochastic variables is termed the 'weight', and its magnitude is related to the importance of the stochastic trajectory. We investigate the use of Monte Carlo algorithms to improve the sampling of the weighted trajectories and thus reduce sampling error in a simulation of quantum dynamics. The method can be applied to calculations in real time, as well as in imaginary time, for which Monte Carlo algorithms are more commonly used. The Monte Carlo algorithms are applicable when the weight is guaranteed to be real, and we demonstrate how to ensure this is the case. Examples are given for the anharmonic oscillator, where large improvements over stochastic sampling are observed.

Dowling, Mark R. [ARC Centre of Excellence for Quantum-Atom Optics, Department of Physics, School of Physical Sciences, University of Queensland, Brisbane, Qld 4072 (Australia)]. E-mail: dowling@physics.uq.edu.au; Davis, Matthew J. [ARC Centre of Excellence for Quantum-Atom Optics, Department of Physics, School of Physical Sciences, University of Queensland, Brisbane, Qld 4072 (Australia); Drummond, Peter D. [ARC Centre of Excellence for Quantum-Atom Optics, Department of Physics, School of Physical Sciences, University of Queensland, Brisbane, Qld 4072 (Australia); Corney, Joel F. [ARC Centre of Excellence for Quantum-Atom Optics, Department of Physics, School of Physical Sciences, University of Queensland, Brisbane, Qld 4072 (Australia)

We present a learning algorithm for hidden Markov models (HMM) with continuous state and observation spaces. All necessary probability density functions are approximated using samples, along with density trees generated from such samples. A Monte Carlo ve...

Markov chain Monte Carlo sampling methods often suffer from long correlation times. Consequently, these methods must be run for many steps to generate an independent sample. In this paper, a method is proposed to overcome this difficulty. The method utilizes information from rapidly equilibrating coarse Markov chains that sample marginal distributions of the full system. This is accomplished through exchanges between the full chain and the auxiliary coarse chains. Results of numerical tests on the bridge sampling and filtering/smoothing problems for a stochastic differential equation are presented. PMID:17640896
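The exchange idea can be illustrated, in the spirit of the abstract, with a minimal two-chain sketch: a fast auxiliary chain sampling a flattened (high-temperature) version of a bimodal target periodically exchanges states with the full chain, cutting the full chain's correlation time across the barrier. The double-well target and all parameters are invented for illustration; the paper's coarse chains sample marginal distributions rather than tempered ones.

```python
import math, random

def log_target(x, temp):
    # Assumed bimodal double-well density, hard to traverse at low temperature.
    return -((x * x - 1.0) ** 2) / temp

def metropolis_step(x, temp, step, rng):
    y = x + rng.uniform(-step, step)
    accept = math.exp(min(0.0, log_target(y, temp) - log_target(x, temp)))
    return y if rng.random() < accept else x

rng = random.Random(42)
x_full, x_coarse = -1.0, -1.0   # full chain (T = 0.05) and auxiliary chain (T = 1.0)
samples = []
for it in range(20000):
    x_full = metropolis_step(x_full, 0.05, 0.3, rng)
    x_coarse = metropolis_step(x_coarse, 1.0, 1.0, rng)
    if it % 10 == 0:            # propose a state exchange between the chains
        log_ratio = (log_target(x_coarse, 0.05) + log_target(x_full, 1.0)
                     - log_target(x_full, 0.05) - log_target(x_coarse, 1.0))
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x_full, x_coarse = x_coarse, x_full
    samples.append(x_full)

# With exchanges, the cold chain visits both wells near x = -1 and x = +1.
visited_both = min(samples) < -0.5 < 0.5 < max(samples)
print(visited_both)
```

The swap acceptance ratio keeps both chains exactly invariant under their own distributions; without the exchanges, the cold chain would remain trapped in one well for the entire run.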

Two different Monte Carlo methods have been developed for benchmark computations of small-sample-worths in simplified geometries. The first is basically a standard Monte Carlo perturbation method in which neutrons are steered towards the sample by roulett...

A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.

A Monte Carlo algorithm is presented which allows the efficient simulation of processes in which many particles are produced, whose total mass may be comparable to the total energy. The importance of such algorithms for phenomenological studies of the conjectured (B + L)-violating processes at the LHC or SSC is argued.

We present a Monte Carlo method for the direct evaluation of the difference between the free energies of two crystal structures. The method is built on a lattice-switch transformation that maps a configuration of one structure onto a candidate configuration of the other by 'switching' one set of lattice vectors for the other, while keeping the displacements with respect

A. D. Bruce; A. N. Jackson; G. J. Ackland; N. B. Wilding

The results of current research in the development of a CRAY algorithm for time-dependent Monte Carlo photon radiation transport are presented. The method that has been developed is a fully vectorized particle-vector scheme. This technique tracks groups of...

F. W. Bobrowicz; J. E. Lynch; K. J. Fisher; J. E. Tabor

This is the description and instructions for the Monte Carlo Estimation of Pi applet. It is a simulation of throwing darts at a figure of a circle inscribed in a square. It shows the relationship between the geometry of the figure and the statistical outcome of throwing the darts.
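The simulation the applet performs can be sketched in a few lines: darts land uniformly in the unit square, and the hit fraction inside the inscribed (quarter-)circle estimates π/4.

```python
import math, random

def estimate_pi(n_darts, seed=0):
    """Throw darts uniformly at the unit square; the fraction landing inside
    the inscribed quarter circle of radius 1 approximates pi/4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_darts)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_darts

print(estimate_pi(100000))
```

The statistical error shrinks only as 1/sqrt(n_darts), which is exactly the geometry-versus-statistics relationship the applet is designed to show.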

The meaningful investigation of many problems in statistics can be carried out through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic…

Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

Accurate prediction of complex phenomena can be greatly enhanced through the use of data and observations to update simulations. The ability to create these data-driven simulations is limited by error and uncertainty in both the data and the simulation. The stochastic engine project addressed this problem through the development and application of a family of Markov chain Monte Carlo methods utilizing importance sampling driven by forward simulators to minimize time spent searching very large state spaces. The stochastic engine rapidly chooses among a very large number of hypothesized states and selects those that are consistent (within error) with all the information at hand. Predicted measurements from the simulator are used to estimate the likelihood of actual measurements, which in turn reduces the uncertainty in the original sample space via a conditional probability method called Bayesian inferencing. This highly efficient, staged Metropolis-type search algorithm allows us to address extremely complex problems and opens the door to solving many data-driven, nonlinear, multidimensional problems. A key challenge has been developing representation methods that integrate the local details of real data with the global physics of the simulations, enabling supercomputers to efficiently solve the problem. Development focused on large-scale problems, and on examining the mathematical robustness of the approach in diverse applications. Multiple data types were combined with large-scale simulations to evaluate systems with approximately 10^20,000 possible states (detecting underground leaks at the Hanford waste tanks). The probable uses of chemical process facilities were assessed using an evidence-tree representation and in-process updating.
Other applications included contaminant flow paths at the Savannah River Site, locating structural flaws in buildings, improving models for seismic travel times in systems used to monitor nuclear proliferation, characterizing the source of indistinct atmospheric plumes, and improving flash radiography. In the course of developing these applications, we also developed new methods to cluster and analyze the results of the state-space searches, as well as a number of algorithms to improve the search speed and efficiency. Our generalized solution contributes both a means to make more informed predictions of the behavior of very complex systems, and to improve those predictions as events unfold, using new data in real time.

Glaser, R E; Johannesson, G; Sengupta, S; Kosovic, B; Carle, S; Franz, G A; Aines, R D; Nitao, J J; Hanley, W G; Ramirez, A L; Newmark, R L; Johnson, V M; Dyer, K M; Henderson, K A; Sugiyama, G A; Hickling, T L; Pasyanos, M E; Jones, D A; Grimm, R J; Levine, R A

This paper concerns kinetic Monte Carlo (KMC) algorithms that have a single-event execution time independent of the system size. Two methods are presented: one that combines the use of inverted-list data structures with rejection Monte Carlo, and a second that combines inverted lists with the Marsaglia-Norman-Cannon algorithm. The resulting algorithms apply to models with rates that are determined by the local environment but are otherwise arbitrary, time-dependent and spatially heterogeneous. While especially useful for crystal growth simulation, the algorithms are presented from the point of view that KMC is the numerical task of simulating a single realization of a Markov process, allowing application to a broad range of areas where heterogeneous random walks are the dominant simulation cost.
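The rejection-based route to constant cost per event can be sketched as follows, for an assumed toy model of sites that each fire once at a site-dependent rate (e.g. desorption): proposals are drawn uniformly and thinned against a global rate bound, so no per-event search over the rate table is needed. The inverted-list bookkeeping of the paper is omitted.

```python
import math, random

def rejection_kmc(rates, t_end, rng):
    """Rejection-based KMC with O(1) cost per event: draw a site uniformly and
    thin against a global rate bound r_max, so no search over the rate table
    is needed.  Each site fires once and is then removed."""
    sites = list(enumerate(rates))        # (site id, rate); order irrelevant
    r_max = max(r for _, r in sites)
    t, fired = 0.0, []
    while sites and t < t_end:
        # Proposals form a Poisson process of rate len(sites) * r_max.
        t += -math.log(1.0 - rng.random()) / (len(sites) * r_max)
        k = rng.randrange(len(sites))
        i, r = sites[k]
        if rng.random() < r / r_max:      # accept with probability r / r_max
            fired.append((t, i))
            sites[k] = sites[-1]          # O(1) swap-and-pop removal
            sites.pop()
    return fired

events = rejection_kmc([0.5, 1.0, 2.0, 4.0], t_end=100.0, rng=random.Random(7))
print(len(events))
```

The thinning construction reproduces the exact event statistics of a direct-method KMC; the price is wasted proposals when the rates are very heterogeneous, which is what the paper's second, Marsaglia-Norman-Cannon-based method addresses.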

Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

Zimmerman, George B. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD distributed memory, and a workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved.

In this paper we consider mixed (fast stochastic approximation and deterministic refinement) algorithms for Matrix Inversion (MI) and Solving Systems of Linear Equations (SLAE). Monte Carlo methods are used for the stochastic approximation, since it is known that they are very efficient in finding a quick rough approximation of the element or a row of the inverse matrix or finding

Monte Carlo Simulation (MCS) offers a powerful means for modeling the stochastic failure behaviour of engineered structures, systems and components (SSC). This paper summarises current work on advanced MCS methods for reliability estimation and failure prognostics.

Monte Python is a parameter inference code which combines the flexibility of the Python language and the robustness of the cosmological code CLASS into a simple and easy-to-manipulate Monte Carlo Markov Chain code.

Audren, Benjamin; Lesgourgues, Julien; Benabed, Karim; Prunet, Simon

Monte Carlo applications have traditionally been limited by the large amounts of computer time required to produce acceptably small statistical uncertainties, so the immediate benefit of vectorization is an increase in either the number of jobs completed or the number of particles processed per job, typically by one order of magnitude or more. This results directly in improved engineering design analyses, since Monte Carlo methods are used as standards for correcting more approximate methods. The relatively small number of vectorized programs is a consequence of the newness of vectorized Monte Carlo, the difficulties of nonportability, and the very large development effort required to rewrite or restructure Monte Carlo codes for vectorization. Based on the successful efforts to date, it may be concluded that Monte Carlo vectorization will spread to increasing numbers of codes and applications. The possibility of multitasking provides even further motivation for vectorizing Monte Carlo, since the step from vector to multitasked vector is relatively straightforward.

This is a summary of the 'path forward' discussion session of the NuInt09 workshop, which focused on Monte Carlo event generators. The main questions raised as part of this discussion are: how to make Monte Carlo generators more reliable, and how important is it to work on a universal Monte Carlo event generator? In this contribution, several experts in the field summarize their views, as presented at the workshop.

Andreopoulos, Costas [Rutherford Appleton Laboratory, STFC Oxfordshire OX11 0QX (United Kingdom); Gallagher, Hugh [Tufts University, Medford, Massachusetts (United States); Hayato, Yoshinari [Kamioka Observatory, ICRR, University of Tokyo Higashi-Mozumi 456, Kamioka-cho, Hida-city Gifu 506-1205 (Japan); Sobczyk, Jan T. [Institute of Theoretical Physics, Wroclaw, University Poland (Poland); Walter, Chris [Department of Physics, Duke University, Durham, NC 27708 (United States); Zeller, Sam [Los Alamos National Laboratory, Los Alamos, NM (United States)

Monte Carlo applications are widely perceived as computationally intensive but naturally parallel. Therefore, they can be effectively executed on the grid using the dynamic bag-of-work model. We improve the efficiency of the subtask-scheduling scheme by using an N-out-of-M strategy, and develop a Monte Carlo-specific lightweight checkpoint technique, which leads to a performance improvement for Monte Carlo grid computing. Also, we

An acceleration algorithm to address the problem of multiple time scales in variational Monte Carlo simulations is presented. After a first attempted move has been rejected, the delayed rejection algorithm attempts a second move with a smaller time step, so that even moves of the core electrons can be accepted. Results on Be and Ne atoms as test cases are presented. The correlation time, together with the average accepted displacement and the acceptance ratio as functions of the distance from the nucleus, evidences the efficiency of the proposed algorithm in dealing with the multiple-time-scales problem.
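The delayed-rejection idea can be sketched for a generic one-dimensional target (a standard normal here, with Gaussian proposals; the step sizes are illustrative assumptions, and core/valence electron moves are not modeled): a rejected bold move triggers a timid retry whose acceptance probability is corrected, in the Tierney-Mira form, so that detailed balance is preserved.

```python
import math, random

def norm_logpdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

def dr_metropolis(log_pi, x0, sig_big=5.0, sig_small=0.5, n=50000, seed=3):
    """Two-stage delayed-rejection Metropolis: after a rejected bold Gaussian
    move, a timid retry is made; its acceptance ratio includes the probability
    of rejecting the reverse bold move, preserving detailed balance."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        y1 = x + rng.gauss(0.0, sig_big)
        a1 = min(1.0, math.exp(min(0.0, log_pi(y1) - log_pi(x))))
        if rng.random() < a1:
            x = y1
        else:                                  # first stage rejected: retry
            y2 = x + rng.gauss(0.0, sig_small)
            a1_rev = min(1.0, math.exp(min(0.0, log_pi(y1) - log_pi(y2))))
            if a1_rev < 1.0:                   # otherwise the ratio is zero
                log_a2 = (log_pi(y2) + norm_logpdf(y1, y2, sig_big)
                          + math.log(1.0 - a1_rev)
                          - log_pi(x) - norm_logpdf(y1, x, sig_big)
                          - math.log(1.0 - a1))
                if rng.random() < math.exp(min(0.0, log_a2)):
                    x = y2
        out.append(x)
    return out

samples = dr_metropolis(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 3), round(var, 3))
```

The analogy to the abstract is that the bold step plays the role of a valence-electron-scale move and the timid retry that of a core-electron-scale move, so a single chain serves both time scales.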

Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
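The half-cutoff rule can be checked in a few lines for the symmetric, isotropic case, where it is exact: for crossing cosines distributed with density 2μ, the expected score equals the exact flux-to-current ratio of 2. The cutoff value is an illustrative assumption.

```python
import random

def surface_flux_score(mu, cutoff=0.1):
    """Standard surface-flux score: 1/|mu| above the cutoff, and division by
    half the cutoff (i.e. a score of 2/cutoff) inside the grazing band."""
    mu = abs(mu)
    return 1.0 / mu if mu >= cutoff else 2.0 / cutoff

rng = random.Random(11)
# For an isotropic angular flux, surface-crossing cosines have density 2*mu
# on (0, 1), so mu = sqrt(U); the exact expected score is E[1/mu] = 2.
n = 200000
estimate = sum(surface_flux_score(rng.random() ** 0.5) for _ in range(n)) / n
print(round(estimate, 3))
```

The cutoff also keeps the variance finite (the raw 1/μ score has a divergent second moment as μ → 0); the abstract's caveats concern exactly the situations, such as one-sided tallies, where this symmetric-band picture breaks down and a different substitute value is needed.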

Favorite, Jeffrey A [Los Alamos National Laboratory

A single-scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling of real physical processes. CREEP simulates ionization, elastic, and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged-particle Monte Carlo methods such as the commonly used condensed-history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL), which has data for all elements with atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.

Svatos, M.M. [Lawrence Livermore National Lab., CA (United States)|Wisconsin Univ., Madison, WI (United States)

The reptation quantum Monte Carlo (RQMC) algorithm of Baroni and Moroni (Phys Rev Lett, 1999, 82, 4745) is a recent and promising development. In this approach, one generates a large number of reptiles: sets of electron configurations arising from consecutive drift-diffusion moves. Within the fixed-node approximation, one extracts estimates of the exact energy from reptiles' heads and tails (their first and last configurations of electrons, respectively), and estimates expectation values for operators that do not commute with the Hamiltonian, from their middle configurations. An advantage over conceptually equivalent algorithms is that each estimate is free from population control bias. The time-step bias, however, may accumulate, adversely affecting one's ability to accurately estimate physical properties of atoms and molecules. For this purpose we propose an alternative algorithm, "head-tail adjusted" reptation quantum Monte Carlo, engineered to remedy this deficiency, while still simulating the target distribution for RQMC. The effectiveness of our approach is demonstrated by an application to ground-state LiH. We estimate the fixed-node energy with improved reliability, without adversely affecting the quality of other, nonenergy-related properties' estimates.

Yuen, Wai Kong; Oblinsky, Daniel G.; Giacometti, Robert D.; Rothstein, Stuart M.

The paper reports Monte Carlo modeling of backscatter returns of a laser beam from ocean water. The Monte Carlo code used for simulations employs the Stokes vector formalism to account for polarization effects at each scattering event, from either water or suspended particles, and at reflection from or transmission through a stochastic sea surface. Scattering from water and suspended matter

Alexei Kouzoubov; Michael J. Brennan; John C. Thomas; Ralph H. Abbot

This paper will discuss recent improvements made to the Monte Carlo Scene (MCScene) code, a high fidelity model for full optical spectrum (UV through LWIR) hyperspectral image (HSI) simulation. MCScene provides an accurate, robust, and efficient means to generate HSI scenes for algorithm validation. MCScene utilizes a Direct Simulation Monte Carlo (DSMC) approach for modeling 3D atmospheric radiative transfer (RT)

Robert Sundberg; Steven Richtsmeier; Raymond Haren

The author's main purpose is to review the techniques and applications of the Monte Carlo method in medical radiation physics since Raeside's review article in 1976. Emphasis is given to applications where proton and/or electron transport in matter is simulated. Some practical aspects of Monte Carlo practice, mainly related to random numbers and other computational details, are discussed in connection

The Monte Carlo method describes a very broad area of science, in which many processes, physical systems and phenomena are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model, which is as similar as possible to the real physical system of interest, and to create interactions within that system based

Good wave functions play an important role in Fixed-Node Quantum Monte Carlo simulations. Typical wave function optimization methods minimize the energy or variance within Variational Monte Carlo. We present a method to minimize the fixed-node energy directly in Diffusion Monte Carlo (DMC). The fixed-node energy, together with its derivatives with respect to the variational parameters in the wave function, is calculated. The derivative information is used to dynamically optimize variational parameters during a single DMC run using the Stochastic Gradient Approximation (SGA) method. We give results for the Be atom, with a single variational parameter, and the Li2 molecule, with multiple parameters. (One of the authors, C.L., would like to thank Claudia Filippi for providing a good Li2 wave function and many valuable discussions.)
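The stochastic-gradient update described above can be sketched in a few lines. This is a minimal Robbins-Monro-style sketch, assuming a generic noisy derivative in place of the DMC energy-derivative estimator; the function names and step schedule are illustrative:

```python
import random

def sga_minimize(noisy_grad, p0, steps=2000, a=0.5, seed=1):
    """Stochastic Gradient Approximation: follow noisy derivative
    estimates with a decaying step size a/k (Robbins-Monro schedule),
    so the noise averages out as the run proceeds."""
    rng = random.Random(seed)
    p = p0
    for k in range(1, steps + 1):
        p -= (a / k) * noisy_grad(p, rng)
    return p

# Noisy derivative of E(p) = (p - 2)^2, standing in for a statistical
# estimate of dE/dp obtained from a short DMC block:
noisy_grad = lambda p, rng: 2.0 * (p - 2.0) + rng.gauss(0.0, 0.5)
p_opt = sga_minimize(noisy_grad, p0=0.0)   # converges near the minimum p = 2
```

The appeal of SGA in this setting is that each gradient estimate may be very noisy, yet the decaying-step average still converges, which matches the statistics of a running DMC calculation.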

MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.

Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H

The relationship between tumor response and radiation is currently modeled as dose, quantified on the mm or cm scale through measurement or simulation. This does not take into account modern knowledge of cancer, including tissue heterogeneities and repair mechanisms. We perform Monte Carlo simulations utilizing Geant4 to model radiation treatment on a cellular scale. Biological measurements are correlated to simulated results, primarily the energy deposit in nuclear volumes. One application is modeling dose enhancement through the use of high-Z materials, such as gold nanoparticles. The model matches in vitro data and predicts dose enhancement ratios for a variety of in vivo scenarios. This model shows promise for both treatment design and furthering our understanding of radiobiology.

Ackerman, Nicole; Bazalova, Magdalena; Chang, Kevin; Graves, Edward

Thermodynamic fluctuations are significant at microscopic scales even when hydrodynamic transport models (i.e., Navier-Stokes equations) are still accurate; a well-known example is Rayleigh scattering, which makes the sky blue. Interesting phenomena also appear in non-equilibrium systems, such as the enhancement of diffusion during mixing due to the correlation of velocity and concentration fluctuations. Direct Simulation Monte Carlo (DSMC) simulations are useful in the study of hydrodynamic fluctuations due to their computational efficiency and ability to model molecular detail, such as internal energy and chemical reactions. More recently, finite volume schemes based on the fluctuating hydrodynamic equations of Landau and Lifshitz have been formulated and validated by comparisons with DSMC simulations. This paper discusses some of the relevant numerical issues and physical effects investigated using DSMC and stochastic Navier-Stokes simulations. This paper also presents the multi-component fluctuating hydrodynamic equations, including chemical reactions, and illustrates their numerical solutions in the study of Turing patterns. We find that behind a propagating reaction front, labyrinth patterns are produced due to the coupling of reactions and species diffusion. In general, fluctuations accelerate the propagation speed of the leading front but differences are observed in the Turing patterns depending on the origin of the fluctuations (stochastic hydrodynamic fluxes versus Langevin chemistry).

Balakrishnan, Kaushik; Bell, John B.; Donev, Aleksandar; Garcia, Alejandro L.

This report gives an introduction to a Bayesian probabilistic approach to modeling a dynamic system, with emphasis on stochastic methods for posterior inference. The Bayesian paradigm is a powerful tool to combine observed data along with prior knowledge to gain a current (probabilistic) understanding of unknown model parameters. In particular, it provides a very natural framework for updating the state of knowledge in a dynamic system. For complex systems, such updating needs to be carried out via stochastic sampling of unknown model parameters. An overview is given of the well established Markov chain Monte Carlo (MCMC) approach to achieve this and of the more recent sequential Monte Carlo (SMC) approach, which is better suited for dynamic systems. Examples are provided, including an application to event reconstruction for an atmospheric release.
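The MCMC approach mentioned above can be sketched with a random-walk Metropolis sampler. This is a toy one-dimensional conjugate-normal problem chosen so the answer is known in closed form, not the event-reconstruction application itself:

```python
import math, random

def metropolis(log_post, x0, steps=20000, scale=0.5, seed=0):
    """Random-walk Metropolis: propose a Gaussian step and accept it
    with probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(steps):
        y = x + rng.gauss(0.0, scale)
        lpy = log_post(y)
        if math.log(rng.random()) < lpy - lp:
            x, lp = y, lpy
        chain.append(x)
    return chain

# Prior N(0, 1) on the mean m; n = 4 unit-noise observations with sample
# mean 1.5. The exact posterior mean is n * ybar / (n + 1) = 1.2.
n, ybar = 4, 1.5
chain = metropolis(lambda m: -0.5 * m**2 - 0.5 * n * (ybar - m)**2, 0.0)
post_mean = sum(chain[5000:]) / len(chain[5000:])
```

Discarding the first quarter of the chain as burn-in, the sample mean of the remainder approximates the posterior mean; a sequential Monte Carlo sampler would instead propagate a weighted particle population as new data arrive.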

A Markov Chain Monte Carlo Analysis of Credit Spread Models: This paper examines the daily time-series properties of aggregate corporate bond credit spreads, using three Merrill Lynch option-adjusted spread indices. We analyze various specifications of stochastic volatility models for the daily changes in the logarithm of credit spreads. Specifically, we consider specifications that incorporate the following four features: correlations between the

Based on the fixed-step random walk procedure a Monte Carlo algorithm for the solution of anisotropic heat conduction is presented. It is shown that the Monte Carlo solution is attainable only for a specified range of solid thermal conductivities. It is also illustrated that by following two simple clues considerable reduction in computation time may be achieved. Finally, steady-state temperature distribution, obtained by the Monte Carlo calculations, is presented for a two-dimensional anisotropic solid having simple geometry and boundary conditions.
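The fixed-step random-walk idea can be sketched for the isotropic case; anisotropy would enter through unequal step probabilities in the two directions. This toy sketch assumes the unit square with T = 1 on the top edge and T = 0 on the other three edges:

```python
import random

def walk_temperature(x, y, h=0.05, walks=4000, seed=42):
    """Estimate the steady-state temperature at (x, y) for Laplace's
    equation on the unit square: each fixed-step random walk scores the
    boundary temperature at the edge where it first exits."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        px, py = x, y
        while 0.0 < px < 1.0 and 0.0 < py < 1.0:
            dx, dy = rng.choice(((h, 0.0), (-h, 0.0), (0.0, h), (0.0, -h)))
            px, py = px + dx, py + dy
        total += 1.0 if py >= 1.0 else 0.0   # T = 1 only on the top edge
    return total / walks

t_center = walk_temperature(0.5, 0.5)   # exact value is 0.25 by symmetry
```

No mesh solve is needed: the temperature at one point is simply the exit-edge average over walkers, which is why the method's cost grows with the number of query points rather than the grid size.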

An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation.

The introductory remarks, table of contents, and list of attendees are presented from the proceedings of the conference, Frontiers of Quantum Monte Carlo, which appeared in the Journal of Statistical Physics. (GHT)

Object-Oriented Programming techniques are explored with an eye toward applications in High Energy Physics codes. Two prototype examples are given: McOOP (a particle Monte Carlo generator) and GISMO (a detector simulation/analysis package). (ERA citation ...

W. B. Atwood R. Blankenbecler P. Kunz T. Burnett K. M. Storr

A numerical procedure for solving deconvolution problems is presented. The procedure is based on the Monte Carlo method, which statistically estimates each element in the deconvolved excitation. A discrete Fourier transform technique is used to improve th...

A method which combines the accuracy of Monte Carlo dose calculation with a finite size pencil-beam based intensity modulation optimization is presented. The pencil-beam algorithm is employed to compute the fluence element updates for a converging sequence of Monte Carlo dose distributions. The combination is shown to improve results over the pencil-beam based optimization in a lung tumour case and

MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4c and has been upgraded to most MCNP5 capabilities. MCNP is a highly

Laurie S. Waters; Gregg W. McKinney; Joe W. Durkee; Michael L. Fensin; John S. Hendricks; Michael R. James; Russell C. Johns; Denise B. Pelowitz

A Monte Carlo model has been developed for optical coherence tomography (OCT). A geometrical optics implementation of the OCT probe with low-coherence interferometric detection was combined with three-dimensional stochastic Monte Carlo modelling of photon propagation in the homogeneous sample medium. Optical properties of the sample were selected to simulate intralipid and blood, representing moderately (g = 0.7) and highly (g

Derek J Smithies; Tore Lindmoyz; Zhongping Chen; Thomas E Milner

The purpose of a Monte Carlo visualization tool is to aid in the generation of the input file while enabling the user to efficiently debug the input file and to optionally allow the user to display output information including random walks. This paper will provide an overview of three different aspects of Monte Carlo code visualization: (1) input file creation; (2) geometry visualization; and (3) output visualization. A brief description of some of the tools available in each area will be presented. However, the focus will be on the capabilities of the MCNP Visual Editor because it is the code most familiar to the authors.

Schwarz, Randolph A. (BATTELLE (PACIFIC NW LAB)); Carter, Lee (Carter M.C. Analysis, Inc.); Kling, A., et al.

The recently reported method for computing thermal desorption rates via a Monte Carlo evaluation of the appropriate transition state theory expression (J. E. Adams and J. D. Doll, J. Chem. Phys. 74, 1467 (1980)) is extended, by the use of importance sampling, so as to generate the complete temperature dependence in a single calculation. We also describe a straightforward means of calculating the activation energy for the desorption process within the same Monte Carlo framework. The result obtained in this way represents, for the case of a simple desorptive event, an upper bound to the true value.
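The importance-sampling idea above, recovering a whole temperature dependence from samples drawn at one reference temperature, can be sketched as Boltzmann reweighting. The two-level test system below is an assumption chosen so the result has a closed form; it is not the desorption-rate expression itself:

```python
import math, random

def reweight(energies, beta_ref, beta):
    """Estimate Z(beta) / Z(beta_ref) from energies sampled at beta_ref
    by averaging the reweighting factor exp(-(beta - beta_ref) * E)."""
    w = [math.exp(-(beta - beta_ref) * e) for e in energies]
    return sum(w) / len(w)

# Two-level system with E in {0, 1}, sampled at beta_ref = 1:
rng = random.Random(3)
b0 = 1.0
p1 = math.exp(-b0) / (1.0 + math.exp(-b0))   # Boltzmann weight of E = 1
energies = [1.0 if rng.random() < p1 else 0.0 for _ in range(20000)]
ratio = reweight(energies, b0, 2.0)
# exact answer: Z(2) / Z(1) = (1 + e**-2) / (1 + e**-1)
```

One sampled ensemble thus yields estimates at a continuum of nearby temperatures, though the statistical quality degrades as `beta` moves far from `beta_ref`.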

We demonstrate that evolutionary Monte Carlo (EMC) can be applied successfully to simulations of protein folding on simple lattice models, and to finding the ground state of a protein. In all cases, EMC is faster than the genetic algorithm and the conventional Metropolis Monte Carlo, and in several cases it finds new lower energy states. We also propose one method for the use of secondary structures in protein folding. The numerical results show that it is drastically superior to other methods in finding the ground state of a protein.

Monte Carlo ray tracing programs are now being used to solve many optical analysis problems in which the entire optomechanical system must be considered. In many analyses, it is desired to consider the effects of diffraction by mechanical edges. Smoothly melding the effects of diffraction, a wave phenomenon, into a ray-tracing program is a significant technical challenge. This paper discusses the suitability of several methods of calculating diffraction for use in ray tracing programs. A method based on the Heisenberg Uncertainty Principle was chosen for use in TracePro, a commercial Monte Carlo ray tracing program, and is discussed in detail.

Freniere, Edward R.; Gregory, G. Groot; Hassler, Richard A.

The sliding friction as a function of scanning velocity at the nanometer scale was simulated based on a modified one-dimensional Tomlinson model. Monte Carlo theory was exploited to describe the thermally activated hopping of the contact atoms, where both backward and forward jumps were allowed to occur. By comparing with the Monte Carlo results, improvements to current semiempirical solutions [E. Riedo, Phys. Rev. Lett. 91, 084502 (2003)] were made. Finally, experimental results of sliding friction on a NaCl(100) surface as a function of normal load and scanning velocity [E. Gnecco, Phys. Rev. Lett. 84, 1172 (2000)] were successfully simulated.

Furlong, Octavio Javier; Manzi, Sergio Javier; Pereyra, Victor Daniel; Bustos, Victor; Tysoe, Wilfred T.
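The thermally activated hopping step with both forward and backward jumps can be sketched with Arrhenius attempt probabilities. This is a minimal sketch in units of kT = 1; the barrier and bias values are arbitrary and not taken from the paper:

```python
import math, random

def hopping_drift(bias, barrier=5.0, attempts=20000, seed=7):
    """Thermally activated hopping with forward and backward jumps:
    a positive bias lowers the forward barrier and raises the backward
    one, producing a net drift (Arrhenius rates, kT = 1)."""
    rng = random.Random(seed)
    p_fwd = math.exp(-(barrier - bias))
    p_bwd = math.exp(-(barrier + bias))
    x = 0
    for _ in range(attempts):
        u = rng.random()
        if u < p_fwd:
            x += 1
        elif u < p_fwd + p_bwd:
            x -= 1
    return x / attempts

drift = hopping_drift(bias=2.0)   # net forward drift; zero bias gives none
```

Allowing the backward jump matters most at small bias or high temperature, which is precisely the regime where the earlier forward-only semiempirical solutions needed correction.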

A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs.

The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithm developments. We conclude with an update of the Berlin wall plot comparing costs of all popular fermion formulations.

We show that the formalism of tensor-network states, such as the matrix-product states (MPS), can be used as a basis for variational quantum Monte Carlo simulations. Using a stochastic optimization method, we demonstrate the potential of this approach by explicit MPS calculations for the transverse Ising chain with up to N=256 spins at criticality, using periodic boundary conditions and D x D matrices with D up to 48. The computational cost of our scheme formally scales as ND^3, whereas standard MPS approaches and the related density matrix renormalization group method scale as ND^5 and ND^6, respectively, for periodic systems.

Chemical reactions in molecular crystals, yielding new entities (dimers, trimers,..., polymers) in the original structure, are simulated for the first time by stochastic Monte Carlo methods. The results are compared with those obtained by deterministic methods. They show that numerical simulation is a tool for understanding the evolution of these mixed systems. They are in kinetic and not in thermodynamic control. Reactive site distributions, x-ray diffuse scattering, and chain length distributions can be simulated. Comparisons are made with deterministic models and experimental results obtained in the case of the solid state dimerization of cinnamic acid in the beta phase and in the case of the solid state polymerization of diacetylenes.

Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle tran...

A solid theoretical basis of the Monte Carlo calculation of ratios is established. The exact distribution density of ratios is developed and discussed. In addition, for Monte Carlo practice, a handy approximation formula of the distribution density and ...
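In day-to-day Monte Carlo practice, ratio estimates are usually reported with a first-order (delta-method) standard error rather than the exact distribution discussed above. The sketch below is that standard textbook formula applied to toy data; it is not taken from the paper:

```python
import math, random

def mc_ratio(xs, ys):
    """Ratio of two sample means with a first-order error-propagation
    standard error, keeping the x-y covariance (histories paired)."""
    n = len(xs)
    ax, ay = sum(xs) / n, sum(ys) / n
    vx = sum((x - ax) ** 2 for x in xs) / (n * (n - 1))
    vy = sum((y - ay) ** 2 for y in ys) / (n * (n - 1))
    cxy = sum((x - ax) * (y - ay) for x, y in zip(xs, ys)) / (n * (n - 1))
    r = ax / ay
    se = abs(r) * math.sqrt(vx / ax**2 + vy / ay**2 - 2.0 * cxy / (ax * ay))
    return r, se

rng = random.Random(5)
xs = [rng.uniform(0.0, 2.0) for _ in range(10000)]   # mean 1
ys = [rng.uniform(1.0, 3.0) for _ in range(10000)]   # mean 2
r, se = mc_ratio(xs, ys)                              # r near 0.5
```

The exact ratio density matters precisely when this linearized error bar breaks down, e.g. when the denominator mean is small compared with its spread.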

This article presents a methodology for checking the existence of the azeotrope and computing its composition, density, and pressure at a given temperature by integrating chemical engineering insights with molecular simulation principles. Liquid-vapor equilibrium points are computed by molecular simulations using the Gibbs ensemble Monte Carlo (GEMC) method at constant volume. The appearance of the azeotropic point is marked by

Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with the conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.

Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine.

This paper presents results of a Monte Carlo simulation of eight families of robust regression estimators in various situations. The effects studied include long-tailed error terms, measurement error in the independent variables, various spacings of the independent variables, different sample sizes and correlation between the independent variables. An estimator that combines the best features of several of the estimators is

This project aims to look at the impact made by certain approximations in electron scattering experiments—specifically whether accounting for these approximation errors is necessary. When using a moveable gun mount, the interaction volume can be determined using a line and cylinder approximation. Data is presented comparing this approximation to the actual volume computed using a Monte Carlo method. A uniform

A Monte Carlo calculation is described for the scattering and absorption of nonrelativistic electrons moving through a slab of uniformly distributed material of given atomic number, density, and thickness. We give an elementary discussion of the basic physics necessary for developing a computer simulation of the movement of electrons through the material. A BASIC program was written for microprocessors which

We perform Monte-Carlo simulations of a realistic model of the perovskite multiferroic manganites RMnO3 (R = Gd, Tb, Dy) in order to obtain the ground state phase diagram. It is shown that the dynamic Dzyaloshinskii-Moriya interaction plays a crucial role in stabilizing the state with coexisting incommensurate magnetic and ferroelectric order.

A new computer system for Monte Carlo Experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo Experiment; it also encourages the proper design of Monte Carlo Experiments, and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo Experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as Si, Isp, or Minitab or with a user-supplied program. Both the experimental results and the experimental design may be directly loaded into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language.

The accuracy with which the linear polarization sensitivity of gamma-ray Compton polarimeters can be calculated is investigated by the Monte Carlo method. The data from several operating polarimeter configurations are compared with the simulations. The calculated merits of these configurations are also compared.

L. M. García-Raffi; J. L. Taín; J. Bea; A. Gadea; J. Rico; B. Rubio

Accurate simulation of diagnostics for thermonuclear burn requires detailed modeling of the spatial and energy distributions of particle sources, in-flight reaction kinematics, and Doppler effects. In the ALE multiphysics code HYDRA, this is now achieved using a new Monte Carlo particle transport package based on LLNL's Arrakis library. It tracks neutrons, gammas, and light ions on 2D quadrilateral and 3D

S. M. Sepke; M. V. Patel; M. M. Marinak; M. S. McKinley; M. J. O'Brien; R. J. Procassini

Several tests for cointegration have been suggested in the literature and applied researchers are faced with the problem of which test to use. This paper compares the power and the size distortions of cointegration tests with the Monte Carlo method and finds a trade-off between power and size distortions. Stock and Watson's (1988) SW and Phillips and Ouliaris' (1990) Pz tests

We explore in depth two zero-temperature Monte Carlo methods and apply the techniques to models of high temperature superconductors. Variational Monte Carlo provides a basis for comparing t - J model states from the literature to states we develop which capture striped phenomena seen in density matrix renormalization group (DMRG) studies. Green's function Monte Carlo (GFMC) is discussed in detail with special attention to the sources of error in the method that are not statistical in nature: finite numbers of walkers and nodal structure approximations. We find that approximating the nodes can prevent the GFMC from reaching an unbiased ground state. Two signals of this bias, the hole density and spin-spin correlations, are presented from simulations on a small cluster and represent conflicting "ground state" properties found by supplying the GFMC with varying nodal structures. Therefore we cannot confirm or refute the existence of stripes in the t - J model from a Monte Carlo standpoint within the parameter ranges relevant to high temperature superconductivity. We find that the controversy concerning the nature of the t - J model ground state can be attributed to this bias in the GFMC method.

The wave function of 8Be, which is obtained from the Monte Carlo shell model (MCSM), is discussed. A method to define an intrinsic state in the MCSM is proposed. The appearance of two-α-cluster structures in 8Be is demonstrated.

We present a simulation algorithm for dynamical fermions that combines the multiboson technique with the hybrid Monte Carlo algorithm. We find that the algorithm gives a substantial gain over the standard methods in practical simulations. We point out the ability of the algorithm to treat fermion zero modes in a clean and controllable manner.

A direct measurement of the gravitational acceleration of antimatter has never before been performed. Recently, such an experiment has been proposed, using antihydrogen with an atom interferometer. This paper describes a computer model of the proposed experiment, which is used to test basic assumptions and optimize certain parameters. A Monte Carlo routine for generation of trial data sets

A Monte Carlo scheme for the solution of Poisson's equation is presented. The solution technique applies a unique iterative, boundary propagation scheme that uses successive-under-relaxation (SUR). The relaxation scheme adheres to the traditional SOR iteration matrix splitting; however, it is confined to SUR to achieve convergence. Analogously, it is demonstrated that the convergence of the SUR approach is accelerated by
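For reference, the deterministic successive-under-relaxation update that the stochastic scheme builds on can be sketched on a 1-D Laplace problem; the grid size and relaxation factor below are arbitrary choices, not values from the paper:

```python
def sur_laplace(n=20, omega=0.8, sweeps=2000):
    """Gauss-Seidel iteration with successive under-relaxation
    (omega < 1) for Laplace's equation on [0, 1] with u(0) = 0 and
    u(1) = 1. The exact solution is the linear profile u(x) = x."""
    u = [0.0] * (n + 1)
    u[n] = 1.0
    for _ in range(sweeps):
        for i in range(1, n):
            gs = 0.5 * (u[i - 1] + u[i + 1])          # Gauss-Seidel value
            u[i] = (1.0 - omega) * u[i] + omega * gs  # under-relaxed update
    return u

u = sur_laplace()   # converges to u[i] = i / n
```

With omega below 1 each sweep blends the old value with the Gauss-Seidel value, trading speed for robustness; the same splitting underlies the SUR iteration matrix mentioned above.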

The thesis deals with the first stage of planet formation, namely dust coagulation from micron to millimeter sizes in circumstellar disks. For the first time, we collect and compile the recent laboratory experiments on dust aggregates into a collision model that can be implemented into dust coagulation models. We put this model into a Monte Carlo code that uses representative

We present a Monte Carlo method for the direct evaluation of the difference between the free energies of two crystal structures. The method is built on a lattice-switch transformation that maps a configuration of one structure onto a candidate configuration of the other by ``switching'' one set of lattice vectors for the other, while keeping the displacements with respect to

A. D. Bruce; A. N. Jackson; G. J. Ackland; N. B. Wilding

In this paper, we explore the application of continuum Monte Carlo methods to effective field theory models. Effective field theories, in this context, are those in which a Fock space decomposition of the state is useful. These problems arise both in nucl...

This paper discusses the formulation and implementation of an acceleration approach for the MCScene code, a high fidelity model for full optical spectrum (UV to LWIR) hyperspectral image (HSI) simulation. The MCScene simulation is based on a Direct Simulation Monte Carlo approach for modeling 3D atmospheric radiative transport, as well as spatially inhomogeneous surfaces including surface BRDF effects. The model

Steven Richtsmeier; Robert Sundberg; Frank O. Clark

This paper discusses the formulation and implementation of an acceleration approach for the MCScene code, a high fidelity model for full optical spectrum (UV to LWIR) hyperspectral image (HSI) simulation. The MCScene simulation is based on a Direct Simulation Monte Carlo approach for modeling 3D atmospheric radiative transport, as well as spatially inhomogeneous surfaces including surface BRDF effects. The model

Steven Richtsmeier; Robert Sundberg; Raymond Haren; Frank O. Clark

Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two dimensional radiation transport within the hohlraum, but the impact

This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo Code, replacement pages containing corrections, Part II of the report which was previously unpublished, and a new Table of Contents. The modifications include a Klein Nishina estimator for gamma rays. Use of such an

The report contains sections containing descriptions of the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo

The Radiation Safety Information Computational Center (RSICC) is the designated central repository of the United States Department of Energy (DOE) for nuclear software in radiation transport, safety, and shielding. Since the center was established in the early 1960s, there have been several Monte Carlo (MC) particle transport computer codes contributed by scientists from various countries. An overview of the neutron

Monte Carlo simulations done for four-dimensional lattice gauge systems are described, where the gauge group is one of the following: U(1); SU(2); Z_N, i.e., the subgroup of U(1) consisting of the elements e^(2πin/N) with integer n; the eight-element group of quaternions, Q; the 24- and 48-element subgroups of SU(2), denoted by T and O, which reduce to the rotation groups of the tetrahedron and the octahedron when their centers, Z_2, are factored out. All of these groups can be considered subgroups of SU(2) and a common normalization was used for the action. The following types of Monte Carlo experiments are considered: simulations of a thermal cycle, where the temperature of the system is varied slightly every few Monte Carlo iterations and the internal energy is measured; mixed-phase runs, where several Monte Carlo iterations are done at a few temperatures near a phase transition, starting with a lattice which is half ordered and half disordered; measurements of averages of Wilson factors for loops of different shape. 5 figures, 1 table. (RWR)
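As a concrete illustration of the thermal-cycle experiment described above, the following sketch runs a Metropolis thermal cycle on the two-dimensional Ising model as a simpler stand-in for a four-dimensional gauge system; the lattice size, β schedule and sweep counts are illustrative choices, not those of the original study:

```python
import math
import random

def metropolis_sweep(spins, L, beta, rng):
    """One Metropolis sweep over an L x L periodic Ising lattice."""
    for i in range(L):
        for j in range(L):
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -spins[i][j]

def internal_energy(spins, L):
    """Energy per site, counting each nearest-neighbor bond once."""
    E = 0
    for i in range(L):
        for j in range(L):
            E -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
    return E / (L * L)

def thermal_cycle(L=16, sweeps_per_beta=50, seed=7):
    """Vary beta slightly at each stop, up and back down, measuring energy."""
    rng = random.Random(seed)
    betas = [0.20 + 0.02 * k for k in range(21)]   # 0.20 .. 0.60
    betas += betas[::-1]                           # and back down
    spins = [[1] * L for _ in range(L)]
    history = []
    for beta in betas:
        for _ in range(sweeps_per_beta):
            metropolis_sweep(spins, L, beta, rng)
        history.append((beta, internal_energy(spins, L)))
    return history
```

Plotting energy against β over the up and down branches of such a cycle reveals hysteresis near a first-order transition, which is the diagnostic the thermal-cycle runs are after.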

We present a novel soft-in soft-out (SISO) detection scheme based on Markov chain Monte Carlo (MCMC) simulations. The proposed detector is applicable to both synchronous multiuser and multiple-input multiple-output (MIMO) communication systems. Unlike previous publications on the subject, we use a Monte Carlo integration technique to arrive at the receiver structure. The proposed multiuser/MIMO detector is found to follow the Rao-Blackwell formulation and

The reliability analysis of critical systems is often performed using fault-tree analysis. Fault trees are analyzed using analytic approaches or Monte Carlo simulation. The usage of the analytic approaches is limited to a few models and certain kinds of distributions. In contrast to the analytic approaches, Monte Carlo simulation can be applied broadly. However, Monte Carlo simulation is time-consuming because of
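To illustrate the Monte Carlo route for fault trees, here is a minimal sketch; the gate structure (TOP = A AND (B OR C)) and the basic-event probabilities are hypothetical, not taken from any system analyzed above:

```python
import random

# Hypothetical toy fault tree: TOP = A AND (B OR C), with independent
# basic events (the probabilities below are purely illustrative).
P = {"A": 0.1, "B": 0.2, "C": 0.3}

def top_event_probability(trials=200_000, seed=1):
    """Estimate P(TOP) by direct Monte Carlo sampling of the basic events."""
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        a = rng.random() < P["A"]
        b = rng.random() < P["B"]
        c = rng.random() < P["C"]
        if a and (b or c):      # evaluate the tree for this sampled state
            count += 1
    return count / trials
```

For this toy tree the analytic answer is P(A)·(P(B) + P(C) − P(B)P(C)) = 0.044, so the estimate can be checked directly; for trees with repeated events or non-exponential component distributions, that analytic shortcut is exactly what becomes unavailable.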

Thanks to the dramatic decrease in computer costs, the no less dramatic increase in those same computers' capabilities, and the availability of free software and libraries that allow the setup of small parallel computation installations, the scientific community is now in a position where parallel computation is within easy reach even of moderately budgeted research groups. The software package PMCD (Parallel Monte Carlo Driver) was developed to drive the Monte Carlo simulation of a wide range of user-supplied models in parallel computation environments. The typical Monte Carlo simulation involves using a software implementation of a function to repeatedly generate function values, and typically these implementations were developed for sequential runs. Our driver enables the Monte Carlo simulation to run in parallel with minimum changes to the original code that implements the function of interest to the researcher. In this communication we present the main goals and characteristics of our software, together with a simple study of its expected performance. Monte Carlo simulations are informally classified as ``embarrassingly parallel'', meaning that the gains from parallelizing a Monte Carlo run should be close to ideal, i.e. with speedups close to linear. In this paper our simple study shows that, without compromising ease of use and implementation, one can get performance very close to the ideal.
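The driver pattern described above, an unchanged sequential block function farmed out to worker processes, can be sketched with Python's multiprocessing; this is not the PMCD code itself, and the quarter-circle π estimate is just a stand-in for a user-supplied model:

```python
import random
from multiprocessing import Pool

def mc_block(args):
    """One independent sequential Monte Carlo block: count hits inside the
    unit quarter-circle. Each block receives its own seed so the random
    streams of different workers differ."""
    seed, n = args
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1.0:
            hits += 1
    return hits

def parallel_pi(total=400_000, workers=4):
    """Drive the same sequential block across worker processes and combine
    the partial results into one estimate."""
    per_worker = total // workers
    with Pool(workers) as pool:
        counts = pool.map(mc_block, [(seed, per_worker) for seed in range(workers)])
    return 4.0 * sum(counts) / (per_worker * workers)
```

Because the blocks are independent, the speedup is limited only by process start-up and the final reduction, which is the "embarrassingly parallel" property the text refers to. On platforms that spawn rather than fork, `parallel_pi()` should be called from under an `if __name__ == "__main__":` guard.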

A hybrid deterministic–stochastic algorithm combining the simplex method (SM) and a genetic algorithm (GA) was applied to the problem of extracting the optical and morphological properties of human skin (HSOMPs) from visual reflectance spectroscopy data. The results using the GA-SM hybrid algorithm adopting tournament selection and selecting new sets of HSOMPs were compared with those using other conventional optimization algorithms

The neutral particle transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. The Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid Monte Carlo/S_N method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The hybrid method has been successfully applied to realistic shielding problems. The vectorized Monte Carlo algorithm in the hybrid method has been ported to the massively parallel architecture of the Connection Machine. Comparisons of performance on a vector machine (Cray Y-MP) and the Connection Machine (CM-2) show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when realistic problems requiring variance reduction are considered. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well.

Theoretical study of selected gamma-ray and electron diagnostics necessitates coupling Cerenkov radiation to electron/photon cascades. A Cerenkov production model and its incorporation into a general-geometry Monte Carlo coupled electron/photon transport code are discussed. A special optical photon ray-trace is implemented using bulk optical properties assigned to each Monte Carlo zone. Good agreement exists between experimental and calculated Cerenkov data in the case of a carbon-dioxide gas Cerenkov detector experiment. Cerenkov production and threshold data are presented for a typical carbon-dioxide gas detector that converts a 16.7 MeV photon source to Cerenkov light, which is collected by optics and detected by a photomultiplier.

Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.

At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month, accounting for nearly 200 hours of CDC-7600 time.

Monte Carlo calculation codes make it possible to study accurately all the parameters relevant to radiation effects, such as the dose deposition or the type of microscopic interactions, through particle-by-particle transport simulation. These features are very useful for neutron irradiations, from machine development up to dosimetry. This paper illustrates some applications of these codes in Neutron Capture Therapy, fast neutron irradiations, and Neutron Capture Enhancement of fast neutron irradiations.

Paquis, P.; Pignol, J. P.; Cuendet, P.; Fares, G.; Diop, C.; Iborra, N.; Hachem, A.; Mokhtari, F.; Karamanoukian, D.

We summarize results of quantum Monte Carlo simulations of the degenerate single-impurity Anderson model using the impurity algorithm of Hirsch and Fye. Using methods of Bayesian statistical inference, coupled with the principle of maximum entropy, we extracted the single-particle spectral density from the imaginary-time Green's function. The variations of resulting spectral densities with model parameters agree qualitatively with the spectral densities predicted by NCA calculations. All the simulations were performed on a cluster of 16 IBM R6000/560 workstations under the control of the message-passing software PVM. We describe the trivial parallelization of our quantum Monte Carlo code both for the cluster and the CM-5 computer. Other issues for effective parallelization of the impurity algorithm are also discussed.

The Radiation Safety Information Computational Center (RSICC) is the designated central repository of the United States Department of Energy (DOE) for nuclear software in radiation transport, safety, and shielding. Since the center was established in the early 1960s, several Monte Carlo (MC) particle transport computer codes have been contributed by scientists from various countries. An overview of the neutron transport computer codes in the RSICC collection is presented.

Recently, Monte-Carlo Tree Search (MCTS) has advanced the field of computer Go substantially. In this article we investigate the application of MCTS to the game Lines of Action (LOA). A new MCTS variant, called MCTS-Solver, has been designed to play narrow tactical lines better in sudden-death games such as LOA. The variant differs from the traditional MCTS in respect

Mark H. M. Winands; Yngvi Björnsson; Jahn-Takeshi Saito

Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis) as well as laterally asymmetric features (e.g. melanocytic invasion) were modeled in an inhomogeneous Monte Carlo model.

Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James

Alaskan wolves live in a harsh climate and are hunted intensively. Penna's biological aging code, using Monte Carlo methods, has been adapted to simulate wolf survival. It was run on the case in which hunting causes the disruption of wolves' social structure. Social disruption was shown to increase the number of deaths occurring at a given level of hunting. For high levels of social disruption, the population did not survive.
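A minimal sketch of the Penna bit-string aging model underlying such simulations is given below; the threshold, reproduction age and birth rate are illustrative values, and no hunting or social-disruption terms are included:

```python
import random

# Penna bit-string aging model, minimal sketch (parameters illustrative).
MAXAGE = 32   # genome length in bits = maximum attainable age
T = 3         # an individual dies once T bad mutations are expressed
R = 6         # minimum reproduction age
BIRTH = 0.5   # probability per step that an adult produces one offspring

def penna_step(pop, capacity, rng):
    """Advance the population (list of (genome, age) pairs) one time step."""
    next_pop = []
    crowding = len(pop) / capacity            # Verhulst crowding factor
    for genome, age in pop:
        age += 1
        if age >= MAXAGE:
            continue                          # died of old age
        expressed = bin(genome & ((1 << age) - 1)).count("1")
        if expressed >= T:
            continue                          # died of expressed mutations
        if rng.random() < crowding:
            continue                          # died of crowding
        next_pop.append((genome, age))
        if age >= R and rng.random() < BIRTH:
            child = genome | (1 << rng.randrange(MAXAGE))  # one new mutation
            next_pop.append((child, 0))
    return next_pop
```

Bit k of the genome is a deleterious mutation expressed from age k onward, so iterating this step yields the age-structured survival curves that the wolf study perturbs with additional hunting mortality.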

We present a Monte Carlo method for the direct evaluation of the difference between the free energies of two crystal structures. The method is built on a lattice-switch transformation that maps a configuration of one structure onto a candidate configuration of the other by 'switching' one set of lattice vectors for the other, while keeping the displacements with respect to

A. D. Bruce; A. N. Jackson; G. J. Ackland; N. B. Wilding

Finding the "right" number of clusters, k, for a dataset is a difficult, and often ill-posed, problem. In a probabilistic clustering context, likelihood ratios, penalized likelihoods, and Bayesian techniques are among the more popular techniques. In this paper a new cross-validated likelihood criterion is investigated for determining cluster structure. A practical clustering algorithm based on Monte Carlo cross-validation (MCCV) is introduced. The algorithm permits the data analyst to

We demonstrate that Monte Carlo sampling can be used to efficiently extract the expectation value of projected entangled pair states with a large virtual bond dimension. We use the simple update rule introduced by H. C. Jiang [Phys. Rev. Lett. 101, 090603 (2008)] to obtain the tensors describing the ground state wave function of the antiferromagnetic Heisenberg model and evaluate

We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.

Silver, R.N.; Sivia, D.S.; Gubernatis, J.E. (Los Alamos National Lab., NM (USA)); Jarrell, M. (Ohio State Univ., Columbus, OH (USA). Dept. of Physics)

This paper proposes a Monte Carlo-based energy and mass congruent mapping (EMCM) method to calculate the dose on deforming anatomy. Different from dose interpolation methods, EMCM separately maps each voxel's deposited energy and mass from a source image to a reference image with a displacement vector field (DVF) generated by deformable image registration (DIR). EMCM was compared with other dose mapping methods: energy-based dose interpolation (EBDI) and trilinear dose interpolation (TDI). These methods were implemented in EGSnrc/DOSXYZnrc, validated using a numerical deformable phantom and compared for clinical CT images. On the numerical phantom with an analytically invertible deformation map, EMCM mapped the dose exactly the same as its analytic solution, while EBDI and TDI had average dose errors of 2.5% and 6.0%. For a lung patient's IMRT treatment plan, EBDI and TDI differed from EMCM by 1.96% and 7.3% in the lung patient's entire dose region, respectively. As a 4D Monte Carlo dose calculation technique, EMCM is accurate and its speed is comparable to 3D Monte Carlo simulation. This method may serve as a valuable tool for accurate dose accumulation as well as for 4D dosimetry QA.
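The core EMCM idea, mapping deposited energy and mass separately and only then forming dose, can be sketched on a toy one-dimensional grid; the function name and the deterministic voxel-to-voxel DVF below are simplifications of the real method (an actual DVF is 3-D and fractional):

```python
def emcm_map(energy, mass, dvf, n_ref):
    """Map per-voxel deposited energy and mass onto a reference grid via a
    toy 1-D displacement field, then form dose = energy / mass.
    dvf[v] is the reference-voxel index that source voxel v maps to."""
    e_ref = [0.0] * n_ref
    m_ref = [0.0] * n_ref
    for v, target in enumerate(dvf):
        e_ref[target] += energy[v]     # accumulate energy congruently
        m_ref[target] += mass[v]       # accumulate mass congruently
    return [e / m if m > 0 else 0.0 for e, m in zip(e_ref, m_ref)]
```

Because energy and mass are summed before dividing, two source voxels that merge into one reference voxel yield the physically consistent dose (total energy over total mass), which is exactly where naive dose interpolation goes wrong.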

Behavior of two-phase (water and oil) flow in a liquid-unsaturated one-dimensional heterogeneous porous medium was investigated using the Monte Carlo (MC) technique. Two spatially correlated stochastic processes, intrinsic permeability, k, and soil retention parameter, a, were used in the analysis under the conditions of deterministic a and stochastic log-k, perfectly correlated log-k and a, and statistically independent (uncorrelated)

A spectral line-by-line (LBL) method is developed for photon Monte Carlo simulations of radiation in participating media. The performance of the proposed method is compared with that of the stochastic full-spectrum k-distribution (FSK) method in both homogeneous and inhomogeneous media, and in both traditional continuum media and media represented by stochastic particle fields, which are frequently encountered in combustion simulations.

In this work we propose a Bayesian approach to the parameter estimation problem for stochastic autoregressive models of order p, AR(p), applied to the streamflow forecasting problem. Procedures for model selection, forecasting and robustness evaluation through Markov chain Monte Carlo (MCMC) simulation techniques are also presented. The proposed approach is compared with the classical one by Box-Jenkins (maximum likelihood

With the advent of powerful computers and parallel processing, including Grid technology, the use of Monte Carlo (MC) techniques for radiation transport simulation has become the most popular method for modeling radiological imaging systems and particularly X-ray computed tomography (CT). The stochastic nature of involved processes such as X-ray photon generation, interaction with matter and detection makes MC

We present a Monte Carlo approach for training partially observable diffusion processes. We apply the approach to diffusion networks, a stochastic version of continuous recurrent neural networks. The approach is aimed at learning probability distributions of continuous paths, not just expected values. Interestingly, the relevant activation statistics used by the learning rule presented here are inner products in the Hilbert

Javier R. Movellan; Paul Mineiro; Ruth J. Williams

Number-theoretic methods (NTM), or quasi-Monte Carlo methods, are a class of techniques to generate points of the uniform distribution in the s-dimensional unit cube. NTM is a special method which combines number theory and numerical analysis. The uniformly scattered set of points in the unit cube obtained by NTM is usually called a set of quasi-random numbers or a number-theoretic net (NT-net), since it may be used instead of random numbers in many statistical problems. An NT-net can be defined as a set of representative points of the uniform distribution. There are different criteria to measure uniformity and different methods to generate NT-nets. Theoretically, the rate of convergence of NTM is better than that of the Monte Carlo method. The high-resolution force-on-force combat simulation is usually modeled as a stochastic Monte Carlo type model and discrete event system. In high-resolution Monte Carlo combat simulations a large amount of random numbers has to be generated. In Monte Carlo type combat simulation models every unit has certain probabilities of detecting and affecting each enemy unit at each time interval. Usually the Monte Carlo method is used to calculate the expected value of some property of the model; this is a matter of numerical integration with the Monte Carlo method. In this paper the effectiveness of NTMs is compared with the Monte Carlo method in a simulated high-resolution combat case. Some methods to generate NT-nets are introduced. The estimates of NTM and Monte Carlo simulations are studied by comparing their statistical properties.
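A minimal one-dimensional illustration of an NT-net versus pseudorandom points is the van der Corput sequence; the integrand and sample size here are arbitrary choices for demonstration:

```python
import math
import random

def van_der_corput(n, base=2):
    """n-th point of the base-b van der Corput low-discrepancy sequence,
    obtained by reflecting the digits of n about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def integrate(f, points):
    """Estimate the integral of f over [0, 1] by averaging over the points."""
    return sum(f(x) for x in points) / len(points)

N = 4096
f = lambda x: math.sin(math.pi * x)      # exact integral over [0, 1] is 2/pi
quasi = integrate(f, [van_der_corput(i + 1) for i in range(N)])

rng = random.Random(0)
pseudo = integrate(f, [rng.random() for _ in range(N)])
```

For smooth integrands the quasi-random estimate typically converges like (log N)/N rather than the 1/√N of pseudorandom sampling, although for any single seed the pseudorandom error can happen to be small.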

This article considers how to estimate Bayesian credible and highest probability density (HPD) intervals for parameters of interest and provides a simple Monte Carlo approach to approximate these Bayesian intervals when a sample of the relevant parameters can be generated from their respective marginal posterior distributions using a Markov chain Monte Carlo (MCMC) sampling algorithm. We also develop a Monte
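Given a marginal posterior sample from an MCMC run, both interval types can be approximated directly from the sorted draws; the sketch below follows the common equal-tail and shortest-interval (Chen-Shao style) constructions and is not necessarily the article's own algorithm:

```python
import random

def equal_tail_interval(samples, alpha=0.05):
    """Equal-tailed (1 - alpha) credible interval from posterior draws."""
    s = sorted(samples)
    n = len(s)
    return s[int(alpha / 2 * n)], s[int((1 - alpha / 2) * n) - 1]

def hpd_interval(samples, alpha=0.05):
    """Shortest interval containing a (1 - alpha) fraction of the draws;
    for a unimodal posterior this approximates the HPD interval."""
    s = sorted(samples)
    n = len(s)
    k = int((1 - alpha) * n)
    # slide a window of k consecutive order statistics, keep the narrowest
    i = min(range(n - k + 1), key=lambda i: s[i + k - 1] - s[i])
    return s[i], s[i + k - 1]
```

For a symmetric posterior the two intervals coincide; for skewed posteriors the HPD interval is shorter, which is the practical reason for computing it.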

Recent progress on general-purpose Monte Carlo event generators is reviewed, with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple bremsstrahlung emissions off the initial and final state and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons of perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for the general-purpose Monte Carlo programs.

Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H2O and C3 vibrational states, using 7 PESs, 3 trial wavefunction forms, two methods of non-linear basis function parameter optimization, and both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H2O and C3. For C3, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C3 PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.

Brown, W.R. [Univ. of California, Berkeley, CA (United States), Chemistry Dept.; Lawrence Berkeley National Lab., CA (United States), Chemical Sciences Div.]

A Monte Carlo learning and biasing technique is described that does its learning and biasing in the random number space rather than the physical phase space. The technique is probably applicable to all linear Monte Carlo problems, but no proof is provided here. Instead, the technique is illustrated with a simple Monte Carlo transport problem. Problems encountered, problems solved, and speculations about future progress are discussed. 12 refs.

Historically, Monte Carlo variance reduction techniques have been developed one at a time in response to calculational needs. This paper provides the theoretical basis for obtaining unbiased Monte Carlo estimates from all possible combinations of variance reduction techniques. Hitherto, the techniques had not been proven to be unbiased in arbitrary combinations. The authors are unaware of any Monte Carlo techniques (in any linear process) that are not treated by the theorem herein.

Booth, T.E.; Pederson, S.P. (Los Alamos National Lab., Los Alamos, NM (US))

The authors consider the stationary-phase Monte Carlo method and a variety of related approaches. The stationary-phase Monte Carlo method is aimed at the generic problem of performing high-dimensional integrations of rapidly oscillatory integrands. Real-time numerical path integration is one important class of applications where such problems arise. They examine the relationship between the stationary-phase Monte Carlo approach and the recent work of Makri and Miller and of Filinov.

Monte Carlo simulations are nowadays an essential tool in emission tomography (single-photon emission computed tomography, SPECT, and positron emission tomography, PET), for assisting system design and optimizing imaging and processing protocols. Several Monte Carlo simulation software packages are currently available for modeling SPECT and PET configurations. This paper presents an overview of current trends concerning Monte Carlo simulations in SPECT and PET. The evolution

This paper examines the use of quasirandom sequences of points in place of pseudorandom points in Monte Carlo neutron transport calculations. For two simple demonstration problems, the root mean square error, computed over a set of repeated runs, is found to be significantly less when quasirandom sequences are used ("quasi-Monte Carlo method") than when a standard Monte Carlo calculation is performed using only pseudorandom points.

Discrete-ordinates and Monte Carlo techniques are compared for solving integrodifferential equations, along with their relative adaptability to vector processors. The author discusses the utility of multiprocessors for Monte Carlo calculations and describes a simple architecture (the monodirectional edge-coupled array, or MECA) that seems ideally suited to Monte Carlo and overcomes many of the packaging problems associated with more general multiprocessors. 18 refs., 3 figs., 1 tab.

The Monte-Carlo Tree Search algorithm has been successfully applied in various domains. However, its performance heavily depends on the Monte-Carlo part. In this paper, we propose a generic way of improving the Monte-Carlo simulations by using RAVE values, which already strongly improved the tree part of the algorithm. We prove the generality and efficiency of our approach by showing improvements on two different applications: the game of Havannah and the game of Go.
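One common way to blend RAVE (all-moves-as-first) statistics into the tree part of MCTS is the schedule of Gelly and Silver; the sketch below assumes per-move statistics dictionaries and an illustrative bias constant, and is not the specific scheme of the paper above:

```python
import math

def rave_value(q, n, q_rave, n_rave, bias=1e-4):
    """Blend the Monte-Carlo value q (n visits) with the RAVE/AMAF value
    q_rave (n_rave samples) using Gelly and Silver's beta schedule."""
    if n == 0:
        return float("inf")   # force exploration of unvisited moves first
    beta = n_rave / (n + n_rave + 4 * bias ** 2 * n * n_rave)
    return (1 - beta) * q + beta * q_rave

def select_child(children, total_visits, c=1.4):
    """UCT selection over (move, stats) pairs, with RAVE-blended values."""
    def score(st):
        exploit = rave_value(st["q"], st["n"], st["q_rave"], st["n_rave"])
        explore = c * math.sqrt(math.log(total_visits + 1) / (st["n"] + 1))
        return exploit + explore
    return max(children, key=lambda mc: score(mc[1]))
```

As a move accumulates real visits, beta decays toward zero and the noisy but sample-rich RAVE estimate hands over to the unbiased Monte-Carlo value.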

This presentation gives an overview of (1) exascale computing: the different technologies and how to get there; (2) the high-performance proof-of-concept MCMini: features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library): purpose and features. Despite driver issues, OpenCL seems like a good, hardware-agnostic tool. MCMini demonstrates the possibility of GPGPU-based Monte Carlo methods: it shows great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

Monte-Carlo Tree Search (MCTS) is a successful algorithm used in many state-of-the-art game engines. We propose to improve an MCTS solver when a game has more than two outcomes, for example games that can end in draw positions. In this case it significantly improves an MCTS solver to take into account bounds on the possible scores of a node in order to select the nodes to explore. We apply our algorithm to solving Seki in the game of Go and to Connect Four.

The dynamic Monte Carlo renormalization group method introduced by Jan, Moseley, and Stauffer is used to determine the dynamic exponent of the Ising model with conserved magnetization in two dimensions. The authors present an explicit theoretical basis for the method and expand on the original results for the Kawasaki model. The new result clearly demonstrates the validity of the method, and the value of the dynamic exponent, z = 3.79 ± 0.05, supports the conclusion of Halperin, Hohenberg, and Ma.

Moseley, L.L.; Gibbs, P.W. (Univ. of West Indies, St. Michael (Barbados)); Jan, N. (Univ. of West Indies, St. Michael (Barbados); St. Francis Xavier Univ., Antigonish, Nova Scotia (Canada))

A method for sequence optimization in protein models is presented. The approach, which has inherited its basic philosophy from recent work by Deutsch and Kurosky [Phys. Rev. Lett. 76, 323 (1996)] by maximizing conditional probabilities rather than minimizing energy functions, is based upon a different and very efficient multisequence Monte Carlo scheme. By construction, the method ensures that the designed sequences represent good folders thermodynamically. A bootstrap procedure for the sequence space search is devised making very large chains feasible. The algorithm is successfully explored on the two-dimensional HP model [K. F. Lau and K. A. Dill, Macromolecules 32, 3986 (1989)] with chain lengths N=16, 18, and 32.

Irbäck, Anders; Peterson, Carsten; Potthast, Frank; Sandelin, Erik

Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A ≤ 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 40 different (J^π, T) states, plus isobaric analogs, are obtained and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.

A kinetic Monte Carlo simulation of dislocation motion is introduced. The dislocations are assumed to be composed of pure edge and screw segments confined to a fixed lattice. The stress and temperature dependence of the dislocation velocity is studied, and finite-size effects are discussed. It is argued that surfaces and boundaries may play a significant role in the velocity of dislocations. The simulated dislocations are shown to display kinetic roughening according to the exponents predicted by the Kardar-Parisi-Zhang equation.

A quantum mechanical Monte Carlo method has been used for the treatment of molecular problems. The imaginary-time Schroedinger equation, written with a shift in zero energy (E_T - V(R)), can be interpreted as a generalized diffusion equation with a position-dependent rate or branching term. Since diffusion is the continuum limit of a random walk, one may simulate the Schroedinger equation with a function psi (note, not psi^2) as a density of "walks." The walks undergo an exponential birth and death as given by the rate term. 16 refs., 2 tabs.
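The walk picture described above can be sketched for the 1-D harmonic oscillator, where the exact ground-state energy is 0.5 in natural units; the population-control recipe below (growth estimator plus a weak logarithmic correction) is one simple choice among many, not the method of any particular paper above:

```python
import math
import random

def dmc_harmonic(n_walkers=500, steps=2000, dt=0.01, seed=2):
    """Toy diffusion Monte Carlo for V(x) = x^2 / 2 (hbar = m = omega = 1),
    treating psi itself as a density of walkers; exact E0 = 0.5."""
    rng = random.Random(seed)
    walkers = [0.0] * n_walkers
    e_trial = 0.0
    estimates = []
    for step in range(steps):
        count = len(walkers)
        v_sum = 0.0
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))      # diffusion (random walk)
            v = 0.5 * x * x
            v_sum += v
            w = math.exp(-dt * (v - e_trial))       # exponential birth/death
            copies = int(w + rng.random())          # stochastic rounding
            new.extend([x] * copies)
        walkers = new if new else [0.0]
        # growth estimator with weak population control toward n_walkers
        e_trial = v_sum / count + 0.1 * math.log(n_walkers / len(walkers))
        if step >= steps // 2:
            estimates.append(e_trial)
    return sum(estimates) / len(estimates)
```

After the early steps the walker density relaxes to the ground-state wave function, and the trial energy needed to keep the population stable fluctuates around E0.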

Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A ≤ 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 30 different (J^π, T) states, plus isobaric analogs, are obtained and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.

Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A ≤ 8 have been made using a realistic Hamiltonian that fits NN scattering data. Results for more than two dozen different (J^π, T) p-shell states, not counting isobaric analogs, have been obtained. The known excitation spectra of all the nuclei are reproduced reasonably well. Density and momentum distributions and various electromagnetic moments and form factors have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.

Wiringa, R.B. [Argonne National Lab., IL (United States). Physics Div.]

Reduced density matrices are a powerful construct in quantum chemistry, providing a compact representation of highly multi-determinantal wavefunctions, from which the expectation values of important physical properties can be extracted, including multipole moments, polarizabilities and nuclear forces^1,2. Full configuration interaction quantum Monte Carlo (FCIQMC)^3 and its initiator extension (i-FCIQMC)^4 perform a stochastic propagation of signed walkers within a space of Slater determinants to achieve FCI-quality energies without the need to store the complete wavefunction. We present here a method for a stochastic calculation of the 1- and 2-body reduced density matrices within the framework of (i)-FCIQMC, and apply this formulation to a range of archetypal molecular systems. Consideration is also given to the source and nature of systematic and stochastic error, and regimes to effectively alleviate these errors are discussed^5. ^1 P.-O. Löwdin, Phys. Rev. 97, 1474 (1955). ^2 C. A. Coulson, Rev. Mod. Phys. 32, 170 (1960). ^3 G. H. Booth, A. Thom, and A. Alavi, J. Chem. Phys. 131, 054106 (2009). ^4 D. Cleland, G. H. Booth, and A. Alavi, J. Chem. Phys. 132, 041103 (2010). ^5 D. Cleland, PhD thesis, University of Cambridge, 2012.

Overy, Catherine; Cleland, Deidre; Booth, George H.; Shepherd, James J.; Alavi, Ali

Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications.

We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions.

Fischer, David G.; Prahl, Scott A.; Duncan, Donald D.

The issue of fission source convergence in Monte Carlo eigenvalue calculations is of interest because of the potential consequences of erroneous criticality safety calculations. In this work, the authors compare two different techniques to improve the source convergence behavior of standard Monte Carlo calculations applied to challenging source convergence problems. The first method, super-history powering, attempts to avoid discarding important fission sites between generations by delaying stochastic sampling of the fission site bank until after several generations of multiplication. The second method, stratified sampling of the fission site bank, explicitly keeps the important sites even if conventional sampling would have eliminated them. The test problems are variants of Whitesides' Criticality of the World problem in which the fission site phase space was intentionally undersampled in order to induce marginally intolerable variability in local fission site populations. Three variants of the problem were studied, each with a different degree of coupling between fissionable pieces. Both the super-history powering method and the stratified sampling method were shown to improve convergence behavior, although stratified sampling is more robust for the extreme case of no coupling. Neither algorithm completely eliminates the loss of the most important fissionable piece, and if coupling is absent, the lost piece cannot be recovered unless its sites from earlier generations have been retained. Finally, criteria for measuring source convergence reliability are proposed and applied to the test problems.
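The contrast between conventional and stratified resampling of a fission site bank can be illustrated with a toy sketch (not the paper's actual implementation; the bank layout and quotas below are assumptions). Conventional resampling can drop a weakly coupled piece with few sites entirely; stratified sampling guarantees each occupied region at least one surviving site.

```python
import random

random.seed(0)

# Hypothetical fission-site bank: (region, position) pairs, with one
# weakly coupled "piece" (region 2) holding very few sites.
bank = ([(0, random.random()) for _ in range(480)]
        + [(1, random.random()) for _ in range(515)]
        + [(2, random.random()) for _ in range(5)])

def simple_sample(bank, n):
    """Conventional resampling: the rare region can vanish entirely."""
    return [random.choice(bank) for _ in range(n)]

def stratified_sample(bank, n):
    """Stratified resampling: give each occupied region its proportional
    share of the new bank, rounded up to at least one site."""
    regions = {}
    for site in bank:
        regions.setdefault(site[0], []).append(site)
    out = []
    for sites in regions.values():
        quota = max(1, round(n * len(sites) / len(bank)))
        out.extend(random.choice(sites) for _ in range(quota))
    return out

kept = {region for region, _ in stratified_sample(bank, 200)}
print(sorted(kept))  # every region survives, including the rare one
```

By construction the `max(1, ...)` quota keeps the undersampled piece alive across generations, which is the behavior the stratified method is designed to enforce.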

Monte Carlo techniques have become popular in different areas of medical physics, taking advantage of powerful computing systems. In particular, they have been extensively applied to simulate processes involving random behavior and to quantify physical parameters that are difficult or even impossible to calculate by experimental measurements. Recent nuclear medical imaging innovations such as single-photon emission computed tomography (SPECT), positron emission tomography (PET), and multiple emission tomography (MET) are ideal for Monte Carlo modeling techniques because of the stochastic nature of radiation emission, transport and detection processes. Factors that have contributed to their wider use include improved models of radiation transport processes, the practicality of application with the development of acceleration schemes, and the improved speed of computers. In this paper we present a derivation and methodological basis for this approach and critically review its areas of application in nuclear imaging. An overview of existing simulation programs is provided and illustrated with examples of some useful features of such sophisticated tools in connection with common computing facilities and more powerful multiple-processor parallel processing systems. Current and future trends in the field are also discussed. PMID:10227362

Matrix-product states, generated by the density-matrix renormalization group method, are among the most powerful methods for simulation of quasi-one-dimensional quantum systems. Direct application of a matrix-product state representation fails for two dimensional systems, although a number of tensor-network states have been proposed to generalize the concept for two dimensions. We introduce a useful approximate method replacing a 4-index tensor by two matrices in order to contract tensors in two dimensions. We use this formalism as a basis for variational quantum Monte Carlo, optimizing the matrix elements stochastically. We present results on a two dimensional spinless fermion model including nearest-neighbor Coulomb interactions, and determine the critical Coulomb interaction for the charge density wave state by finite size scaling.

A new quantum Monte Carlo approach is proposed to investigate low-lying states of nuclei within the shell model. The formalism relies on a variational symmetry-restored wave function to guide the underlying Brownian motion. Sign or phase problems that usually plague quantum Monte Carlo fermionic simulations are controlled by constraining stochastic paths through a fixed-node-like approximation. Exploratory results in the sd and pf valence spaces with realistic effective interactions are presented. They prove the ability of the scheme to yield nearly exact yrast spectroscopies for both even- and odd-mass nuclei.

MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics; particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory

Accurate simulation of large electron fields may lead to improved accuracy in Monte Carlo treatment planning while simplifying the commissioning procedure. We have used measurements made with wide-open jaws and no electron applicator to adjust simulation parameters. Central axis depth dose curves and profiles of 6-21 MeV electron beams measured in this geometry were used to estimate source and geometry parameters, including those that affect beam symmetry: incident beam direction and offset of the secondary scattering foil and monitor chamber from the beam axis. Parameter estimation relied on a comprehensive analysis of the sensitivity of the measured quantities, in the large field, to source and geometry parameters. Results demonstrate that the EGS4 Monte Carlo system is capable of matching dose distributions in the largest electron field to the least restrictive of 1 cGy or 1 mm, with Dmax of 100 cGy, over the full energy range. This match results in an underestimation of the bremsstrahlung dose of 10-20% at 15-21 MeV, exceeding the combined experimental and calculational uncertainty in this quantity of 3%. The simulation of electron scattering at energies of 15-21 MeV in EGS4 may be in error. The recently released EGSnrc/BEAMnrc system may provide a better match to measurement.

A state-of-the-art Monte Carlo computer simulation of the space radiation environment using particle transport codes from CERN and INFN (Italy) is described. Spacecraft subject to space radiation are visualized much like a detector in an accelerator beamline. Standard software techniques simulate the evolution of particle cascades through an accurate isotopic-compositional model of the vehicle. The simulation uses the latest known results in low-energy and high-energy physics derived from an architecture called AliRoot structured about a Virtual Monte Carlo whose transport engines are FLUKA and GEANT4 [1]. The output is a detailed depiction of the total space radiation environment, including the secondary albedos produced. The neutron albedo is of particular concern. Beyond doing the physics transport of incident flux using FLUKA, the simulation provides a self-contained stand-alone object-oriented analysis and visualization infrastructure. The latter is known as ROOT, recently adopted for CDF at Fermilab. Complex spacecraft geometries are represented by aerospace finite element models (FEMs) which readily lend themselves to CAD (Computer-Aided Design) analysis. [1] Brun, R., Carminati, F., & Rademakers, F., in Proc. Int'l. Conf. Computing High-Energy and Nuclear Physics, CHEP (2000).

Lithium was chosen as the simplest known metal for the first application of quantum Monte Carlo methods in order to evaluate the accuracy of conventional one-electron band theories. Lithium has been extensively studied using such techniques. Band theory calculations have certain limitations in general and specifically in their application to lithium. Results depend on such factors as charge shape approximations (muffin tins), pseudopotentials (a special problem for lithium where the lack of p core states requires a strong pseudopotential), and the form and parameters chosen for the exchange potential. The calculations are all one-electron methods in which the correlation effects are included in an ad hoc manner. This approximation may be particularly poor in the high compression regime, where the core states become delocalized. Furthermore, band theory provides only self-consistent results rather than strict limits on the energies. The quantum Monte Carlo method is a totally different technique using a many-body rather than a mean field approach which yields an upper bound on the energies. 18 refs., 4 figs., 1 tab.

Giant moments of several Bohr magnetons are formed in transition metal alloys where the matrix is palladium or platinum. The interaction between these giant moments produces a phase transition from paramagnetism to ferromagnetism, when the alloy is below the Curie temperature. These giant moments can be measured mainly by neutron diffraction, although several characteristics can be determined by magnetization measurements. In this work, several magnetic properties of these alloys are presented, based on calculations made mainly by Monte Carlo simulation of these properties. A localized moment model is used to simulate the formation of magnetization clouds and their transformation as the temperature is raised. The simulation allows the calculation of the critical temperatures of ferromagnetism, which are then compared with experimental measurements. In several of these alloys, unpolarized diffuse neutron scattering measurements show large forward peaks that would indicate giant moments larger than those obtained by magnetization measurements. We calculated, using Monte Carlo simulation, the diffuse neutron cross sections for these alloys and reproduced the neutron data. We find a significant quasielastic contribution to the scattering that cannot be attributed to the magnetization cloud. The calculation methods were applied to several dilute and concentrated transition metal alloys. The results indicate that the methods and models used are valid for a large group of Pd and Pt based alloys.

Experimental implementations of quantum information processing have now reached a state, at which quantum process tomography starts to become impractical, since the number of experimental settings as well as the computational cost of the post-processing required to extract the process matrix from the measurements scales exponentially with the number of qubits in the system. In order to determine the fidelity of an implemented process relative to the ideal one, a more practical approach called Monte Carlo quantum process certification was proposed in Ref. [1]. Here we present an experimental implementation of this scheme in a circuit quantum electrodynamics setup. Our system is realized with three superconducting transmon qubits coupled to a coplanar microwave resonator which is used for the joint-readout of the qubit states. We demonstrate an implementation of Monte Carlo quantum process certification and determine the fidelity of different two- and three-qubit gates such as cphase-, cnot-, 2cphase- and Toffoli-gates. The obtained results are compared with the values obtained from conventional process tomography and the errors of the obtained fidelities are determined. [1] M. P. da Silva, O. Landon-Cardinal and D. Poulin, arXiv:1104.3835 (2011)

Steffen, Lars; Fedorov, Arkady; Baur, Matthias; Palmer da Silva, Marcus; Wallraff, Andreas

Atmospheric turbulence has a significant impact on the quality of a laser beam propagating through the atmosphere over long distances. Turbulence causes intensity scintillation and beam wander from propagation through turbulent eddies of varying sizes and refractive index. This can severely impair the operation of target designation and Free-Space Optical (FSO) communications systems. In addition, experiments on FSO communication systems are tedious and difficult: many interfering factors affect the results and inflate the error variance of the measured outcomes beyond what is expected, and the simulation and analysis of turbulence-induced beams require particular care in the stronger turbulence regimes. We propose a new geometrical model to assess the phase shift of a laser beam propagating through turbulence. The atmosphere along the laser beam propagation path will be modeled as a spatial distribution of spherical bubbles with refractive index discontinuity calculated from a Gaussian distribution with the mean value being the index of air. For each statistical representation of the atmosphere, the path of rays will be analyzed using geometrical optics. These Monte Carlo techniques will assess the phase shift as a summation of the phases that arrive at the same point at the receiver. Accordingly, there would be dark and bright spots at the receiver that give an idea regarding the intensity pattern without having to solve the wave equation. The Monte Carlo analysis will be compared with the predictions of wave theory.

When designing new materials it is important to have an accurate measure of the material's formation energy to assess thermodynamic stability and chemical activity. Computational materials science holds the potential to accurately predict formation energies, but widely used methods such as density functional theory often yield large errors when calculating energy differences between compounds with significantly different electronic structures. More accurate quantum chemical methods tend to scale poorly with system size, making it infeasible to apply them to many materials. One exception is quantum Monte Carlo (QMC), which effectively scales linearly or better with system size when calculating formation energy per atom. QMC scales perfectly with the number of processors, making it ideally positioned to take advantage of the rapidly growing core count in central and graphics processing units. It has been shown that quantum Monte Carlo can successfully predict formation energies for some solid state materials, but a broad assessment has been lacking. We have run QMC calculations on a variety of different materials for which high-quality experimental data exists. We present data on the cost and accuracy of QMC, providing insight into the role QMC will play in materials design.

The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy-eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H₂, LiH, Li₂, and H₂O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li₂, and H₂O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schrödinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.

We present a Monte Carlo method for the direct evaluation of the difference between the free energies of two crystal structures. The method is built on a lattice-switch transformation that maps a configuration of one structure onto a candidate configuration of the other by "switching" one set of lattice vectors for the other, while keeping the displacements with respect to the lattice sites constant. The sampling of the displacement configurations is biased, multicanonically, to favor paths leading to gateway arrangements for which the Monte Carlo switch to the candidate configuration will be accepted. The configurations of both structures can then be efficiently sampled in a single process, and the difference between their free energies evaluated from their measured probabilities. We explore and exploit the method in the context of extensive studies of systems of hard spheres. We show that the efficiency of the method is controlled by the extent to which the switch conserves correlated microstructure. We also show how, microscopically, the procedure works: the system finds gateway arrangements which fulfill the sampling bias intelligently. We establish, with high precision, the differences between the free energies of the two close packed structures (fcc and hcp) in both the constant density and the constant pressure ensembles.

Bruce, A. D.; Jackson, A. N.; Ackland, G. J.; Wilding, N. B.

During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the center for sustainable energy at Notre Dame University (RET @ cSEND) working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10²¹ antineutrinos per second with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo.1 Further investigation led me to the Monte Carlo method page of Wikipedia2 where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations2 or purely mathematical.3 It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
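The rice-on-an-arc activity described above has a direct computational analogue (an illustrative sketch, not from the record itself): scatter random points in the unit square and count the fraction landing inside the quarter circle, which approaches π/4.

```python
import random

random.seed(42)

def estimate_pi(n=100000):
    """Monte Carlo estimate of pi: the fraction of uniformly random
    points in the unit square that fall inside the quarter circle
    x^2 + y^2 <= 1 converges to pi/4."""
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / n

pi_hat = estimate_pi()
print(pi_hat)  # close to 3.14159 for large n
```

The statistical error shrinks like 1/√n, so each extra decimal digit of π costs roughly a hundredfold more samples, which is why the classroom version with a few hundred grains of rice gives only one or two digits.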

The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as a part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high accuracy quantum Monte Carlo demonstration projects in application of these tools to the forefront electronic structure problems in molecular and solid systems; and expanding, explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and at present has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the period of the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculations of pfaffians and introduction of backflow coordinates, together with overall organization of the code and random walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others.
This grant supported the PI for one month during summers, a full-time postdoc and partially three graduate students over the period of the grant duration; it has resulted in 13 published papers and 15 invited talks and lectures nationally and internationally. My former graduate student and postdoc Dr. Michal Bajdich, who was supported by this grant, is currently a postdoc with ORNL in the group of Dr. F. Reboredo and Dr. P. Kent and is using the developed tools in a number of DOE projects. The QWalk package has become a truly important research tool used by the electronic structure community and has attracted several new developers in other research groups. Our tools use several types of correlated wavefunction approaches (variational, diffusion and reptation methods) and large-scale optimization methods for wavefunctions, and enable calculation of energy differences such as cohesion and electronic gaps, as well as densities and other properties; using multiple runs one can obtain equations of state for given structures and beyond. Our codes use efficient numerical and Monte Carlo strategies (high accuracy numerical orbitals, multi-reference wave functions, highly accurate correlation factors, pairing orbitals, force biased and correlated sampling Monte Carlo), are robustly parallelized, and run very efficiently on tens of thousands of cores. Our demonstration applications were focused on challenging research problems in several fields of materials science such as transition metal solids. We note that our study of FeO solid was the first QMC calculation of transition metal oxides at high pressures.

Monte Carlo simulations are nowadays an essential tool in emission tomography (Single-Photon Emission Computed Tomography, SPECT, and Positron Emission Tomography, PET) for assisting system design and optimizing imaging and processing protocols. Several Monte Carlo simulation software packages are currently available for modeling SPECT and PET configurations. This paper presents an overview of current trends concerning Monte Carlo simulations in SPECT and PET. The evolution of the place of Monte Carlo simulations in SPECT and PET since 1995 is studied, together with the evolution of the codes used for Monte Carlo simulations. New features present in current codes are described, and new applications of Monte Carlo simulations in SPECT and PET are reviewed. Finally, upcoming developments in the field of Monte Carlo simulations in SPECT or PET are discussed. In this paper, a particular emphasis is given to the GATE code, as it is the most recent and publicly available code for Monte Carlo simulations appropriate for both SPECT and PET applications.

A new test assembly algorithm based on a Monte Carlo random search is presented in this article. A major advantage of the Monte Carlo test assembly over other approaches (integer programming or enumerative heuristics) is that it performs a uniform sampling from the item pool, which provides every feasible item combination (test) with an equal…
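The uniform-sampling idea behind Monte Carlo test assembly can be sketched as rejection sampling (an illustrative toy, not the article's algorithm; the item pool, the mean-difficulty constraint and all parameters below are assumptions): draw candidate item sets uniformly at random and keep the first one that satisfies the constraint, so every feasible combination is equally likely to be selected.

```python
import random

random.seed(7)

# Hypothetical item pool: (item_id, difficulty) pairs.
pool = [(i, random.uniform(-2.0, 2.0)) for i in range(200)]

def assemble_test(pool, length=20, target=0.0, tol=0.1, max_tries=100000):
    """Monte Carlo test assembly by rejection sampling: repeatedly draw
    a uniform random combination of items and accept the first whose
    mean difficulty lies within tol of the target."""
    for _ in range(max_tries):
        candidate = random.sample(pool, length)  # uniform, no repeats
        mean_diff = sum(d for _, d in candidate) / length
        if abs(mean_diff - target) <= tol:
            return candidate
    raise RuntimeError("no feasible test found within max_tries")

form = assemble_test(pool)
print(len(form))
```

Because acceptance depends only on whether the constraint holds, accepted tests are uniformly distributed over the feasible set, which is the property the article contrasts with integer-programming and enumerative approaches.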

We have parallelized the PENELOPE Monte Carlo particle transport simulation package (1). The motivation is to increase efficiency of Monte Carlo simulations for medical applications. Our parallelization is based on the standard MPI message passing interface. The parallel code is especially suitable for a distributed memory environment, and has been run on up to 256 processors on the Indiana University

Quantum Monte Carlo (QMC) methods such as variational and diffusion Monte Carlo depend heavily on the quality of the trial wave function. Although Slater-Jastrow wave functions are the most commonly used variational ansatz, more sophisticated wave functions are critical to ascertaining new physics. One such wave function is the multi-Slater-Jastrow wave function, which consists of a Jastrow function multiplied

Miguel A. Morales; Bryan K. Clark; Jeremy McMinis; Jeongnim Kim; Gustavo Scuseria

We discuss a novel strategy for training neural networks using sequential Monte Carlo algorithms and propose a new hybrid gradient descent/sampling importance resampling algorithm (HySIR). In terms of computational time and accuracy, the hybrid SIR is a clear improvement over conventional sequential Monte Carlo techniques. The new algorithm may be viewed as a global optimization strategy that allows us to
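The sampling-importance-resampling (SIR) step at the core of such sequential Monte Carlo methods can be sketched in a toy setting (this is a generic SIR demonstration on a scalar parameter, not the HySIR algorithm; the Gaussian likelihood, noise level and jitter are assumptions): particles are weighted by the likelihood of each observation and then resampled in proportion to those weights.

```python
import math
import random

random.seed(3)

def sir_step(particles, observation, noise_std=0.5):
    """One sampling-importance-resampling step: weight each particle
    (a candidate parameter value) by the Gaussian likelihood of the
    observation, then resample proportionally to the weights."""
    weights = [math.exp(-0.5 * ((observation - p) / noise_std) ** 2)
               for p in particles]
    return random.choices(particles, weights=weights, k=len(particles))

# Particles start spread out; repeated observations near 1.0
# concentrate the population around the underlying value.
particles = [random.uniform(-5.0, 5.0) for _ in range(2000)]
for obs in [1.1, 0.9, 1.0, 1.05, 0.95]:
    particles = sir_step(particles, obs)
    # small jitter keeps the particle set diverse after resampling
    particles = [p + random.gauss(0.0, 0.05) for p in particles]

est = sum(particles) / len(particles)
print(round(est, 2))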

João F. G. De Freitas; Mahesan Niranjan; Andrew H. Gee; Arnaud Doucet

The question addressed in this paper is whether it is possible to model in a fully self-consistent way a thermal plasma ion population by a finite number of test particles of a Monte Carlo model. This Monte Carlo model for the guiding centre drift motion has been developed to study collisional ion transport in an axisymmetric tokamak equilibrium. The model

The simulation of photon transport in the atmosphere with the Monte Carlo method forms part of the EURASEP-programme. The specifications for the problems posed for a solution were such that the direct application of the analogue Monte Carlo method was not...

Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed

The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise

In this study, we introduce a method to determine the energy spectrum delivered by a medical accelerator. The method relies on both Monte Carlo generated data and experimental measurements, but requires far fewer measurements than current attenuation-based methods, and much less information about the construction of the linear accelerator than full Monte Carlo based estimations, making it easy to perform

The success of Monte Carlo tree search (MCTS) in many games, where αβ-based search has failed, naturally raises the question whether Monte Carlo simulations will eventually also outperform traditional game-tree search in game domains where αβ-based search is now successful. The forte of αβ-based search is highly tactical deterministic game domains with a small to moderate branching factor, where
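The simplest form of the idea, flat Monte Carlo move selection by random playouts (a precursor of full MCTS, shown here as an illustrative sketch rather than the paper's method), can be demonstrated on single-heap Nim, where the game and parameters below are assumptions for the example:

```python
import random

random.seed(5)

def playout(heap, my_turn):
    """Finish a single-heap Nim game (take 1-3 stones, taking the last
    stone wins) with uniformly random moves. my_turn says whose move it
    is next; returns True if 'we' end up taking the last stone."""
    while heap > 0:
        heap -= random.randint(1, min(3, heap))
        if heap == 0:
            return my_turn  # the player who just moved took the last stone
        my_turn = not my_turn
    # heap was already empty: the previous mover (us) took the last stone
    return not my_turn

def best_move(heap, n_playouts=200):
    """Flat Monte Carlo move selection: score each legal move by the
    win rate of random playouts from the resulting position."""
    scores = {}
    for take in range(1, min(3, heap) + 1):
        wins = sum(playout(heap - take, my_turn=False)
                   for _ in range(n_playouts))
        scores[take] = wins / n_playouts
    return max(scores, key=scores.get)

move = best_move(7)
print(move)  # taking 3 leaves a heap of 4, the worst position for the opponent
```

Even purely random playouts separate the moves clearly here (taking 3 wins about two thirds of random continuations, the alternatives under half), which is the signal MCTS exploits and refines with its tree policy.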

Mark H. M. Winands; Yngvi Björnsson; Jahn-Takeshi Saito
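The selection step of MCTS is typically driven by the UCT rule, which balances exploiting high-value children against exploring rarely visited ones. A minimal sketch of that rule (a generic illustration, not the authors' implementation; the exploration constant sqrt(2) is the conventional default):

```python
import math

def uct_select(parent_visits, children):
    """Pick the index of the child maximizing the UCT score.

    children: list of (wins, visits) tuples. Unvisited children are
    expanded first; otherwise the score is the mean value plus an
    exploration bonus that shrinks as a child is visited more often.
    """
    c = math.sqrt(2)  # conventional exploration constant
    best, best_score = None, float("-inf")
    for i, (wins, visits) in enumerate(children):
        if visits == 0:
            return i  # always try unvisited nodes first
        score = wins / visits + c * math.sqrt(math.log(parent_visits) / visits)
        if score > best_score:
            best, best_score = i, score
    return best
```

With equal visit counts the rule simply prefers the higher win rate; an unvisited child always wins the tie.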

The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.

Garcia Cardona, Cristina (San Diego State University); Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander (U. S. Department of Energy, NNSA); Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan
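The core of a rejection-free kinetic Monte Carlo step, common to codes of this kind (a generic BKL/Gillespie sketch, not SPPARKS itself): select an event with probability proportional to its rate, then advance the clock by an exponentially distributed increment governed by the total rate.

```python
import math
import random

def kmc_step(rates, rng=random):
    """One rejection-free kinetic Monte Carlo step.

    rates: list of nonnegative event rates. Returns (event_index, dt):
    the event is chosen proportional to its rate, and dt is drawn from
    an exponential distribution with the total rate as its parameter.
    """
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break  # event i is selected
    dt = -math.log(rng.random()) / total  # exponential waiting time
    return i, dt
```

Because every step fires an event, no computation is wasted on rejected moves, which is what makes the method attractive at the mesoscale.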

With advances in algorithms and growing computing powers, quantum Monte Carlo (QMC) methods have become a leading contender for high accuracy calculations for the electronic structure of realistic systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores of an SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per element has not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem size it can handle. OpenMP/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze its performance on current HPC platforms characterized by various memory and communication hierarchies.

Esler, Kenneth P [ORNL]; Mcminis, Jeremy [University of Illinois, Urbana-Champaign]; Morales, Miguel A [Lawrence Livermore National Laboratory (LLNL)]; Clark, Bryan K. [Princeton University]; Shulenburger, Luke [Sandia National Laboratory (SNL)]; Ceperley, David M [ORNL]

Accurate simulation of diagnostics for thermonuclear burn requires detailed modeling of the spatial and energy distributions of particle sources, in-flight reaction kinematics, and Doppler effects. In the ALE multiphysics code HYDRA, this is now achieved using a new Monte Carlo particle transport package based on LLNL's Arrakis library. It tracks neutrons, gammas, and light ions on 2D quadrilateral and 3D hexahedral meshes. Neutrons and gammas track using the latest LLNL nuclear data; light ions undergo continuous slowing down with corrections for Fermi degeneracy, small angle Coulomb deflections at track end points, nuclear collisions, and direct Coulomb collisions with plasma ions. The package agrees well with idealized analytical problems as well as high resolution diffusion burn ICF capsule and hohlraum simulations, and achieves run times commensurate with production requirements. An overview of the charged particle physics models used is given.

Sepke, S. M.; Patel, M. V.; Marinak, M. M.; McKinley, M. S.; O'Brien, M. J.; Procassini, R. J.

We apply a recently developed quantum Monte Carlo (QMC) method (Shiwei Zhang and Henry Krakauer, Phys. Rev. Lett. 90, 136401 (2003)) to calculate the atomization energy of the sulfur molecule and the ionization energies of the sulfur atom. The QMC method projects out the ground state by random walks in the space of Slater determinants, using auxiliary fields to decouple the Coulomb interaction between electrons. A trial wave function |Ψ_T⟩ is used in the approximation to control the phase problem in QMC. We carry out Hartree-Fock (HF) and density functional theory (with the local density approximation (LDA)) calculations. The generated single Slater determinant wave functions are then used as |Ψ_T⟩ in QMC. The HF and LDA |Ψ_T⟩'s lead to atomization energies in agreement with each other and with the experimental value.

This paper presents Monte Carlo simulation results for the formation of modulated phases in the framework of the two-dimensional ANNNI model with a nonconserved order parameter. This work complements the earlier studies of Kaski, et al. by examining a different, wider area of parameter space and temperature. Like Kaski, et al., it is found that for certain temperatures and values of the frustration parameter, kappa, ordered domains form quickly and the correlation length grows as the square root of time. However, there exists a range of kappa for which a quench from high to low temperature results in the formation of a metastable glassy phase. In addition to the ANNNI model study, preliminary results are presented on a newly developed model which exhibits phase modulation due to the presence of elastic interactions between the different phases and with an externally applied stress. 12 refs., 12 figs.

Accurate numerical solution of the five-body Schrödinger equation is effected via variational Monte Carlo calculations. The spectrum is assumed to exhibit a narrow resonance with strangeness S = +1. A fully antisymmetrized and pair-correlated five-quark wave function is obtained for the assumed nonrelativistic Hamiltonian, which has spin, isospin, and color dependent pair interactions and many-body confining terms, which are fixed by the nonexotic spectra. Gauge field dynamics are modeled via flux-tube exchange factors. The energy determined for the ground states with J(pi) = (1/2)- ((1/2)+) is 2.22 (2.50) GeV. A lower energy negative parity state is consistent with recent lattice results. The short-range structure of the state is analyzed via its diquark content. PMID:16384049

Molecular imaging technologies provide unique abilities to localise signs of disease before symptoms appear, assist in drug testing, optimize and personalize therapy, and assess the efficacy of treatment regimes for different types of cancer. Monte Carlo simulation packages are used as an important tool for the optimal design of detector systems. In addition, they have demonstrated potential to improve image quality and acquisition protocols. Many general-purpose (MCNP, Geant4, etc.) or dedicated (SimSET, etc.) codes have been developed, aiming to provide accurate and fast results. Special emphasis will be given to the GATE toolkit. The GATE code, currently under development by the OpenGATE collaboration, is the most accurate and promising code for performing realistic simulations. The purpose of this article is to introduce the non-expert reader to the current status of MC simulations in nuclear medicine, briefly provide examples of currently simulated systems, and present future challenges that include simulation of clinical studies and dosimetry applications.

This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo Code, replacement pages containing corrections, Part II of the report which was previously unpublished, and a new Table of Contents. The modifications include a Klein-Nishina estimator for gamma rays. Use of such an estimator required changing the cross section routines to process pair production and Compton scattering cross sections directly from ENDF tapes and writing a new version of subroutine RELCOL. Another modification is the use of free-form input for the SAMBO analysis data. This required changing subroutine SCORIN and adding new subroutine RFRE. References are updated, and errors in the original report have been corrected. (WHK)

The size of the population of random walkers required to obtain converged estimates in diffusion Monte Carlo (DMC) increases dramatically with system size. We illustrate this by comparing ground state energies of small clusters of parahydrogen (up to 48 molecules) computed by DMC and path integral ground state (PIGS) techniques. We contend that the bias associated with a finite population of walkers is the most likely cause of quantitative numerical discrepancies between PIGS and DMC energy estimates reported in the literature, for this few-body Bose system. We discuss the viability of DMC as a general-purpose ground state technique, and argue that PIGS, and even finite temperature methods, enjoy more favorable scaling, and are therefore a superior option for systems of large size.
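A toy DMC run makes the population mechanics concrete (a pedagogical 1-D harmonic oscillator sketch, unrelated to the parahydrogen calculations above): walkers diffuse, branch according to the local potential, and a reference energy is fed back to stabilize the population size. That feedback loop is precisely where the finite-population bias enters.

```python
import math
import random

def dmc_harmonic(n_walkers=500, n_steps=2000, dt=0.01, seed=1):
    """Toy diffusion Monte Carlo for V(x) = x^2/2 (exact E0 = 0.5).

    No importance sampling: walkers take Gaussian diffusion steps and
    branch with weight exp(-(V - E_ref) dt); E_ref is nudged to hold
    the population near its target size. Returns the averaged E_ref,
    a growth estimator of the ground-state energy.
    """
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_walkers)]
    e_ref = 0.5
    energies = []
    for step in range(n_steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))          # diffusion
            w = math.exp(-(0.5 * x * x - e_ref) * dt)   # branching weight
            for _ in range(int(w + rng.random())):      # stochastic copies
                new.append(x)
        walkers = new or [0.0]
        # population control: this feedback is the source of the bias
        e_ref += 0.1 * math.log(n_walkers / len(walkers))
        if step > n_steps // 2:
            energies.append(e_ref)
    return sum(energies) / len(energies)
```

Shrinking `n_walkers` exaggerates the feedback corrections and hence the bias, which mirrors the scaling argument made in the abstract.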

Various resist development models have been suggested to express the phenomena, from the pioneering work of Dill's model in 1975 to the recent Shipley enhanced notch model. The statistical Monte Carlo method can be applied to processes such as development and post-exposure bake. The motions of the developer during the development process were traced using this method. We have considered that the surface edge roughness of the resist depends on the weight percentage of protected and de-protected polymer in the resist. The results agree well with those of other papers. This study can be helpful for developing new photoresists and developers that can be used to pattern device features smaller than 100 nm.

Stellar intensity interferometers will achieve stellar imaging with a tenth of a milli-arcsecond resolution in the optical band by taking advantage of the large light collecting area and broad range of intertelescope distances offered by future gamma-ray Air Cherenkov Telescope (ACT) arrays. Up to now, studies characterizing the capabilities of intensity interferometers using ACTs have not accounted for realistic effects such as telescope mirror extension, detailed photodetector time response, excess noise and night sky contamination. In this paper, we present the semiclassical quantum optics Monte Carlo simulation we developed in order to investigate these experimental limitations. In order to validate the simulation algorithm, we compare our first results to models for sensitivity and signal degradation resulting from mirror extension, pulse shape, detector excess noise and night sky contamination.

A code package consisting of the Monte Carlo library MCLIB, the executing code MC_RUN, the web application MC_Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC_RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown.

Experimental implementations of quantum information processing have now reached a level of sophistication where quantum process tomography is impractical. The number of experimental settings as well as the computational cost of the data postprocessing now translates to days of effort to characterize even experiments with as few as 8 qubits. Recently a more practical approach to determine the fidelity of an experimental quantum process has been proposed, where the experimental data are compared directly with an ideal process using Monte Carlo sampling. Here, we present an experimental implementation of this scheme in a circuit quantum electrodynamics setup to determine the fidelity of 2-qubit gates, such as the CPHASE and the CNOT gate, and 3-qubit gates, such as the Toffoli gate and two sequential CPHASE gates.

Steffen, L.; da Silva, M. P.; Fedorov, A.; Baur, M.; Wallraff, A.

The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies.

Accurate numerical solution of the five-body Schrödinger equation is effected via variational Monte Carlo. The spectrum is assumed to exhibit a narrow resonance with strangeness S=+1. A fully antisymmetrized and pair-correlated five-quark wave function is obtained for the assumed non-relativistic Hamiltonian, which has spin, isospin, and color dependent pair interactions and many-body confining terms which are fixed by the non-exotic spectra. Gauge field dynamics are modeled via flux tube exchange factors. The energy determined for the ground states with J=1/2 and negative (positive) parity is 2.22 GeV (2.50 GeV). A lower energy negative parity state is consistent with recent lattice results. The short-range structure of the state is analyzed via its diquark content.

Fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled in a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups between fifteen and eighteen for Cyber 205/ETA-10 architectures, and about nine for Cray X-MP/Y-MP architectures are observed. The best single processor execution time for the problem was 0.33 seconds on the ETA-10G, and 0.42 seconds on the Cray Y-MP. 32 refs., 12 figs., 1 tab.

Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V. (Colorado State Univ., Fort Collins, CO (USA). Computer Center; Los Alamos National Lab., NM (USA); Supercomputing Research Center, Bowie, MD (USA))

We have applied the technique of evaluating a nonlocal pseudopotential with a trial function to give an approximate, local many-body pseudopotential which was used in a valence-only diffusion Monte Carlo (DMC) calculation. The pair and triple correlation terms in the trial function have been carefully optimized to minimize the effect of the locality approximation. We discuss the accuracy and computational demands of the nonlocal pseudopotential evaluation for the DMC method. Calculations of Si, Sc, and Cu ionic and atomic states and the Si2 dimer are reported. In most cases ~90% of the correlation energy was recovered at the variational level and excellent estimations of the ground state energies were obtained by the DMC simulations. The small statistical error allowed us to determine the quality of the assumed pseudopotentials by comparison of the DMC results with experimental values.

Mitas, L. (University of Illinois at Urbana-Champaign, Urbana, Illinois (USA). Department of Physics); Shirley, E.L. (Illinois Univ., Urbana, IL (USA). Materials Research Lab. Illinois Univ., Urbana, IL (USA). Dept. of Physics); Ceperley, D.M. (Illinois Univ., Urbana, IL (USA). Center for Supercomputing Research and Development University of Illinois at Urbana-Champaign, Urbana, Illinois (USA). Department of Physics)

Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available experimental data. The new method is compared with the current modeling technique and both techniques are compared with available data. The differences in the results are evaluated. The test case is based on an AVCO-Everett shock tube experiment, a 10-km/s standing shock wave in air at 0.1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.

The authors are developing a parallel C++ Implicit Monte Carlo code in the Draco framework. As a background and motivation for the parallelization strategy, they first present three basic parallelization schemes. They use three hypothetical examples, mimicking the memory constraints of the real world, to examine characteristics of the basic schemes. Next, they present a two-step scheme proposed by Lawrence Livermore National Laboratory (LLNL). The two-step parallelization scheme they develop is based upon LLNL's two-step scheme. The two-step scheme appears to have greater potential compared to the basic schemes and LLNL's two-step scheme. Lastly, they explain the code design and describe how the functionality of C++ and the Draco framework assist the development of a parallel code.

Neutron detectors are simulated using Monte Carlo methods in order to gain insight into how they work and optimize their performance. Simulated results for a Micromegas neutron beam monitor using a custom computer code are compared with published experimental data to verify the accuracy of the simulation. Different designs (e.g. neutron converter material, gas chamber width, gas pressure) are tested to assess their impact on detector performance. It is determined that a 10B converter foil and 1 mm drift gap width work best for a neutron beam monitor. The Micromegas neutron beam monitor neutronics are evaluated using the computer code MCNP. An optimized set of design criteria are determined that minimize neutron scattering probability in the device. In a best-case scenario, the thermal neutron scattering probability in the detector is 1.1×10⁻³. Lastly, composite neutron scintillators consisting of fluorescent dopant particles in a lithiated matrix material are simulated using a custom Monte Carlo code. The effects of design parameters such as dopant particle size, dopant volumetric concentration, and dopant and matrix material densities on scintillator characteristics are quantified. For ZnS:Ag particles in a lithiated glass matrix, it is found that dopant particle radii of 1 micron or less result in approximately Gaussian-shaped pulse height spectra and dopant particle radii of 5 microns or less result in practically all neutron absorption events producing scintillation light emission. Self-absorption of scintillation light is not treated in the simulation. Both the Micromegas and composite neutron scintillator simulations use the TRIM code as a heavy-charged particle transport engine.

Full core calculations are very useful and important in reactor physics analysis, especially in computing the full core power distributions, optimizing the refueling strategies and analyzing the depletion of fuels. To reduce the computing time and accelerate the convergence, a method named Response Matrix Monte Carlo (RMMC), based on analog Monte Carlo simulation, was used to calculate fixed source neutron transport problems in repeated structures. To make more accurate calculations, we put forward the RMMC method based on non-analog Monte Carlo simulation and investigate the way to use the RMMC method in criticality calculations. Then a new hybrid RMMC and MC (RMMC+MC) method is put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of calculations but also simulate more complex geometries than repeated structures alone. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and variances in the calculations. Finally, future research directions are mentioned and discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)

Li, Z.; Wang, K. [Dept. of Engineering Physics, Tsinghua Univ., Beijing, 100084 (China)

If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
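The fission-matrix idea reduces the criticality eigenvalue problem to a small dense matrix whose dominant eigenpair gives k_eff and the converged fission source; power iteration on that matrix then converges far faster than the underlying transport iteration. A self-contained sketch (the matrix entries below are illustrative, not from MCNP):

```python
def power_iteration(F, n_iter=200):
    """Dominant eigenvalue (k_eff analogue) and source of a fission matrix.

    F[i][j]: expected fission neutrons produced in region i per source
    neutron born in region j (hypothetical values in the test below).
    The source vector is kept normalized to unit sum, so the L1 norm of
    F @ s converges to the dominant eigenvalue.
    """
    n = len(F)
    s = [1.0 / n] * n          # flat initial source guess
    k = 1.0
    for _ in range(n_iter):
        t = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(t)             # growth factor per iteration
        s = [x / k for x in t] # renormalize the source
    return k, s
```

The convergence rate is set by the dominance ratio of F, which for a small region-to-region matrix is typically far better conditioned than the full transport problem, which is the essence of the acceleration.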

The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.

This paper will focus on the calculations of the HTR-10's first criticality. The continuous-energy Monte Carlo transport code TRIPOLI-4.3 was used for getting the initial fuel loading of the HTR-10. The calculations have been performed on the basis of different treatments of the particles and the pebbles arrangement. The stochastic distribution of the particles in the fuel zone has been

Hong CHANG; Xavier RAEPSAET; Frederic DAMIAN; Yi-Kang LEE; Oliver Koberl; Xingqing JING; Yongwei YANG

We show that the formalism of tensor-network states, such as the matrix-product states (MPS), can be used as a basis for variational quantum Monte Carlo simulations. Using a stochastic optimization method, we demonstrate the potential of this approach by explicit MPS calculations for the transverse Ising chain with up to N=256 spins at criticality, using periodic boundary conditions and D×D

Monte Carlo simulations for particle and γ-ray emissions from a compound nucleus based on the Hauser-Feshbach statistical theory are performed. The Monte Carlo method is applied to neutron-induced nuclear reactions on 56Fe, and the results are compared with a traditional deterministic method. The neutron and γ-ray emission correlation is examined by gating on an 847 keV γ-ray that is produced by an inelastic scattering process. The partial γ-ray energy spectra for different γ-ray multiplicities are inferred using this Monte Carlo method.

Kawano, T.; Talou, P.; Chadwick, M. B.; Watanabe, T.

Dynamic wedges, generated by moving one set of independent jaws, are a feature of modern linacs and are used in routine photon-beam radiotherapy. This work studies the dosimetric characteristics of dynamic wedges (DW) using a Monte Carlo technique developed presently based on the EGS4 BEAM system. The exact geometry of a linac was simulated. The calculation of DW was accomplished by weighting the incident electron fluence with the STT. The calculation was validated by measurement. It was found that the calculated depth doses and beam profiles agreed within 2% with the measurements. Calculations were performed for DW with wedge angles ranging from 15° to 60° for both 6 and 18 MV photon beams over the whole range of field sizes. To compare dosimetric differences between physical wedges (PW) and DW, calculations were also carried out for PW under the same conditions as DW. Our calculation reveals that the effects of a dynamic wedge on beam spectral and angular distributions, as well as electron contamination, are much less significant compared with a physical wedge. For the 6-MV photon beam, a 45° PW can result in a 30% increase in mean photon energy due to the effect of beam hardening. It can also introduce a 5% dose reduction in the build-up region due to the physical wedge's filtration of contaminated electrons. Neither this mean-energy increase nor such a dose reduction is found for a dynamic wedge. Field-size dependence of DW dosimetry was also investigated thoroughly. The calculated DW factors agree with the measurements within +/-2%. Both the calculated and measured DW factors are significantly dependent on the field size. The mean photon energy was reduced by 12.3% as the field size increases from 4 x 4 to 20 x 20 cm2 for both DW and open fields. The dose in the build-up region is increased by up to 10% (from 4 x 4 to 20 x 20 cm2) due to the increase of contaminated electrons for large field sizes.
Our study demonstrates that the Monte Carlo method is a useful tool to study the dosimetry of a dynamic wedge. The data presented in this work provide important information for treatment planning involving dynamic wedges.

The Implicit Monte Carlo (IMC) method has been used for over 30 years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Reference [2] provided an exact error analysis of IMC for 0-D problems and demonstrated that IMC can exhibit substantial errors when timesteps are large. These temporal errors are inherent in the method and are in addition to spatial discretization errors and approximations that address nonlinearities (due to variation of physical constants). In Reference [3], IMC and four other methods were analyzed in detail and compared on both theoretical grounds and the accuracy of numerical tests. As discussed there, two alternative schemes for solving the radiative transfer equations, the Carter-Forest (C-F) method and the Ahrens-Larsen (A-L) method, do not exhibit the errors found in IMC; for 0-D, both of these methods are exact for all time, while for 3-D, A-L is exact for all time and C-F is exact within a timestep. These methods can yield substantially superior results to IMC.

Brown, F. B. (Forrest B.); Martin, W. R. (William R.)

We have studied the intermolecular interaction between neurofilaments (NFs) using Monte Carlo simulation methods. NFs are assembled from three distinct molecular-weight proteins (NF-L, NF-M, NF-H) that are bound to each other laterally, forming 10 nm diameter filamentous rods with side-arm extensions. The molecular model consists of two neurofilament backbones along with side-arm extensions that are distributed according to the stoichiometry of the three subunits. The side arms are modeled at amino acid resolution, with each amino acid represented by a hard sphere carrying the corresponding charge valence. In our previous studies of a single NF brush, we found that NF-M is most responsible for the neurofilament protrusion. In this study, we discuss structural properties such as density profiles and the mean-square radius of gyration of each type of side arm as a function of the inter-filament distance. Contrary to the conventional belief that cross-bridging between the neurofilaments by NF-H side chains would form, we found only repulsive interaction between the two neurofilaments.

The classical trajectory Monte Carlo (CTMC) method originated with Hirschfelder, who studied the H + D2 exchange reaction using a mechanical calculator [58.1]. With the availability of computers, the CTMC method was actively applied to a large number of chemical systems to determine reaction rates, and final state vibrational and rotational populations (see, e.g., Karplus et al. [58.2]). For atomic physics problems, a major step was introduced by Abrines and Percival [58.3], who employed Kepler's equations and the Bohr-Sommerfeld model for atomic hydrogen to investigate electron capture and ionization for intermediate velocity collisions of H+ + H. An excellent description is given by Percival and Richards [58.4]. The CTMC method has a wide range of applicability to strongly-coupled systems, such as collisions by multiply-charged ions [58.5]. In such systems, perturbation methods fail, and basis set limitations of coupled-channel molecular- and atomic-orbital techniques have difficulty in representing the multitude of active excitation, electron capture, and ionization channels. Vector and parallel processors now allow increasingly detailed study of the dynamics of the heavy projectile and target, along with the active electrons.

Satellite measurements of the atmosphere are non-direct and therefore the data processing requires inverse methods. In this paper we apply the Bayesian approach and use the Markov chain Monte Carlo (MCMC) method for solving the retrieval problem of GOMOS measurements. With the MCMC method we are able to compute the true nonlinear posterior distribution of the solution without linearizing the problem. The MCMC technique can easily be implemented in a great variety of retrieval problems, including nonlinear problems with various prior or noise structures. Therefore, MCMC methods, though somewhat slow for operational processing of large amounts of data, provide excellent tools for development and validation purposes. Moreover, when the signal-to-noise ratio is poor, the MCMC methods can be used to find even the faintest fingerprints of the absorbers in the signal. The MCMC methods, and especially reversible-jump MCMC, can also be used in problems where the dimension of the model space is unknown. We will discuss the possibility of using the MCMC approach also in a model selection problem, namely, for choosing the model for the wavelength dependence of the aerosol cross sections and studying the optimal constituent set to be retrieved.
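The workhorse behind such retrievals is the random-walk Metropolis sampler: propose a perturbed state, accept or reject it based on the posterior ratio, and the chain samples the full nonlinear posterior without any linearization. A minimal 1-D sketch (generic, not the GOMOS processing chain):

```python
import math
import random

def metropolis(log_post, x0, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1-D unnormalized posterior.

    log_post: function returning the log posterior density (up to a
    constant). Proposals are Gaussian perturbations of width `step`;
    the accept/reject test needs only the ratio of densities, so the
    normalizing constant is never required.
    """
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)     # propose a move
        lpp = log_post(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp               # accept
        chain.append(x)                   # on rejection, repeat x
    return chain
```

After discarding burn-in, histogramming the chain approximates the posterior; for a multi-dimensional retrieval the same loop runs over a state vector instead of a scalar.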

GENIE [1] is a new neutrino event generator for the experimental neutrino physics community. The goal of the project is to develop a ‘canonical’ neutrino interaction physics Monte Carlo whose validity extends to all nuclear targets and neutrino flavors from MeV to PeV energy scales. Currently, emphasis is on the few-GeV energy range, the challenging boundary between the non-perturbative and perturbative regimes, which is relevant for current and near-future long-baseline precision neutrino experiments using accelerator-made beams. The design of the package addresses many challenges unique to neutrino simulations and supports the full life-cycle of simulation and generator-related analysis tasks. GENIE is a large-scale software system, consisting of ~120,000 lines of C++ code, featuring a modern object-oriented design and extensively validated physics content. The first official physics release of GENIE was made available in August 2007, and at the time of the writing of this article the latest available version was v2.4.4.

A ubiquitous problem in atomic-scale simulation of materials is the small-barrier problem, in which the free-energy landscape presents ``superbasins'' with low intra-basin energy barriers relative to the inter-basin barriers. Rare-event simulation methods, such as kinetic Monte Carlo (KMC) and accelerated molecular dynamics, are inefficient for such systems because considerable effort is spent simulating short-time, intra-basin motion without evolving the system significantly. We developed an adaptive local-superbasin KMC algorithm (LSKMC) that treats fast, intra-basin motion with a master-equation / Markov-chain approach and long-time evolution with KMC. Our algorithm is designed to identify local superbasins in an on-the-fly search during conventional KMC, construct the rate matrix, compute the mean exit time and its distribution, obtain the probability of exiting to each of the superbasin border (absorbing) states, and integrate superbasin exits with non-superbasin moves. We demonstrate various aspects of the method in several examples, which also highlight its efficiency.
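The rate-matrix step of such a superbasin treatment can be sketched directly: for the transient (intra-basin) states, the mean exit times solve (diag(total rate) - K) tau = 1, where K holds the intra-basin rates. A small pure-Python illustration on a hypothetical two-state superbasin (all rate values below are invented):

```python
def mean_exit_times(rates, transient):
    """Solve (diag(total_rate) - K) tau = 1 over the transient (intra-basin) states.
    rates: dict (i, j) -> rate k_{i->j}; states absent from `transient` are absorbing.
    Returns {state: mean exit time from the superbasin starting in that state}."""
    n = len(transient)
    idx = {s: i for i, s in enumerate(transient)}
    A = [[0.0] * n for _ in range(n)]
    b = [1.0] * n
    for i, s in enumerate(transient):
        A[i][i] = sum(k for (a, _), k in rates.items() if a == s)  # total exit rate
        for t in transient:
            if t != s and (s, t) in rates:
                A[i][idx[t]] -= rates[(s, t)]
    # Gaussian elimination with partial pivoting (fine for small superbasins).
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]; b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for cc in range(c, n):
                A[r][cc] -= f * A[c][cc]
            b[r] -= f * b[c]
    tau = [0.0] * n
    for r in range(n - 1, -1, -1):
        tau[r] = (b[r] - sum(A[r][c] * tau[c] for c in range(r + 1, n))) / A[r][r]
    return {s: tau[idx[s]] for s in transient}

# Two intra-basin states with fast exchange and slow exits to a border state 'out'.
rates = {('A', 'B'): 100.0, ('B', 'A'): 100.0, ('A', 'out'): 1.0, ('B', 'out'): 2.0}
tau = mean_exit_times(rates, transient=['A', 'B'])
```

The fast exchange averages the two slow exit channels, so both mean exit times land near 2/3, which is what an effective exit rate of (1+2)/2 predicts.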

This paper discusses the formulation and implementation of an acceleration approach for the MCScene code, a high-fidelity model for full optical spectrum (UV to LWIR) hyperspectral image (HSI) simulation. The MCScene simulation is based on a Direct Simulation Monte Carlo approach for modeling 3D atmospheric radiative transport, as well as spatially inhomogeneous surfaces including surface BRDF effects. The model includes treatment of land and ocean surfaces, 3D terrain, 3D surface objects, and the effects of finite clouds with surface shadowing. This paper reviews an acceleration algorithm that exploits spectral redundancies in hyperspectral images. In this algorithm, the full scene is determined for a subset of spectral channels, and this multispectral scene is then unmixed into spectral end members and end-member abundance maps. Next, pure end-member pixels are determined at their full hyperspectral resolution, and the full hyperspectral scene is reconstructed from the hyperspectral end-member spectra and the multispectral abundance maps. This algorithm effectively performs a hyperspectral simulation while requiring only the computational time of a multispectral simulation. The acceleration algorithm is demonstrated, and the errors associated with it are analyzed.

Richtsmeier, Steven; Sundberg, Robert; Clark, Frank O.
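The reconstruction step of the acceleration algorithm is linear: once the abundance maps and the full-resolution end-member spectra are in hand, the hyperspectral scene is their matrix product. A toy sketch (the abundances and spectra below are invented, and the unmixing of the multispectral scene is assumed to have been done already):

```python
# Hypothetical 3-pixel scene with 2 end members. `abundances` would come from
# unmixing the multispectral scene; E_hyper holds each end member at full
# hyperspectral resolution (4 channels here).
abundances = [[1.0, 0.0],    # pure end-member-1 pixel
              [0.0, 1.0],    # pure end-member-2 pixel
              [0.4, 0.6]]    # mixed pixel
E_hyper = [[0.10, 0.12, 0.15, 0.11],   # end member 1
           [0.50, 0.48, 0.45, 0.52]]   # end member 2

def reconstruct(A, E):
    """Hyperspectral scene as the product of abundance maps and end-member spectra."""
    return [[sum(a * e for a, e in zip(row, col)) for col in zip(*E)] for row in A]

scene = reconstruct(abundances, E_hyper)
# Mixed pixel, channel 0: 0.4*0.10 + 0.6*0.50 = 0.34
```

The cost saving in the text follows from this structure: only the few end members need full spectral resolution, while every other pixel is rebuilt from cheap multispectral abundances.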


Richtsmeier, Steven; Sundberg, Robert; Haren, Raymond; Clark, Frank O.

Mercury's sodium exosphere has been observed via ground-based, high-resolution optical telescopes since its discovery in 1985, and the processes behind the observed high temporal and spatial variability are still controversial after two decades of study. We have therefore undertaken a systematic modeling effort using a Monte Carlo technique to simulate the sources and sinks of the exosphere under various conditions. The assumed source processes are photon-stimulated desorption (PSD), impact vaporization, and ion sputtering. We assume that PSD is directly proportional to the incoming solar UV flux with a small temperature dependence, that impact vaporization by micrometeorites is uniform, and that ion sputtering depends on the assumed flux and energy of incoming ions, which can be set arbitrarily in the model. The interaction of atoms with the surface is set by two parameters that determine the probability of sticking and the exchange of energy with the surface. Loss is by Jeans escape, sticking at the surface, and photo-ionization. We currently track neither ions in this code nor the re-emission of particles that stick to the surface. We present results for simulations at perihelion, at aphelion, and at true anomaly angles of 63 and 297 degrees (where radiation pressure is greatest and the radial velocity of the planet with respect to the Sun is positive and negative, respectively). These simulations will provide a basis for the interpretation of both ground-based and spacecraft data.

Killen, Rosemary; Vervack, Ronald; Mouawad, Nelly; Crider, Dana

The near and intermediate range order diffractometer (NIMROD) has been selected as a day-one instrument on the second target station at ISIS. Uniquely, NIMROD will provide continuous access to particle separations ranging from the interatomic (<1 Å) to the mesoscopic (<300 Å). The instrument is mainly designed for structural investigations, although the possibility of putting a Fermi chopper (and a corresponding NIMONIC chopper) in the incident beam line will potentially allow low-resolution inelastic scattering measurements. The performance characteristics of the TOF diffractometer have been simulated by means of a series of Monte Carlo calculations. In particular, the flux as a function of the momentum transfer Q, as well as the resolution in Q and in energy transfer, have been estimated. Moreover, the possibility of including a honeycomb collimator in order to achieve better resolution has been tested. Here we present the design of this diffractometer, which will bridge the gap between wide- and small-angle neutron scattering experiments.

Botti, A.; Ricci, M. A.; Bowron, D. T.; Soper, A. K.

We demonstrate that Monte Carlo sampling can be used to efficiently extract the expectation value of projected entangled pair states with a large virtual bond dimension. We use the simple update rule introduced by H. C. Jiang [Phys. Rev. Lett. 101, 090603 (2008)] to obtain the tensors describing the ground state wave function of the antiferromagnetic Heisenberg model, and evaluate the finite-size energy and staggered magnetization for square lattices with periodic boundary conditions of linear sizes up to L=16 and virtual bond dimensions up to D=16. The finite-size magnetization errors are 0.003(2) and 0.013(2) at D=16 for systems of size L=8 and L=16, respectively. Finite-D extrapolation provides the exact finite-size magnetization for L=8, and reduces the magnetization error to 0.005(3) for L=16, significantly improving on previous state-of-the-art results.

The accuracy of Density Functional Theory (DFT) rests on the exchange-correlation approximation used and needs to be checked against highly accurate quantum many-body approaches. We have performed calculations of surface energies using the state-of-the-art diffusion quantum Monte Carlo (QMC) method to examine the accuracy of the LDA and GGA (PBE) functionals for surface energies. The systems studied include the NaCl(100), MgO(100), CaO(100), TiO2(110), Si(100)-(2x2), C(100)-(2x2), and Ge(100)-(2x2) surfaces. Our results indicate that (i) the surface energy by DMC is always larger than the surface energy by LDA, and (ii) the surface energy by LDA is always larger than the surface energy by GGA. For NaCl(100) and MgO(100), the DMC results accurately reproduce the experimentally measured surface energies. In conclusion, when the surface energies obtained by DFT are compared with DMC, the values predicted by DFT with either the LDA or the GGA functional are underestimated.

An internal coordinate extension of diffusion Monte Carlo (DMC) is described as a first step toward a generalized reduced-dimensional DMC approach. The method places no constraints on the choice of internal coordinates other than the requirement that they all be independent. Using H3+ and its isotopologues as model systems, the methodology is shown to be capable of successfully describing the ground state properties of molecules that undergo large-amplitude, zero-point vibrational motions. Combining the approach developed here with the fixed-node approximation allows vibrationally excited states to be treated. Analysis of the ground state probability distribution is shown to provide important insights into the set of internal coordinates that are less strongly coupled and therefore more suitable for use as the nodal coordinates for fixed-node DMC calculations. In particular, the curvilinear normal mode coordinates are found to provide reasonable nodal surfaces for the fundamentals of H2D+ and D2H+ despite both molecules being highly fluxional. PMID:23410209
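For readers unfamiliar with DMC itself, the core of any such calculation is a population of walkers that diffuse and branch according to the potential, with a reference energy adjusted to keep the population stable. A minimal unguided DMC sketch for a 1-D harmonic oscillator (this is generic textbook DMC, not the internal-coordinate method of the paper; all parameters are illustrative):

```python
import math, random

def dmc_harmonic(n_walkers=500, n_steps=2000, dt=0.01, seed=1):
    """Minimal unguided diffusion Monte Carlo for V(x) = x^2/2 (hbar = m = omega = 1).
    The averaged reference energy should converge near the exact E0 = 0.5."""
    rng = random.Random(seed)
    walkers = [rng.uniform(-1, 1) for _ in range(n_walkers)]
    e_ref, sigma = 0.5, math.sqrt(dt)
    energies = []
    for step in range(n_steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, sigma)                    # free diffusion step
            w = math.exp(-dt * (0.5 * x * x - e_ref))     # branching weight
            n_copies = int(w + rng.random())              # stochastic rounding
            new.extend([x] * min(n_copies, 3))            # cap to avoid blow-up
        walkers = new or [0.0]
        e_ref += (1.0 - len(walkers) / n_walkers) / 10.0  # population control
        if step > n_steps // 2:
            energies.append(e_ref)
    return sum(energies) / len(energies)

e0 = dmc_harmonic()
```

The internal-coordinate extension in the abstract changes the diffusion step and coordinate metric, but the branching and population-control machinery is the same.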

Surface adsorption is the first step in the study of surface catalytic reactions. The most commonly used tool is Density Functional Theory (DFT), which rests on exchange-correlation approximations whose accuracy has usually not been checked carefully against highly accurate quantum many-body approaches. We have performed calculations of surface adsorption using the state-of-the-art diffusion quantum Monte Carlo (QMC) method to examine the accuracy of the LDA and GGA (PBE) functionals for this problem. The systems examined include H2O and OH adsorption on various types of surfaces, such as NaCl(100), MgO(100), TiO2(110), graphene, Si(100)-(2x2), and Al(100). Comparing GGA (PBE) results with DMC, we find that (i) for H2O adsorption, PBE predicts the correct adsorption energies, and (ii) for OH adsorption, PBE predicts a large over-binding effect except on the graphene and Si(100) surfaces. This indicates that one needs to be cautious when using DFT to study the surface adsorption of the OH free radical.

Andrews et al. (1972) carried out an extensive Monte Carlo study of robust estimators of location. Their conclusion was that the Hampel and the skipped estimates, as classes, seemed preferable to some of the other currently fashionable estimators. ...
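A study of this kind boils down to estimating the sampling variance of competing location estimators on synthetic contaminated data. A small sketch comparing the sample mean with a trimmed mean (the contamination model and trimming fraction below are illustrative, not those of Andrews et al.):

```python
import random, statistics

def trimmed_mean(xs, frac=0.2):
    """Discard frac of the points from each tail before averaging."""
    xs = sorted(xs)
    k = int(len(xs) * frac)
    return statistics.fmean(xs[k:len(xs) - k])

def mc_variance(estimator, n=20, trials=4000, contam=0.1, seed=2):
    """Monte Carlo sampling variance of a location estimator under a
    contaminated normal: 90% N(0,1), 10% N(0,10)."""
    rng = random.Random(seed)
    ests = []
    for _ in range(trials):
        xs = [rng.gauss(0, 10 if rng.random() < contam else 1) for _ in range(n)]
        ests.append(estimator(xs))
    return statistics.pvariance(ests)

v_mean = mc_variance(statistics.fmean)
v_trim = mc_variance(trimmed_mean)
# Under contamination the trimmed mean is markedly more stable than the mean.
```

The same scaffold extends directly to Hampel-type and skipped estimators: only the `estimator` argument changes.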

Monte Carlo tools in SCALE are commonly used in criticality safety calculations as well as in sensitivity and uncertainty analysis, depletion, and criticality alarm system analyses. Recent improvements in the continuous-energy data generated by the AMPX code system and significant advancements in the continuous-energy treatment in the KENO Monte Carlo eigenvalue codes facilitate the use of SCALE Monte Carlo codes to model geometrically complex systems with enhanced solution fidelity. The addition of continuous-energy treatment to the SCALE Monaco code, which can be used with automatic variance reduction in the hybrid MAVRIC sequence, provides significant enhancements, especially for criticality alarm system modeling. This paper describes some of the advancements in continuous-energy Monte Carlo codes within the SCALE code system.

Bekar, Kursat B. [ORNL]; Celik, Cihangir [ORNL]; Wiarda, Dorothea [ORNL]; Peplow, Douglas E. [ORNL]; Rearden, Bradley T. [ORNL]; Dunn, Michael E. [ORNL]

Path integral quantum Monte Carlo is used to simulate hot dense plasmas and other systems where quantum and thermal fluctuations are important. The fixed node approximation, ubiquitous in ab initio ground state quantum Monte Carlo, is more complicated at finite temperatures, with many unanswered questions. In this talk I discuss the current state of fermionic path integral quantum Monte Carlo, with an emphasis on molecular systems where good benchmark data exist. We look at two ways of formulating the fixed node constraint and strategies for constructing finite-temperature nodal surfaces. We compare the free energies of different nodal choices by sampling an ensemble of nodal models within a Monte Carlo simulation. We also present data on imaginary-time correlation fluctuations, which can be surprisingly accurate for molecular vibrations and polarizability.

A generalized, three-dimensional Monte Carlo model and computer code (SPOOR) are described for simulating atmospheric transport and dispersal of small pollutant clouds. A cloud is represented by a large number of particles that we track by statistically s...

This report details some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer. While the principal applicatio...

Vector computers provide a new tool for management scientists. The application of that tool requires thinking in vector mode. The mode is examined in the context of Monte Carlo experiments with regression models; these regression models serve as metamodel...

This paper advances the state-of-the-art in spray computations with some of our recent contributions involving scalar Monte Carlo PDF (Probability Density Function), unstructured grids and parallel computing. It provides a complete overview of the scalar ...

A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs.
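The splitting and Russian roulette games mentioned above share one invariant: they change the particle population without changing the expected weight. A minimal weight-window sketch (the window thresholds are illustrative):

```python
import random

def roulette_and_split(weight, w_low=0.25, w_high=2.0, rng=random):
    """Weight-window game: a particle below w_low plays Russian roulette
    (it survives with probability weight/w_low at weight w_low, else is killed);
    a particle above w_high is split into unit-weight copies. In both cases the
    expected total weight is unchanged, so the estimator stays unbiased."""
    if weight < w_low:
        return [w_low] if rng.random() < weight / w_low else []
    if weight > w_high:
        n = int(weight)
        copies = [1.0] * n
        if rng.random() < weight - n:   # stochastic extra copy for the fraction
            copies.append(1.0)
        return copies
    return [weight]

# Unbiasedness check: average surviving weight equals the input weight.
rng = random.Random(3)
total = sum(sum(roulette_and_split(0.1, rng=rng)) for _ in range(100000))
avg = total / 100000                    # should be close to 0.1
```

Roulette spends less time on low-importance particles; splitting spends more time on high-importance ones; the adjoint-based optimization in the review is about choosing the windows well.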

We developed a heavy ion transport Monte Carlo code, HETC-CYRIC, which can treat the fragments produced by heavy ion reactions. The HETC-CYRIC code was made by incorporating a heavy ion reaction calculation routine, which consists of the HIC code, the SPAR code, and the Shen formula, into the hadron transport Monte Carlo code HETC-3-STEP. The results calculated with the HETC-CYRIC

Hiroshi Iwase; Tadahiro Kurosawa; Takashi Nakamura; Nobuaki Yoshizawa; Jun Funabiki

Approximate Bayesian computation (ABC) is a popular approach to inference problems where the likelihood function is intractable or expensive to calculate. To improve over Markov chain Monte Carlo (MCMC) implementations of ABC, the use of sequential Monte Carlo (SMC) methods has recently been suggested. Most effective SMC algorithms that are currently available for ABC have a computational complexity that
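The simplest member of the ABC family, plain rejection ABC, makes the setting concrete: draw parameters from the prior, simulate a data set, and keep those draws whose summary statistic falls within eps of the observed data. A sketch on a toy normal-mean problem (every model choice below is illustrative):

```python
import random, statistics

def abc_rejection(data, prior_sample, simulate, distance, eps, n_accept, seed=4):
    """Basic ABC rejection sampler: accepted parameters approximate the posterior
    without ever evaluating a likelihood."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), data) < eps:
            accepted.append(theta)
    return accepted

# Toy problem: infer the mean of a unit-variance normal from its sample mean.
rng0 = random.Random(0)
obs = [rng0.gauss(1.5, 1.0) for _ in range(50)]
post = abc_rejection(
    data=obs,
    prior_sample=lambda rng: rng.uniform(-5, 5),
    simulate=lambda th, rng: [rng.gauss(th, 1.0) for _ in range(50)],
    distance=lambda a, b: abs(statistics.fmean(a) - statistics.fmean(b)),
    eps=0.1,
    n_accept=200,
)
est = statistics.fmean(post)   # sits near the sample mean of obs
```

The MCMC and SMC variants discussed in the abstract exist precisely because this rejection loop becomes hopelessly inefficient as eps shrinks or the parameter dimension grows.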

Program SPATS models the transport of neutral particles during magnetron sputtering deposition. The 3D Monte Carlo simulation provides information about the spatial distribution of the fluxes, the density of the sputtered particles in the chamber glow discharge area, and the kinetic energy distribution of the arriving flux. Collision events are modelled by scattering in Biersack's potential, in a Lennard-Jones potential, or by a binary hard sphere collision approximation. The code has an interface for results of the sputtered particles simulated with the Monte Carlo code TRIM.

Green's Function Monte Carlo methods have been developed to study the ground state properties of light nuclei. These methods are shown to reproduce results of Faddeev calculations for A = 3, and are then used to calculate ground state energies, one- and two-body distribution functions, and the D-state probability for the alpha particle. Results are compared to variational Monte Carlo calculations for several nuclear interaction models. 31 refs.

Predicting the wear of materials under three-body abrasion is a challenging task, since three-body abrasion is more complicated than two-body abrasion. In this paper, a Monte Carlo model for simulating the plastic deformation wear rate, i.e. the low-cycle fatigue wear rate, is proposed. The Manson–Coffin formula and the Palmgren–Miner linear accumulated-damage principle were used in the model, as well as the Monte Carlo

Liang Fang; Weimin Liu; Daoshan Du; Xiaofeng Zhang; Qunji Xue
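The combination named in the abstract can be sketched generically: invert a Manson-Coffin-type life law to get the cycles-to-failure for each random strain amplitude, and accumulate damage linearly per the Palmgren-Miner rule until it reaches one. The constants and strain distribution below are invented for illustration and are not the paper's fitted values:

```python
import random

def cycles_to_failure(strain, c=0.5, eps_f=0.3):
    """Manson-Coffin-style life law: strain amplitude = eps_f * (2N)^(-c),
    inverted to give cycles to failure N for a given plastic strain amplitude."""
    return 0.5 * (strain / eps_f) ** (-1.0 / c)

def mc_miner_life(strain_dist, trials=20000, seed=5):
    """Monte Carlo mean number of random loading cycles to failure under the
    Palmgren-Miner linear damage rule: failure when sum(1/N_i) reaches 1."""
    rng = random.Random(seed)
    lives = []
    for _ in range(trials):
        damage, cycles = 0.0, 0
        while damage < 1.0:
            damage += 1.0 / cycles_to_failure(strain_dist(rng))
            cycles += 1
        lives.append(cycles)
    return sum(lives) / trials

# Hypothetical contact conditions: strain amplitudes uniform on [0.05, 0.15].
mean_life = mc_miner_life(lambda rng: rng.uniform(0.05, 0.15))
```

Because the damage increments are random, the Monte Carlo loop also yields the full distribution of lives, not just the mean, which is the point of using it over a deterministic Miner sum.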

In this paper, we extend the techniques used in Grid-based Monte Carlo applications to Grid-based quasi-Monte Carlo applications. These techniques include an N-out-of-M strategy for efficiently scheduling subtasks on the Grid, lightweight checkpointing for Grid subtask status recovery, a partial result validation scheme to verify the correctness of each individual partial result, and an intermediate result

A new approximate method has been developed by Richard E. Prael to allow S(α,β) thermal collision contributions to next-event estimators in Monte Carlo calculations. The new technique is generally applicable to next-event estimator contributions from any discrete probability distribution. The method has been incorporated into Version 4 of the production Monte Carlo neutron and photon radiation transport code MCNP. 9 refs.

A one-dimensional, time-dependent Monte Carlo numerical computation of spark-ignited premixed flames propagating in isotropic turbulence is described. The Monte Carlo method is a statistical fluid particle tracking method for modeling turbulence. The presumed-PDF method, based on the probability density function (PDF), is used in order to avoid full solution of the flow equations. The model simulates flame propagation in a homogeneous,

CosmoPMC is a Monte Carlo sampling package for exploring the likelihood of various cosmological probes. The sampling engine is implemented in the package pmclib. It uses Population Monte Carlo (PMC), a novel technique to sample from the posterior. PMC is an adaptive importance sampling method that iteratively improves the proposal to approximate the posterior. The code has been introduced, tested, and applied to various cosmology data sets.

Kilbinger, Martin; Benabed, Karim; Cappé, Olivier; Coupon, Jean; Cardoso, Jean-François; Fort, Gersende; McCracken, Henry Joy; Prunet, Simon; Robert, Christian P.; Wraith, Darren
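The adaptive importance sampling idea behind PMC can be sketched with a single Gaussian proposal whose mean and variance are refit from the importance-weighted population at each iteration (CosmoPMC itself uses mixture proposals and real likelihoods; the target below is a toy):

```python
import math, random

def pmc_gaussian(log_target, mu0, sigma0, n=2000, iters=5, seed=6):
    """Population Monte Carlo with one Gaussian proposal: draw a population,
    importance-weight it against the (unnormalized) target, and refit the
    proposal's mean and variance from the weighted sample, iterating so the
    proposal approaches the posterior."""
    rng = random.Random(seed)
    mu, sigma = mu0, sigma0
    for _ in range(iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        logw = [log_target(x)
                - (-0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)) for x in xs]
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]             # stabilized weights
        s = sum(w)
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / s    # weighted moment update
        var = sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, xs)) / s
        sigma = math.sqrt(max(var, 1e-12))
    return mu, sigma

# Toy target: an unnormalized N(3, 0.5) posterior, started from a poor N(0, 2).
mu, sigma = pmc_gaussian(lambda x: -0.5 * ((x - 3.0) / 0.5) ** 2, mu0=0.0, sigma0=2.0)
```

Unlike MCMC, every PMC iteration yields independent weighted samples and a free estimate of the evidence, which is part of its appeal for cosmological likelihoods.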

Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that more histories are better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes: the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
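The trade-off described above can be explored with a toy simulation: accept a configuration when its calculated k plus n standard deviations clears the USL, then ask what fraction of accepted configurations were actually supercritical. Every distribution and number below is invented for illustration and carries no safety significance:

```python
import random

def misacceptance_rate(sigma, usl=0.98, n_sd=2.0, bias_sd=0.02,
                       trials=200000, seed=7):
    """Toy model of the text's trade-off: true k is drawn near critical, the
    calculated k adds a normally distributed benchmarking bias plus calculational
    noise of standard deviation sigma, and a configuration is 'accepted' when
    k_calc + n_sd*sigma <= USL. Returns the fraction of accepted configurations
    that are in fact supercritical (k_true >= 1)."""
    rng = random.Random(seed)
    accepted = supercritical = 0
    for _ in range(trials):
        k_true = rng.uniform(0.90, 1.02)     # hypothetical population of designs
        k_calc = k_true + rng.gauss(0.0, bias_sd) + rng.gauss(0.0, sigma)
        if k_calc + n_sd * sigma <= usl:
            accepted += 1
            if k_true >= 1.0:
                supercritical += 1
    return supercritical / accepted

risk_tight = misacceptance_rate(sigma=0.0005)   # tiny noise, tiny added margin
risk_loose = misacceptance_rate(sigma=0.02)     # larger noise, larger margin
```

In this toy the larger sigma buys a larger n_sd*sigma margin, so the mislabeling risk drops even though the calculation is noisier, illustrating (not proving) why the risk-optimal sigma in the paper is non-zero.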

This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers: whole nuclei; organelles such as lysosomes and mitochondria; and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied, the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ±15-25% of the scattering parameters. PMID:24156056
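The reweighting idea at the heart of perturbation Monte Carlo is easiest to see in a 1-D rod model: generate histories at baseline coefficients, then multiply each history by the likelihood ratio of its sampled free flights to obtain the answer for perturbed coefficients from the same run. The sketch below perturbs a scattering coefficient by 20%, in line with the accuracy range quoted above (the rod model is illustrative, not the tissue model of the paper):

```python
import math, random

def transmit_rod(mu, d=1.0, n=80000, mu_ref=None, seed=8):
    """1-D rod: photons enter at x=0 moving right; each collision scatters
    isotropically (direction +/-1 with equal probability); no absorption.
    Estimates the transmission probability at coefficient mu. If mu_ref is
    given, flights are sampled at mu_ref and each history carries the
    free-flight likelihood-ratio weight, so one baseline run yields the
    perturbed answer (the perturbation Monte Carlo idea)."""
    rng = random.Random(seed)
    mu_run = mu if mu_ref is None else mu_ref
    score = 0.0
    for _ in range(n):
        x, s, w = 0.0, 1, 1.0
        while True:
            l = rng.expovariate(mu_run)                 # sampled free flight
            dist = (d - x) if s > 0 else x              # distance to the exit face
            if l >= dist:                               # escapes through a face
                w *= math.exp(-(mu - mu_run) * dist)    # survival-segment ratio
                if s > 0:
                    score += w                          # transmitted
                break
            x += s * l
            w *= (mu / mu_run) * math.exp(-(mu - mu_run) * l)  # collision ratio
            s = 1 if rng.random() < 0.5 else -1
    return score / n

direct = transmit_rod(mu=1.2)                 # fresh simulation at mu = 1.2
perturbed = transmit_rod(mu=1.2, mu_ref=1.0)  # baseline at mu = 1.0, reweighted
```

The two estimates agree within statistical error, and the reweighted one required no new baseline run; the paper's contribution is extending this ratio to perturbations of the phase function itself.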


We study a class of methods for the numerical solution of the system of stochastic differential equations (SDEs) that arises in the modeling of turbulent combustion, specifically in the Monte Carlo particle method for the solution of the model equations for the composition probability density function (PDF) and the filtered density function (FDF). This system consists of an SDE for

The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte
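For reference, the alias method preprocesses a discrete distribution into two tables so that each sample costs one uniform index draw plus one comparison. A sketch of the standard Walker/Vose construction:

```python
import random

def build_alias(probs):
    """Walker/Vose alias tables for O(1) sampling from a discrete distribution."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob_table, alias = [0.0] * n, [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob_table[s], alias[s] = scaled[s], l     # bin s keeps scaled[s], donates rest to l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                        # leftovers are exactly 1 up to rounding
        prob_table[i] = 1.0
    return prob_table, alias

def alias_sample(prob_table, alias, rng):
    """Draw one index: pick a bin uniformly, then take it or its alias."""
    i = rng.randrange(len(prob_table))
    return i if rng.random() < prob_table[i] else alias[i]

rng = random.Random(9)
probs = [0.1, 0.2, 0.3, 0.4]
pt, al = build_alias(probs)
counts = [0] * len(probs)
for _ in range(100000):
    counts[alias_sample(pt, al, rng)] += 1
freqs = [c / 100000 for c in counts]               # close to probs
```

This is why the method combines "the accuracy of table lookup and the speed of equally probable bins": the tables encode the exact distribution, but every draw touches exactly one bin.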

Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different “physics lists,” were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the 6 electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the buildup region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy.

Diffusion Monte Carlo (DMC) is a powerful technique for studying the properties of molecules and clusters that undergo large-amplitude, zero-point vibrational motions. However, the overall applicability of the method is limited by the need to work in Cartesian coordinates and therefore to have available a full-dimensional potential energy surface (PES). As a result, the development of a reduced-dimensional DMC methodology has the potential to significantly extend the range of problems that DMC can address by allowing the calculations to be performed in the subset of coordinates that is physically relevant to the questions being asked, thereby eliminating the need for a full-dimensional PES. As a first step towards this goal, we describe here an internal coordinate extension of DMC that places no constraints on the choice of internal coordinates other than requiring them all to be independent. Using H_3^+ and its isotopologues as model systems, we demonstrate that the methodology is capable of successfully describing the ground state properties of highly fluxional molecules as well as, in conjunction with the fixed-node approximation, the v=1 vibrationally excited states. The calculations of the fundamentals of H_3^+ and its isotopologues provided general insights into the properties of the nodal surfaces of vibrationally excited states. Specifically, we demonstrate that analysis of ground state probability distributions can point to the set of coordinates that are less strongly coupled and therefore more suitable for use as nodal coordinates in the fixed-node approximation. In particular, we show that nodal surfaces defined in terms of the curvilinear normal mode coordinates are reasonable for the fundamentals of H_2D^+ and D_2H^+ despite both molecules being highly fluxional.

This paper addresses the phenomenon of spinodal decomposition for the Cahn-Hilliard equation. Namely, the authors are interested in why most solutions to the Cahn-Hilliard equation which start near a homogeneous equilibrium u_0 ≡ μ in the spinodal interval exhibit phase separation with a characteristic wavelength when exiting a ball of radius R in a Hilbert space centered at u_0. There are two mathematical explanations for spinodal decomposition, due to Grant and to Maier-Paape and Wanner. In this paper, the authors numerically compare these two mathematical approaches. In fact, they are able to synthesize the understanding they gain from the numerics with the approach of Maier-Paape and Wanner, leading to a better understanding of the underlying mechanism for this behavior. With this new approach, they can explain spinodal decomposition for a longer time and larger radius than either of the previous two approaches. A rigorous mathematical explanation is contained in a separate paper. The approach is to use Monte Carlo simulations to examine the dependence of R, the radius to which spinodal decomposition occurs, as a function of the parameter ε of the governing equation. The authors give a description of the dominating regions on the surface of the ball by estimating certain densities of the distributions of the exit points. They observe, and can show rigorously, that the behavior of most solutions originating near the equilibrium is determined completely by the linearization for an unexpectedly long time. They explain the mechanism for this unexpectedly linear behavior, and show that for some exceptional solutions this cannot be observed. They also describe the dynamics of these exceptional solutions.

Sander, E. [George Mason Univ., Fairfax, VA (United States). Dept. of Mathematical Sciences; Wanner, T. [Univ. of Maryland, Baltimore, MD (United States). Dept. of Mathematics and Statistics

The major achievements enabled by the QMC Endstation grant include:
* Performance improvement on clusters of x86 multi-core systems, especially on Cray XT systems
* New and improved methods for wavefunction optimization
* New forms of trial wavefunctions
* Implementation of the full application on NVIDIA GPUs using CUDA
The scaling studies of QMCPACK on large-scale systems show excellent parallel efficiency up to 216K cores on Jaguarpf (Cray XT5). The GPU implementation shows speedups of 10-15x over the CPU implementation on older generations of x86. We have implemented a hybrid OpenMP/MPI scheme in QMC to take advantage of the multi-core shared memory processors of petascale systems. Our hybrid scheme has several advantages over the standard MPI-only scheme:
* Memory optimized: large read-only data to store one-body orbitals and other shared properties to represent the trial wave function and many-body Hamiltonian can be shared among threads, which reduces the memory footprint of a large-scale problem.
* Cache optimized: the data associated with an active walker are in cache during the compute-intensive drift-diffusion process, and the operations on a walker are optimized for cache reuse. Thread-local objects are used to ensure data affinity to a thread.
* Load balanced: walkers in an ensemble are evenly distributed among threads and MPI tasks. The two-level parallelism reduces the population imbalance among MPI tasks and reduces the number of point-to-point communications of large messages (serialized objects) for the walker exchange.
* Communication optimized: the communication overhead, especially for the collective operations necessary to determine E_T and measure the properties of an ensemble, is significantly lowered by using fewer MPI tasks.
The multiple forms of parallelism afforded by QMC algorithms make them ideal candidates for acceleration in the many-core paradigm.
We presented the results of our effort to port the QMCPACK simulation code to the NVIDIA CUDA GPU platform. We restructured the CPU algorithms to express additional parallelism, minimize GPU-CPU communication, and efficiently utilize the GPU memory hierarchy. Using mixed precision on GT200 GPUs and MPI for intercommunication and load balancing, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core Xeon CPUs alone, while reproducing the double-precision CPU results within statistical error. We developed an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and used it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We computed the static contribution to the free energy with the QMC method and obtained the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We computed the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy. We compared experimental and theoretical results on the momentum distribution and the quasiparticle renormalization factor in sodium. From an x-ray Compton-profile measurement of the valence-electron momentum density, we derived its discontinuity at the Fermi wavevector, finding an accurate measure of the renormalization factor that we compared with quantum Monte Carlo and G0W0 calculations performed both on crystalline sodium and on the homogeneous electron gas. 
Our calculated results are in good agreement with the experiment. We have been studying the heat of formation for various Kubas complexes of molecular hydrogen on Ti(1,2)ethylene-nH2 using diffusion Monte Carlo. This work has been started and is o

Growth of polymer films continues to be of great interest to researchers, both for the understanding of the underlying physics and for the applications in developing new materials. Computer simulations have proven to be useful tools in the study of polymer systems, and stochastic (Monte Carlo) simulations are used here to investigate growing polymer films by deposition. The polymer chains move on a cubic lattice, where each monomer unit can move according to a set of rules and is driven towards the substrate by an external field. We begin with the relatively slow (single-monomer) kink-jump dynamics; however, incorporation of faster modes such as crankshaft and reptation movements seems crucial in relaxing the interface width. The structure of the chains is analyzed by evaluating the conformation at the wall, in the bulk, at the interface, and in solution. The polymer density profile is also examined at the substrate, throughout the bulk, and at the interface. Growth and roughness of the interface for deposited polymer chains are studied by evaluating the interface width, its development over time, and its steady-state and equilibrium values. The growth characteristics of the interface are compared to those of particle deposition models. Also, the dependence of the interface width on chain length, field strength, and temperature is investigated by varying these parameters in order to establish empirical laws and scaling relationships.

We present a case study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples, including population-based Markov chain Monte Carlo methods and sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedups we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design.

Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
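The population-based parallelism exploited above maps naturally onto data-parallel hardware. As a rough illustration (in NumPy rather than CUDA, with a standard-normal Metropolis target chosen purely for the example), many independent chains can be advanced in lockstep:

```python
import numpy as np

# Minimal sketch: M independent Metropolis chains advanced in lockstep,
# the same data-parallel pattern a GPU executes across its cores.
# Target density: standard normal (an illustrative assumption, not from the paper).
rng = np.random.default_rng(0)

def metropolis_step(x, rng, step=1.0):
    """Advance every chain by one Metropolis update simultaneously."""
    prop = x + rng.normal(0.0, step, size=x.shape)   # proposals for all chains
    log_accept = 0.5 * (x**2 - prop**2)              # log N(0,1) density ratio
    accept = np.log(rng.uniform(size=x.shape)) < log_accept
    return np.where(accept, prop, x)

M = 10_000                  # population of chains
x = np.zeros(M)
for _ in range(500):
    x = metropolis_step(x, rng)

print(f"population mean = {x.mean():.3f}, variance = {x.var():.3f}")
```

Because the chains never communicate, the inner update is embarrassingly parallel, which is why the 35- to 500-fold GPU speedups reported above are attainable.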

The full configuration interaction quantum Monte Carlo (FCIQMC) method [1-3] provides access to the exact ground state energy. However, like diffusion Monte Carlo, it is hard to precisely calculate expectation values of operators which do not commute with the Hamiltonian due to the stochastic representation of the wavefunction. Following related work on diffusion Monte Carlo [4], we have formulated an approach to stochastically sample additional operators in FCIQMC by using the Hellmann-Feynman theorem and sampling pumped equations of motion coupled to the standard equation of motion used to evolve the wavefunction. Our approach requires only minor modifications to existing FCIQMC programs and can be used to evaluate expectation values of arbitrary operators. We will present example calculations on the Hubbard model and molecular systems. [1] G.H. Booth, A.J.W. Thom, A. Alavi, J. Chem. Phys. 131, 054106 (2009). [2] D. Cleland, G.H. Booth, A. Alavi, J. Chem. Phys. 132, 041103 (2010). [3] J.S. Spencer, N.S. Blunt, W.M.C. Foulkes, J. Chem. Phys. 136, 054110 (2012). [4] R. Gaudoin, J.M. Pitarke, Phys. Rev. Lett. 99, 126406 (2007).

Uncertainty exists in retirement planning. The purpose of this thesis was to develop a stochastic retirement planning model to aid military personnel and decision/policy makers in evaluating retirement planning issues from a probabilistic perspective. The...

Monte Carlo techniques have become ubiquitous in medical physics over the last 50 years, with a doubling of papers on the subject every 5 years between the first PMB paper in 1967 and 2000, when the numbers levelled off. While recognizing the many other roles that Monte Carlo techniques have played in medical physics, this review emphasizes techniques for electron-photon transport simulations. The broad range of codes available is mentioned, but there is special emphasis on the EGS4/EGSnrc code system, which the author has helped develop for 25 years. The importance of the 1987 Erice Summer School on Monte Carlo techniques is highlighted. As an illustrative example of the role Monte Carlo techniques have played, the history of the correction for wall attenuation and scatter in an ion chamber is presented, as it demonstrates the interplay between a specific problem and the development of tools to solve the problem, which in turn leads to applications in other areas. This paper is dedicated to W Ralph Nelson and to the memory of Martin J Berger, two men who have left indelible marks on the field of Monte Carlo simulation of electron-photon transport.

The physical mechanisms that describe the components of NaI, Ge, and SiLi detector response have been investigated using Monte Carlo simulation. The mechanisms described focus on the shape of the Compton edge, the magnitude of the flat continuum, and the shape of the exponential tail features. These features are not accurately predicted by previous Monte Carlo simulation. Probable interaction mechanisms for each detector response component are given based on this Monte Carlo simulation. The precollision momentum of the electron is considered when simulating incoherent scattering of the photon. The description of the Doppler-broadened photon energy spectrum corrects the shape of the Compton edge. Special attention is given to partial energy loss mechanisms in the frontal region of the detector, such as the escape of photoelectric and Auger electrons or low-energy X-rays from the detector surface. The results include a possible physical mechanism describing the exponential tail feature that is generated by a separate Monte Carlo simulation. Also included is a description of a convolution effect that accounts for the difference in magnitude of the flat continuum between the Monte Carlo simulation and experimental spectra. The convolution describes an enhanced electron loss. Results of these applications are discussed.

The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement, e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and ∫_{-∞}^{∞} f(x) dx = 1. Since f(x) is seldom known explicitly, Monte Carlo particle random walks sample f(x) implicitly. Unless there is a largest possible history score, the empirical f(x) must eventually decrease more steeply than 1/x³ for the second moment (∫_{-∞}^{∞} x² f(x) dx) to exist.
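A minimal sketch of the CLT-based interval described above, with exponentially distributed history scores as an illustrative stand-in for a real tally:

```python
import numpy as np

# Sketch of the CLT-based confidence interval for a Monte Carlo tally.
# History scores x_i are drawn here from an exponential distribution --
# an illustrative stand-in with finite mean and variance, as the CLT requires.
rng = np.random.default_rng(1)
N = 100_000
scores = rng.exponential(scale=2.0, size=N)   # one score per particle history

mean = scores.mean()
# The standard error of the mean shrinks like 1/sqrt(N).
sem = scores.std(ddof=1) / np.sqrt(N)
ci = (mean - 1.96 * sem, mean + 1.96 * sem)   # ~95% normal-theory interval

print(f"estimate = {mean:.4f}, 95% CI = ({ci[0]:.4f}, {ci[1]:.4f})")
```

The paper's point is that such an interval is only trustworthy once the empirical f(x) has a sufficiently light tail; for this exponential stand-in all moments exist, so the interval is valid by construction.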

In this dissertation I describe three main research projects in which I have participated as a graduate student. They share the common theme of using Monte Carlo computer simulation to investigate quantum field theories. I begin by giving a brief review of Monte Carlo simulation as a discrete path integral approach to a quantum theory. Two of the projects involve tests of the Monte Carlo renormalization group method, a systematic way of integrating out short distance features of a physical system in order to gain insight about its critical behavior, and hence its continuum limit. After a review of the ideas of the renormalization group, I discuss our thorough investigation of Monte Carlo renormalization of φ⁴ field theory on a two-dimensional square lattice. The second renormalization project overlaps with the other main thrust of my research, studying quantum gravity as the continuum limit of a sum over all possible ways of piecing together discrete simplices, or simplicial quantum gravity. I describe a unique Monte Carlo renormalization group study of scalar fields coupled to two-dimensional quantum gravity, where we were able to extract the anomalous field dimension for a case inaccessible to analytic methods. Finally, I discuss a study of four-dimensional quantum gravity coupled to gauge fields and special concerns one must be aware of when measuring connected correlators in fluctuating geometry.

The steady increase in computational performance has made Monte Carlo calculations for large/complex systems possible. However, in order to make these calculations practical, order-of-magnitude increases in performance are necessary. The Monte Carlo method is inherently parallel (particles are simulated independently) and thus has the potential for near-linear speedup with respect to the number of processors. Further, the ever-increasing accessibility of parallel computers, such as workstation clusters, facilitates the practical use of parallel Monte Carlo. Recognizing the nature of the Monte Carlo method and the trends in available computing, the code developers at Los Alamos National Laboratory implemented message passing in the general-purpose Monte Carlo radiation transport code MCNP (version 4A). The PVM package was chosen by the MCNP code developers because it supports a variety of communication networks, several UNIX platforms, and heterogeneous computer systems. This PVM version of MCNP has been shown to produce speedups that approach the number of processors, and thus is a very useful tool for transport analysis. Due to software incompatibilities on the local IBM SP2, PVM has not been available, and thus it has not been possible to take advantage of this useful tool. Hence, it became necessary to implement an alternative message-passing library package in MCNP. The message-passing interface (MPI) was selected because it is supported on the local system, takes advantage of the high-speed communication switches in the SP2, and is considered to be the emerging standard.

Wagner, J.C.; Haghighat, A. [Pennsylvania State Univ., University Park, PA (United States)
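The independent-history parallelism that makes MCNP amenable to PVM/MPI can be sketched in miniature; here Python threads stand in for MPI tasks, and the "tally" is a toy hit count estimating π (an illustrative assumption, not MCNP physics):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Sketch of the parallelism described above: particle histories are
# independent, so each "task" (a thread here, an MPI rank in real MCNP)
# runs its own batch with its own random stream and the tallies are summed.
def run_batch(rank, histories=50_000):
    rng = np.random.default_rng(rank)          # independent stream per task
    # Toy tally: fraction of random points inside the unit quarter-circle,
    # giving a Monte Carlo estimate of pi/4.
    x, y = rng.uniform(size=histories), rng.uniform(size=histories)
    return np.count_nonzero(x * x + y * y < 1.0)

n_tasks, histories = 4, 50_000
with ThreadPoolExecutor(max_workers=n_tasks) as pool:
    hits = sum(pool.map(run_batch, range(n_tasks)))

pi_est = 4.0 * hits / (n_tasks * histories)
print(f"pi estimate from {n_tasks} tasks: {pi_est:.4f}")
```

Because each task only contributes a partial tally at the end, communication is minimal, which is why near-linear speedup with processor count is achievable.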

The purpose of this investigation was to compare and validate the performance of the SIERRA Monte Carlo simulation routines for the analysis of the scatter-to-primary ratio (SPR) in the mammography setting. Two Monte Carlo simulation methods were addressed: the direct method, a straightforward and geometrically accurate simulation procedure; and the convolution method, which uses idealized geometry (monoenergetic, normally incident delta function input to the scattering medium) to produce scatter point spread functions (PSFs). The PSFs were weighted by the x-ray spectrum of interest and convolved with the field of view to estimate SPR values. The SPR results of both Monte Carlo procedures were extensively compared to five published sources, including Monte-Carlo-derived and physically measured SPR assessments. The direct method demonstrated an overall agreement with the literature of 3.7% accuracy (N=5), and the convolution method demonstrated an average of 7.1% accuracy (N=14). The comparisons were made over a range of parameters which included field of view, phantom thickness, x-ray energy, and phantom composition. Limitations of the beam stop method were also discussed. The results suggest that the SIERRA Monte Carlo routines produce accurate SPR calculations and may be useful for a more comprehensive study of scatter in mammography. PMID:10984229

We report results of both the diffusion quantum Monte Carlo (DMC) and reptation quantum Monte Carlo (RMC) methods on the potential energy curve of the helium dimer. We show that it is possible to obtain a highly accurate description of the helium dimer. An improved stochastic reconfiguration technique is employed to optimize the many-body wave function, which is the starting point for highly accurate simulations based on the DMC and RMC methods. We find that the results of these methods are in excellent agreement with the best theoretical results at short range; in particular, the recently developed RMC method yields accurate results with reduced statistical error and excellent agreement across the whole potential curve. For the equilibrium internuclear distance of 5.6 bohrs, the calculated total energy with the RMC method is −5.807 483 599 ± 0.000 000 016 hartree and the corresponding well depth is −11.003 ± 0.005 K.

An efficient O(N) cluster Monte Carlo method for Ising models with long-range interactions is presented. Our novel algorithm does not introduce any cutoff for the interaction range and thus strictly fulfills detailed balance. The realized stochastic dynamics is equivalent to that of the conventional Swendsen-Wang algorithm, which requires O(N²) operations per Monte Carlo sweep if applied to long-range interacting models. In addition, it is shown that the total energy and the specific heat can also be measured in O(N) time. We demonstrate the efficiency of our algorithm over the conventional method and the O(N log N) algorithm by Luijten and Blöte. We also apply our algorithm to the classical and quantum Ising chains with inverse-square ferromagnetic interactions, and confirm with high accuracy that a Kosterlitz-Thouless phase transition, associated with a universal jump in the magnetization, occurs in both cases.

Fukui, Kouki [Department of Applied Physics, University of Tokyo, 7-3-1 Hongo, Tokyo 113-8656 (Japan); Todo, Synge [Department of Applied Physics, University of Tokyo, 7-3-1 Hongo, Tokyo 113-8656 (Japan); CREST, Japan Science and Technology Agency, Kawaguchi 332-0012 (Japan)], E-mail: wistaria@ap.t.u-tokyo.ac.jp
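For context, the conventional Swendsen-Wang update that the O(N) method reproduces can be sketched on a nearest-neighbour periodic chain; this is a minimal illustration of cluster updates (bond activation, cluster identification, cluster flips), not the long-range O(N) algorithm itself:

```python
import numpy as np

# Sketch of a conventional Swendsen-Wang sweep on a nearest-neighbour
# periodic Ising chain. Aligned neighbours are bonded with probability
# p = 1 - exp(-2*beta*J); clusters are found with union-find and each
# cluster is flipped with probability 1/2.
rng = np.random.default_rng(2)
N, beta, J = 64, 0.5, 1.0
spins = rng.choice([-1, 1], size=N)
p_bond = 1.0 - np.exp(-2.0 * beta * J)

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path compression
        i = parent[i]
    return i

def sw_sweep(spins, rng):
    n = len(spins)
    parent = list(range(n))
    for i in range(n):                  # activate bonds between aligned pairs
        j = (i + 1) % n
        if spins[i] == spins[j] and rng.uniform() < p_bond:
            parent[find(parent, i)] = find(parent, j)
    flip = {}                           # flip each cluster with prob 1/2
    for i in range(n):
        r = find(parent, i)
        if r not in flip:
            flip[r] = rng.uniform() < 0.5
        if flip[r]:
            spins[i] = -spins[i]
    return spins

for _ in range(200):
    spins = sw_sweep(spins, rng)
print(f"final magnetization per spin: {spins.mean():.3f}")
```

For long-range couplings the bond-activation loop above would touch all O(N²) pairs, which is exactly the cost the O(N) method avoids.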

Potential new areas of application for the Monte Carlo method are discussed. The computation of the heat transfer and drag characteristics of aerobraked vehicles is one significant application, since the altitudes at which these systems would be deployed extend above the range which can be experimentally simulated. The application of Monte Carlo techniques to gas film lubrication problems associated with head-tape and head-disk interactions would represent the first time that this procedure has been applied in this extremely low speed regime. Some computational techniques which can be used to advantage in Monte Carlo calculations are also outlined, including the use of transformed body-fitted coordinate systems to reduce the time required to identify the cell location of a molecule, and the use of an adaptive cell structure to place cells in preferred locations as the flowfield develops.

Most Monte Carlo eigenvalue calculations are based on power iteration methods, like those used in analytical algorithms. But if N_H, the number of histories in each generation, is fixed, then such Monte Carlo calculations will be biased. Various arguments lead to the conclusion that eigenvalue and shape biases are both proportional to 1/N_H, but little more is known about their magnitudes. Numerical experiments on simple matrices suggest that the biases are small, but information more relevant to real reactor calculations is very sparse. In fact, to determine the bias in real reactor calculations is quite expensive. It seems worthwhile, therefore, to try to understand the Monte Carlo biases in systems more realistic than arbitrary matrices, but simpler than real reactors. For this reason, biases in simple one-group model problems have been computed.
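The power iteration underlying these eigenvalue calculations can be sketched on a small made-up "fission matrix" (an assumption, not a reactor model); each iteration plays the role of one generation with a renormalised source:

```python
import numpy as np

# Sketch of the power iteration underlying Monte Carlo eigenvalue
# calculations, on a small made-up "fission matrix" (an illustrative
# assumption, not a real reactor model). k converges to the dominant
# eigenvalue; the per-generation renormalisation mimics a fixed N_H.
F = np.array([[0.9, 0.3],
              [0.2, 0.6]])
source = np.array([1.0, 1.0])        # initial fission source shape
k = 1.0
for _ in range(100):                 # one "generation" per iteration
    new = F @ source
    k = new.sum() / source.sum()     # generation-wise k estimate
    source = new / new.sum()         # renormalise the source

k_exact = max(abs(np.linalg.eigvals(F)))
print(f"k = {k:.6f}")                # ≈ 1.0372, the dominant eigenvalue
```

In the deterministic sketch the renormalisation is exact; in a real Monte Carlo calculation the source at each generation is a finite sample of N_H histories, and it is this sampling that introduces the 1/N_H bias discussed above.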

For many scientific calculations, Monte Carlo is the only practical method available. Unfortunately, standard Monte Carlo methods converge slowly, as the square root of the computer time. We have shown, both numerically and theoretically, that the convergence rate can be increased dramatically if the Monte Carlo algorithm is allowed to adapt based on what it has learned from previous samples. As the learning continues, computational efficiency increases, often geometrically fast. The particle transport work achieved geometric convergence for a two-region problem as well as for problems with rapidly changing nuclear data. The statistics work provided theoretical proof of geometric convergence for continuous transport problems and promising initial results for airborne migration of particles. The statistical physics work applied adaptive methods to a variety of physical problems including the three-dimensional Ising glass, quantum scattering, and eigenvalue problems.
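One simple form of the adaptive idea, sketched on a toy two-region integration problem (an assumption, not the transport problems studied above): a pilot run learns each region's variability, and the remaining sample budget is allocated accordingly:

```python
import numpy as np

# Sketch of adaptive Monte Carlo on a toy two-region problem: after a
# pilot run, the remaining budget is allocated to regions in proportion
# to their estimated standard deviation (Neyman-style allocation).
# The integrand x**2 on [0, 2] is an illustrative assumption.
rng = np.random.default_rng(3)
f = lambda x: x**2
regions = [(0.0, 1.0), (1.0, 2.0)]

def region_estimate(a, b, n):
    """Estimate the integral of f over [a, b] and its per-sample std."""
    x = rng.uniform(a, b, n)
    vals = (b - a) * f(x)            # per-sample contribution to the integral
    return vals.mean(), vals.std(ddof=1)

# Pilot phase: learn each region's variability from a small sample.
pilot = [region_estimate(a, b, 1000) for a, b in regions]
sigmas = np.array([s for _, s in pilot])
alloc = (48_000 * sigmas / sigmas.sum()).astype(int)   # adaptive budget

total = sum(region_estimate(a, b, n)[0]
            for (a, b), n in zip(regions, alloc))
print(f"adaptive estimate of integral = {total:.4f} (exact = 8/3)")
```

This single pilot-then-allocate pass is far simpler than the continual learning that yields geometric convergence in the work above, but it illustrates the same principle: samples are steered toward where the remaining uncertainty lives.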

Monte Carlo sampling techniques have been proposed as a strategy to reduce the computational cost of contractions in tensor network approaches to solving many-body systems. Here, we put forward a variational Monte Carlo approach for the multiscale entanglement renormalization ansatz (MERA), which is a unitary tensor network. Two major adjustments are required compared to previous proposals with nonunitary tensor networks. First, instead of sampling over configurations of the original lattice, made of L sites, we sample over configurations of an effective lattice, which is made of just ln(L) sites. Second, the optimization of unitary tensors must account for their unitary character while being robust to statistical noise, which we accomplish with a modified steepest descent method within the set of unitary tensors. We demonstrate the performance of the variational Monte Carlo MERA approach in the relatively simple context of a finite quantum spin chain at criticality, and discuss future, more challenging applications, including two-dimensional systems.

A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed the Coarse Mesh Projection Method (CMPM) and can be implemented using coarse statistical bins for acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit from the signal processing discipline and the neutron multiplication eigenvalue problem from the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional fission matrix method is demonstrated for the three space-dimensional modeling of the initial core of a pressurized water reactor.

The remarkable accuracy of Monte Carlo (MC) dose calculation algorithms has led to the widely accepted view that these methods should and will play a central role in the radiotherapy treatment verification and planning of the future. The advantages of using MC clinically are particularly evident for radiation fields passing through inhomogeneities, such as lung and air cavities, and for small fields, including those used in today's advanced intensity modulated radiotherapy techniques. Many investigators have reported significant dosimetric differences between MC and conventional dose calculations in such complex situations, and have demonstrated experimentally the unmatched ability of MC calculations in modeling charged particle disequilibrium. The advantages of using MC dose calculations do come at a cost. The nature of MC dose calculations requires a highly detailed, in-depth representation of the physical system (accelerator head geometry/composition, anatomical patient geometry/composition and particle interaction physics) to allow accurate modeling of external beam radiation therapy treatments. Performing such simulations is computationally demanding and has only recently become feasible within mainstream radiotherapy practices. In addition, the output of the accelerator head simulation can be highly sensitive to inaccuracies within a model that may not be known with sufficient detail. The goal of this dissertation is to both improve and advance the implementation of MC dose calculations in modern external beam radiotherapy. To begin, a novel method is proposed to fine-tune the output of an accelerator model to better represent the measured output. In this method an intensity distribution of the electron beam incident on the model is inferred by employing a simulated annealing algorithm. The method allows an investigation of arbitrary electron beam intensity distributions and is not restricted to the commonly assumed Gaussian intensity. 
In a second component of this dissertation, the design, implementation and evaluation of a technique for reducing the latent variance inherent in the recycling of phase space particle tracks in a simulation is presented. In the technique a random azimuthal rotation about the beam's central axis is applied to each recycled particle, achieving a significant reduction of the latent variance. In a third component, the dissertation presents the first MC modeling of Varian's new RapidArc delivery system and a comparison of dose calculations with the Eclipse treatment planning system. A total of four arc plans are compared, including an oropharynx patient phantom containing tissue inhomogeneities. Finally, in a step toward introducing MC dose calculation into the planning of treatments such as RapidArc, a technique is presented to feasibly generate and store a large set of MC calculated dose distributions. A novel 3-D dyadic multi-resolution (MR) decomposition algorithm is presented and the compressibility of the dose data using this algorithm is investigated. The presented MC beamlet generation method, in conjunction with the presented 3-D data MR decomposition, represents a viable means to introduce MC dose calculation in the planning and optimization stages of advanced radiotherapy.
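The azimuthal-rotation technique for recycled phase-space particles can be sketched directly; the particle record layout (x, y, px, py) is an illustrative assumption:

```python
import numpy as np

# Sketch of the azimuthal-rotation technique described above: each
# recycled phase-space particle is rotated by a random angle about the
# beam's central (z) axis, decorrelating reused tracks. The particle
# record layout (x, y, px, py) is an illustrative assumption.
rng = np.random.default_rng(4)

def rotate_particle(x, y, px, py, rng):
    phi = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(phi), np.sin(phi)
    # Rotate position and transverse momentum by the same angle so the
    # track remains physically consistent.
    return (c * x - s * y, s * x + c * y,
            c * px - s * py, s * px + c * py)

x, y, px, py = 1.0, 0.0, 0.3, 0.4
xr, yr, pxr, pyr = rotate_particle(x, y, px, py, rng)

# Radial distance and transverse momentum magnitude are invariant,
# which is why the rotation is valid for a cylindrically symmetric beam.
print(np.isclose(xr**2 + yr**2, x**2 + y**2),
      np.isclose(pxr**2 + pyr**2, px**2 + py**2))
```

The rotation is only admissible because the beam is (to good approximation) cylindrically symmetric about its central axis; each recycled track then samples a fresh azimuth instead of repeating the stored one.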

Currently the Gamma Knife system is accompanied by a treatment planning system, Leksell GammaPlan (LGP), which is a standard, computer-based treatment planning system for Gamma Knife radiosurgery. In LGP, the dose calculation algorithm does not consider the scatter dose contributions or the inhomogeneity effect due to the skull and air cavities. To improve the dose calculation accuracy, Monte Carlo simulations have been implemented for the Gamma Knife planning system. In this work, the 201 Cobalt-60 sources in the Gamma Knife unit are considered to have the same activity. Each Cobalt-60 source is contained in a cylindrical stainless steel capsule. The particle phase space information is stored in four beam data files, which are collected on the inner sides of the 4 treatment helmets, after the Cobalt beam passes through the stationary and helmet collimators. Patient geometries are rebuilt from patient CT data. Twenty-two patients are included in the Monte Carlo simulation for this study. The dose is calculated using Monte Carlo in both homogeneous and inhomogeneous geometries with identical beam parameters. To investigate the attenuation effect of the skull bone, the dose in a 16 cm diameter spherical QA phantom is measured with and without a 1.5 mm lead covering and also simulated using Monte Carlo. The dose ratios with and without the 1.5 mm lead covering are 89.8% based on measurements and 89.2% according to Monte Carlo for an 18 mm collimator helmet. For patient geometries, the Monte Carlo results show that although the relative isodose lines remain almost the same with and without inhomogeneity corrections, the difference in the absolute dose is clinically significant. The average inhomogeneity correction is (3.9 ± 0.90)% for the 22 patients investigated. These results suggest that the inhomogeneity effect should be considered in the dose calculation for Gamma Knife treatment planning.

Multiferroic bismuth ferrite (BiFeO3) exhibits both ferroelectricity and antiferromagnetism, possibly enabling a connection between the two effects in the same material. While its antiferromagnetic character is relatively well understood, experimental measurements of the spontaneous polarization vary significantly over two orders of magnitude, from 0.06 C/m² to 1.50 C/m². We carry out accurate quantum Monte Carlo calculations to estimate the cohesion energy and the ferroelectric distortion well depth. We discuss the mechanisms proposed to understand the variations of the experimental polarization data in the light of our quantum Monte Carlo results.

Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest-neighbor connectivities between phosphate polyhedra for random, alternating, and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments of phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.
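A minimal sketch of a discrete bond model of this kind (parameter names and the Boltzmann weighting are illustrative assumptions): like/unlike bonds are drawn with a weight set by the bond energy difference dE, with dE = 0 recovering the random scenario:

```python
import numpy as np

# Sketch of a discrete bond model (parameter names are illustrative
# assumptions): each new bond links "like" or "unlike" tetrahedra with a
# Boltzmann weight set by the relative bond energy difference dE.
# dE = 0 reproduces the random-bonding scenario; dE < 0 favours
# clustering (like bonds) and dE > 0 favours alternation (unlike bonds).
rng = np.random.default_rng(5)

def like_bond_fraction(dE, n_bonds=100_000, kT=1.0):
    """Return the sampled fraction of 'like' bonds for energy difference dE."""
    p_like = np.exp(-dE / kT) / (np.exp(-dE / kT) + 1.0)
    like = rng.uniform(size=n_bonds) < p_like
    return like.mean()

random_frac = like_bond_fraction(0.0)       # random scenario
clustered_frac = like_bond_fraction(-2.0)   # like bonds favoured
alternating_frac = like_bond_fraction(2.0)  # unlike bonds favoured

print(f"random: {random_frac:.3f}, clustering: {clustered_frac:.3f}, "
      f"alternating: {alternating_frac:.3f}")
```

Sweeping dE and comparing the resulting connectivity fractions against the NMR-derived distributions is essentially how the three bonding scenarios above are discriminated.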

Monte Carlo methods have been used to compute k_eff and the fundamental mode eigenfunction of critical systems since the 1950s. While such calculations have become routine using standard codes such as MCNP and SCALE/KENO, three concerns must still be addressed to perform calculations correctly: convergence of k_eff and the fission distribution, bias in k_eff and tally results, and bias in statistics on tally results. This paper provides a review of the fundamental problems inherent in Monte Carlo criticality calculations. To provide guidance to practitioners, suggested best practices for avoiding these problems are discussed and illustrated by examples.

The range and straggling data obtained from the transport of ions in matter (TRIM) computer program were used to determine the trajectories of monoenergetic 60 MeV protons in muscle tissue using the Monte Carlo technique. The appropriate profile for the shape of a proton pencil beam in proton therapy, as well as the dose deposited in the tissue, were computed. The good agreement between our results and the corresponding experimental values is presented here to show the reliability of our Monte Carlo method. PMID:16094775

We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section at first order in α_s. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.

We present three generalized isobaric-isothermal ensemble Monte Carlo algorithms, which we refer to as the multibaric-multithermal, multibaric-isothermal, and isobaric-multithermal algorithms. These Monte Carlo simulations perform random walks widely in volume space and/or in potential energy space. From only one simulation run, one can calculate isobaric-isothermal-ensemble averages over wide ranges of pressure and temperature. We demonstrate the effectiveness of these algorithms by applying them to the Lennard-Jones 12-6 potential system with 500 particles. PMID:15447615

Continuous monitoring of cerebral blood oxygenation is critically important for the management of many life-threatening conditions. Non-invasive monitoring of cerebral blood oxygenation with a photoacoustic technique offers advantages over current invasive and non-invasive methods. We introduce Monte Carlo XYZ-PA to model the energy deposition in 3D and the time-resolved pressures and velocity potential based on the energy absorbed by the biological tissue. This paper outlines the benefits of using Monte Carlo XYZ-PA for optimization of photoacoustic measurement and imaging. To the best of our knowledge this is the first fully integrated tool for photoacoustic modelling.

Zam, Azhar; Jacques, Steven L.; Alexandrov, Sergey; Li, Youzhi; Leahy, Martin J.

Purpose: Monitor unit (MU) calculations for electron arc therapy were carried out using Monte Carlo simulations and verified by measurements. Variations in the dwell factor (DF), source-to-surface distance (SSD), and treatment arc angle (α) were studied. Moreover, the possibility of measuring the DF, which requires gantry rotation, using a solid water rectangular, instead of cylindrical, phantom was investigated. Methods: A phase space file based on the 9 MeV electron beam with rectangular cutout (physical size = 2.6×21 cm²) attached to the block tray holder of a Varian 21 EX linear accelerator (linac) was generated using the EGSnrc-based Monte Carlo code and verified by measurement. The relative output factor (ROF), SSD offset, and DF, needed in the MU calculation, were determined using measurements and Monte Carlo simulations. An ionization chamber, a radiographic film, a solid water rectangular phantom, and a cylindrical phantom made of polystyrene were used in dosimetry measurements. Results: Percentage deviations of ROF, SSD offset, and DF between measured and Monte Carlo results were 1.2%, 0.18%, and 1.5%, respectively. It was found that the DF decreased with an increase in α, and such a decrease in DF was more significant in the α range of 0°-60° than 60°-120°. Moreover, for a fixed α, the DF increased with an increase in SSD. Comparing the DF determined using the rectangular and cylindrical phantoms through measurements and Monte Carlo simulations, it was found that the DF determined by the rectangular phantom agreed well with that by the cylindrical one within ±1.2%. It shows that a simple setup of a solid water rectangular phantom was sufficient to replace the cylindrical phantom using our specific cutout to determine the DF associated with the electron arc.
Conclusions: Verified by dosimetry measurements, Monte Carlo simulations proved to be an effective alternative way to perform MU calculations for electron arc therapy. Since Monte Carlo simulations can generate a precalculated database of ROF, SSD offset, and DF for the MU calculation, with a reduction in human effort and linac beam-on time, it is recommended that Monte Carlo simulations be partially or completely integrated into the commissioning of electron arc therapy.

Chow, James C. L.; Jiang Runqing [Radiation Medicine Program, Princess Margaret Hospital, University Health Network, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada) and Department of Physics, Ryerson University, Toronto, Ontario M5B 2K3 (Canada); Department of Medical Physics, Grand River Regional Cancer Center, Kitchener, Ontario N2G 1G3 (Canada)

Since its publication, the reptation quantum Monte Carlo algorithm of Baroni and Moroni (1999 Phys. Rev. Lett. 82 4745) has been applied to several important problems in physics, but its mathematical foundations are not well understood. We show that their algorithm is not of typical Metropolis-Hastings type, and we specify conditions required for the generated Markov chain to be stationary and to converge to the intended distribution. The time-step bias may add up, and in many applications it is only the middle of a reptile that is the most important. Therefore, we propose an alternative, 'no-compromise reptation quantum Monte Carlo' to stabilize the middle of the reptile.

Kong Yuen, Wai; Farrar, Thomas J.; Rothstein, Stuart M.

We present a new evaluation of thermonuclear reaction rates for astrophysics involving proton and alpha-particle induced reactions, in the target mass range between A = 14 and 40, including many radioactive targets. A method based on Monte Carlo techniques is used to evaluate thermonuclear reaction rates and their uncertainties. At variance with previous evaluations, the low, median and high rates are statistically defined and a lognormal approximation to the rate distribution is given. This provides improved input for astrophysical model calculations using also the Monte Carlo method to estimate uncertainties on isotopic abundances.
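
The lognormal parametrization described above is easy to reproduce in a small sketch. The following Python snippet (an illustration with made-up rate parameters mu and sigma, not the evaluation's data) samples a lognormal rate distribution and defines the low, median, and high rates as the 16th, 50th, and 84th percentiles:

```python
import math
import random
import statistics

# Illustrative sketch (not the paper's code): if a thermonuclear rate is
# approximated as lognormal with parameters mu and sigma, the low, median
# and high rates can be statistically defined as the 16th, 50th and 84th
# percentiles of the sampled rate distribution.
random.seed(1)
mu, sigma = math.log(1.0e-9), 0.4            # hypothetical rate parameters
samples = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

cuts = statistics.quantiles(samples, n=100)  # 99 percentile cut points
low, median, high = cuts[15], cuts[49], cuts[83]
# For a lognormal the median is exp(mu); low and high are roughly
# exp(mu - sigma) and exp(mu + sigma).
```

For a true lognormal these percentiles coincide with the median and the one-sigma factors, which is what makes this three-number summary a convenient input to abundance calculations.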

Coc, A. [Centre de Spectrometrie Nucleaire et de Spectrometrie de Masse (CSNSM), UMR 8609, CNRS/IN2P3 (France) and Universite Paris Sud 11, Batiment 104, 91405 Orsay Campus (France); Iliadis, C.; Longland, R.; Champagne, A. E. [Department of Physics and Astronomy, University of North Carolina, Chapel Hill, NC 27599-3255 (United States) and Triangle Universities Nuclear Laboratory, Durham, NC 27708-0308 (United States); Fitzgerald, R. [National Institute of Standards and Technology, 100 Bureau Drive, Stop 8462, Gaithersburg, MD 20899-8462 (United States)

The size distribution of metastatic tumors and its time evolution are traditionally described by integrodifferential equations and stochastic models. Here we develop a simple Monte Carlo approach in which each event of metastasis is treated as a chance event through random-number generation. We demonstrate the accuracy of this approach on a specific growth and metastasis model by showing that it quantitatively reproduces the size distribution and the total number of tumors as a function of time. The approach also yields statistical distribution of patient-to-patient variations, and has the flexibility to incorporate many real-life complexities.
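
The idea of treating each metastasis as a chance event via random-number generation can be sketched in a few lines. All parameters below (growth rate, seeding coefficient, time step) are invented for illustration and are not the model of the paper:

```python
import random

# Minimal sketch of the idea (an assumed toy model, not the authors'):
# at each time step the primary tumor grows, and each metastasis event
# is a chance event decided by a random number against a size-dependent
# per-step seeding probability.
random.seed(42)
dt, steps = 0.01, 1000
growth_rate, seed_coeff = 1.0, 0.05

size = 1.0
n_tumors = 0
for _ in range(steps):
    size *= 1.0 + growth_rate * dt           # exponential primary growth
    p_event = seed_coeff * size * dt         # chance of seeding this step
    if random.random() < min(p_event, 1.0):  # the "chance event"
        n_tumors += 1
```

Repeating the whole loop many times with different seeds yields the patient-to-patient variation mentioned in the abstract.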

We present an approach for ab initio many-body calculations of excited states in solids. Using auxiliary-field quantum Monte Carlo, we introduce an orthogonalization constraint with virtual orbitals to prevent collapse of the stochastic Slater determinants in the imaginary-time propagation. Trial wave functions from density-functional calculations are used for the constraints. Detailed band structures can be calculated. Results for standard semiconductors are in good agreement with experiments; comparisons are also made with GW calculations and the connections and differences are discussed. For the challenging ZnO wurtzite structure, we obtain a fundamental band gap of 3.26(16) eV, consistent with experiments.

We have developed a Monte Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools.

Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.

Global sensitivity indices for rather complex mathematical models can be efficiently computed by Monte Carlo (or quasi-Monte Carlo) methods. These indices are used for estimating the influence of individual variables or groups of variables on the model output.
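
As a concrete illustration of estimating a first-order sensitivity index by plain Monte Carlo, the sketch below applies the standard pick-freeze construction to a toy model f(x1, x2) = x1 + 2*x2 with independent uniform inputs, for which the exact first-order index of x1 is Var(x1)/Var(f) = 0.2. The model and sample size are assumptions chosen purely for illustration:

```python
import random

# Hedged sketch of a first-order Sobol sensitivity index estimated by
# Monte Carlo via "pick-freeze": reuse x1 from sample A but draw a
# fresh x2 from sample B, and estimate Cov(f_A, f_AB) / Var(f).
random.seed(0)

def f(x1, x2):
    return x1 + 2.0 * x2

n = 200_000
a = [(random.random(), random.random()) for _ in range(n)]
b = [(random.random(), random.random()) for _ in range(n)]

ya = [f(x1, x2) for x1, x2 in a]
# Freeze x1 from sample A, pick a fresh x2 from sample B:
yab = [f(a[i][0], b[i][1]) for i in range(n)]

mean = sum(ya) / n
var = sum((y - mean) ** 2 for y in ya) / n
cov = sum(ya[i] * yab[i] for i in range(n)) / n - mean * mean
s1 = cov / var   # first-order index of x1; should be near 0.2
```

The same construction, repeated per input (or per group of inputs), gives the full set of first-order indices.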

Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

The immense size and complex geometry of the ITER experimental fusion reactor require the development of special techniques that can accurately and efficiently perform neutronics simulations with minimal human effort. This paper shows the effect of the hybrid Monte Carlo (MC)/deterministic techniques - Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) - in enhancing the efficiency of the neutronics modeling of ITER and demonstrates the applicability of coupling these methods with computer-aided-design-based MC. Three quantities were calculated in this analysis: the total nuclear heating in the inboard leg of the toroidal field coils (TFCs), the prompt dose outside the biological shield, and the total neutron and gamma fluxes over a mesh tally covering the entire reactor. The use of FW-CADIS in estimating the nuclear heating in the inboard TFCs resulted in a factor of ~275 increase in the MC figure of merit (FOM) compared with analog MC and a factor of ~9 compared with the traditional methods of variance reduction. By providing a factor of ~21 000 increase in the MC FOM, the radiation dose calculation showed how the CADIS method can be effectively used in the simulation of problems that are practically impossible using analog MC. The total flux calculation demonstrated the ability of FW-CADIS to simultaneously enhance the MC statistical precision throughout the entire ITER geometry. Collectively, these calculations demonstrate the ability of the hybrid techniques to accurately model very challenging shielding problems in reasonable execution times.

Ibrahim, A. [University of Wisconsin; Mosher, Scott W [ORNL; Evans, Thomas M [ORNL; Peplow, Douglas E. [ORNL; Sawan, M. [University of Wisconsin; Wilson, P. [University of Wisconsin; Wagner, John C [ORNL; Heltemes, Thad [University of Wisconsin, Madison]

The nonequilibrium behaviour of electrons in a drift tube in SF6 is investigated using a Monte Carlo simulation. It is shown that, in the case of a steady-state experiment, the mean properties of the electrons (drift velocity, number density, mean energy) present some strong spatial oscillations and that attachment occurs in some very localised regions of the gap. The question

The practicality and usefulness of variational Monte Carlo calculations of atomic structure are demonstrated. The method is found to succeed in quantitatively illustrating electron shielding, effective nuclear charge, l-dependence of the orbital energies, and singlet-triplet energy splitting and ionization energy trends in atomic structure theory.
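
The flavor of such variational Monte Carlo calculations can be conveyed with the simplest possible case: the hydrogen atom with trial wave function psi = exp(-alpha*r), sampled by Metropolis moves on |psi|². This sketch (not the article's code, and in atomic units) exploits the fact that for alpha = 1 the local energy is exactly -0.5 Hartree everywhere:

```python
import math
import random

# Hedged illustration of variational Monte Carlo: Metropolis sampling
# of |psi|^2 for psi = exp(-alpha*r), then averaging the local energy
# E_L = -alpha^2/2 + (alpha - 1)/r (hydrogen atom, atomic units).
random.seed(7)
alpha = 1.0     # for alpha = 1, E_L = -0.5 Hartree exactly

def local_energy(r):
    return -0.5 * alpha ** 2 + (alpha - 1.0) / r

x, y, z = 1.0, 0.0, 0.0
energies = []
for step in range(20_000):
    xn = x + random.uniform(-0.5, 0.5)
    yn = y + random.uniform(-0.5, 0.5)
    zn = z + random.uniform(-0.5, 0.5)
    r_old = math.sqrt(x * x + y * y + z * z)
    r_new = math.sqrt(xn * xn + yn * yn + zn * zn)
    # Metropolis acceptance on |psi|^2 = exp(-2*alpha*r)
    if random.random() < math.exp(-2.0 * alpha * (r_new - r_old)):
        x, y, z = xn, yn, zn
    if step > 2_000:                        # discard equilibration
        energies.append(local_energy(math.sqrt(x * x + y * y + z * z)))

e_mean = sum(energies) / len(energies)      # close to -0.5
```

Varying alpha and minimizing the mean (or variance) of the local energy is the variational step; many-electron atoms follow the same pattern with richer trial functions.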

SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body geometry or surface geometry models, and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis.

Several lattice versions of the Gross-Neveu model are constructed and studied using Monte Carlo methods. The expected chiral structures are confirmed by the numerical results. The correct asymptotic freedom behaviour is recovered with the appropriate number of species taken into account. The models differ in their number of soft modes and their strong coupling behaviour. In some of them, chiral

In this paper we study a recent generalization of the XY-model in two dimensions using the Monte Carlo method. The vortex density, specific heat, energy and critical temperature are obtained. Some results are compared with approximate analytical calculations. The nature of the phase transition as the generalization parameter varies is discussed.

L. A. S. Mól; A. R. Pereira; H. Chamati; S. Romano

A Monte Carlo algorithm is described that can be used in place of the nested bootstrap. It is particularly advantageous when there is a premium on the number of bootstrap samples, either because samples are hard to generate or because expensive computations are applied to each sample. This recycling algorithm is useful because it enables inference procedures like prepivoting and

To understand the self-organization of magnetic nanocrystals in an applied field, we perform Monte Carlo simulations of Stockmayer fluids confined between two parallel walls. The system is examined in the gas-liquid coexistence region of its phase diagram and the field is applied perpendicular to the walls. Gibbs ensemble simulations are carried out to determine the phase coexistence curves of the

Following Bender et al., we apply the finite element method to a compact non-abelian system. We perform a Hamiltonian Monte Carlo simulation on a 0+1 dimensional lattice and compare the results with those obtained by a finite difference method. For the kinetic energy, the Feynman-Hibbs prescription is followed and the finite element method is shown to be distinctly superior.

ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.

The Monte Carlo technique using a simple single scattering model, different from Reimer's, is applied to the field of the Electron Probe Microanalyzer (EPMA) and the Scanning Electron Microscope (SEM). The calculations show fairly good agreement with the experiments on back-scattered electrons, energy dissipation, characteristic X-ray production and secondary electron emission. Also the electron trajectories are depicted on the chart

We consider in the present paper an extension of numerical path integral methods for use in computing finite temperature time correlation functions. We demonstrate that coordinate rotation techniques extend appreciably the time domain over which Monte Carlo methods are of use in the construction of such correlation functions.

A reverse Monte Carlo radiative transfer code is developed to predict rocket plume base heating. It is more computationally efficient than the forward Monte Carlo method, because only the radiation that strikes the receiving point is considered. The method easily handles both gas and particle emission and particle scattering. Band models are used for the molecular emission spectra, and the Henyey-Greenstein phase function is used for the scattering. Reverse Monte Carlo predictions are presented for (1) a gas-only model of the Space Shuttle main engine plume; (2) a pure-scattering plume with the radiation emitted by a hot disk at the nozzle exit; (3) a nonuniform-temperature, scattering, emitting and absorbing plume; and (4) a typical solid rocket motor plume. The reverse Monte Carlo method is shown to give good agreement with previous predictions. Typical solid rocket plume results show that (1) CO2 radiation is emitted from near the edge of the plume; (2) H2O gas and Al2O3 particles emit radiation mainly from the center of the plume; and (3) Al2O3 particles emit considerably more radiation than the gases over the 400-17,000 cm⁻¹ spectral interval.

We propose the extended triple energy window (ETEW) method that improves quantitation and contrast in SPECT images. ETEW is a modification of the triple energy window (TEW) method which corrects for scatter by using abutted scatter rejection windows, which can overestimate or underestimate scatter. ETEW is compared to TEW using Monte Carlo simulated data for point sources as well as

Jung-Kyun Bong; Hye-Kyung Son; Jong Doo Lee; Hee-Joung Kim

Monte Carlo codes are extensively used for probabilistic simulations of various physical systems. These codes are widely used in calculations of neutron and gamma ray transport in soil for radiation shielding, soil activation by neutrons, the well logging industry, and in simulations of complex nuclear gauges for in-soil measurements. However, these calculations are complicated by the diversity of soils in

Lucian Wielopolski; Zhiguang Song; Itzhak Orion; Albert L. Hanson; George Hendrey

The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and ∫_{−∞}^{∞} f(x) dx = 1.
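
The two requirements, and the 1/√N error reduction mentioned above, are easy to see in a small sketch. The tally below is an arbitrary stand-in for a particle-history score, not any particular transport code's tally:

```python
import math
import random

# Sketch of the central limit theorem applied to a Monte Carlo tally:
# the history scores x_i have finite mean and variance, so the sample
# mean carries a normal-distribution confidence interval whose half
# width shrinks like 1/sqrt(N).
random.seed(3)

def tally():
    return math.exp(random.random())   # one illustrative "history score"

for n in (1_000, 100_000):
    scores = [tally() for _ in range(n)]
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    half_width = 1.96 * math.sqrt(var / n)   # 95% confidence interval
    print(n, round(mean, 4), round(half_width, 5))
```

For this tally the exact mean is e − 1, and the printed half width at N = 100 000 is roughly a tenth of the one at N = 1 000, as 1/√N predicts.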

A new approach of iterative Monte Carlo algorithms for the well-known inverse matrix problem is presented and studied. The algorithms are based on a special technique of iteration parameter choice, which allows one to control the convergence of the algorithm for any column (row) of the matrix using different relaxation parameters. The choice of these parameters is controlled by a posteriori

What actuaries call cash flow testing is a large-scale simulation pitting a company's current policy obligations against future earnings based on interest rates. While life contingency issues associated with contract payoff are a mainstay of the actuarial sciences, modeling the random fluctuations of US Treasury rates is less studied. Furthermore, applying standard simulation techniques, such as the Monte Carlo method,

Monte Carlo calculations of the rate of absorbed energy from a photon beam were carried out to compare the response of commercial plastic scintillators with that of air in the energy region below 1 MeV. We have found that for photon energies above 100 keV, the response of different kinds of plastics is proportional to that of air, while below

The shape space model of de Boer, Segel and Perelson for the immune system is studied with a probabilistic updating rule by Monte Carlo simulation. A suitable mathematical form is chosen for the probability of increase of B-cell concentration depending on the concentration around the mirror image site. The results obtained agree reasonably with the results obtained by deterministic cellular automata.

Experiences with vectorization of production-level Monte Carlo codes such as KENO-IV, MCNP, VIM, and MORSE have shown that it is difficult to attain high speedup ratios on vector processors because of indirect addressing, nests of conditional branches, short vector lengths, cache misses, and operations required for robustness and generality. A previous work has already shown that the first three difficulties can be resolved by using special computer hardware for vector processing of Monte Carlo codes. Here, the fourth and fifth difficulties are discussed in detail using the results for a vectorized version of the MORSE code. As for the fourth difficulty, it is shown that the cache miss ratio affects execution times of the vectorized Monte Carlo codes and that the ratio strongly depends on the number of particles simultaneously tracked. As for the fifth difficulty, it is shown that remarkable speedup ratios are obtained by removing operations that are not essential to the specific problem being solved. These experiences have shown that if a production-level Monte Carlo code system had the capability to selectively construct source coding that complements the input data, the resulting code could achieve much higher performance.

Higuchi, Kenji; Asai, Kiyoshi [Japan Atomic Energy Research Inst., Tokyo (Japan). Center for Promotion of Computational Science and Engineering; Hasegawa, Yukihiro [Research Organization for Information Science and Technology, Tokai, Ibaraki (Japan)

A practical guide for the implementation of the MORSE-CG Monte Carlo radiation transport computer code system is presented. The various versions of the MORSE code are compared and contrasted, and the many references dealing explicitly with the MORSE-CG code are reviewed. The treatment of angular scattering is discussed, and procedures for obtaining increased differentiality of results in terms of reaction

In this paper we present a classical Monte Carlo simulation of the orthorhombic phase of crystalline polyethylene, using an explicit atom force field with unconstrained bond lengths and angles and periodic boundary conditions. We used a recently developed algorithm which, apart from standard Metropolis local moves, employs also global moves consisting of displacements of the center of mass of the

The use of the Monte Carlo method to simulate radiation transport has become the most accurate means of predicting absorbed dose distributions and other quantities of interest in radiation treatments of cancer patients using either external or radionuclide radiotherapy. This trend has continued for the estimation of the absorbed dose in diagnostic procedures using radionuclides as well as the assessment

The results of the standard local Monte Carlo are changed by offering instantons as candidates in the Metropolis procedure. We also define an O(3) topological charge with no contribution from planar dislocations. The RG behavior is still not recovered. Bantrell Fellow in Theoretical Physics.

This work presents a novel global illumination algorithm which concentrates computation on important light transport paths and automatically adjusts the energy-distribution area for each light transport path. We adapt the statistical framework of Population Monte Carlo into global illumination to improve rendering efficiency. Information collected in previous iterations is used to guide subsequent iterations by adapting

Yu-Chi Lai; Shaohua Fan; Stephen Chenney; Charles Dyer

Recent advances in atomic force microscopy (AFM) have enabled researchers to obtain images of supercoiled DNAs deposited on mica surfaces in buffered aqueous milieux. Confining a supercoiled DNA to a plane greatly restricts its configurational freedom, and could conceivably alter certain structural properties, such as its twist and writhe. A program that was originally written to perform Monte Carlo simulations

A Monte Carlo study was conducted to evaluate six models commonly used to evaluate change. The results revealed specific problems with each. Analysis of covariance and analysis of variance of residualized gain scores appeared to substantially and consistently overestimate the change effects. Multiple factor analysis of variance models utilizing…

In this paper we present the thermodynamic properties of a DNA hairpin studied using non-Boltzmann Monte Carlo methods. The force-temperature phase diagram and the Landau free energy near and at critical temperatures are obtained. From the free energy curves it is observed that the transition from the closed loop state to the open state is first order.

By a new Monte Carlo algorithm, we evaluate the sidedness probability p_n of a planar Poisson-Voronoi cell in the range 3 ≤ n ≤ 1600. The algorithm is developed on the basis of earlier theoretical work; it exploits, in particular, the known asymptotic behaviour of p_n as n → ∞. Our p_n values all have between four and six significant

A generalized, three-dimensional Monte Carlo model and computer code (SPOOR) are described for simulating atmospheric transport and dispersal of small pollutant clouds. A cloud is represented by a large number of particles that we track by statistically sampling simulated wind and turbulence fields. These fields are based on generalized wind data for large-scale flow and turbulent

Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature

Cameron B. Browne; Edward Powley; Daniel Whitehouse; Simon M. Lucas; Peter I. Cowling; Philipp Rohlfshagen; Stephen Tavener; Diego Perez; Spyridon Samothrakis; Simon Colton

The goal of this work was to develop an improved Monte Carlo method and implement a computer code for performing automatic integration of multidimensional integrals of the form ∫ f(X) dX over a closed region in k-dimensional Euclidean space, where X...
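
A minimal version of such an estimator, sketched here for the unit cube with an illustrative integrand (not the report's code), is simply the average of f at uniformly sampled points times the volume of the region:

```python
import random

# Hedged sketch of plain Monte Carlo integration of f(X) over a closed
# region, here the k-dimensional unit cube. The integrand is chosen so
# the exact answer is known: integral of sum(x_i^2) over [0,1]^k is k/3.
random.seed(11)
k, n = 5, 200_000

def f(x):
    return sum(xi * xi for xi in x)

total = 0.0
for _ in range(n):
    total += f([random.random() for _ in range(k)])
estimate = total / n                  # unit cube has volume 1
```

The statistical error falls as 1/√n independently of the dimension k, which is the property that makes Monte Carlo attractive for high-dimensional integrals; the "improved" methods the abstract refers to layer variance reduction and adaptivity on top of this baseline.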

We have used Monte Carlo simulation of autofluorescence in the retina to determine that noninvasive detection of nutritional iron deficiency is possible. Nutritional iron deficiency (which leads to iron deficiency anemia) affects more than 2 billion people worldwide, and there is an urgent need for a simple, noninvasive diagnostic test. Zinc protoporphyrin (ZPP) is a fluorescent compound that accumulates in

Presents sports-oriented examples (cricket and football) in which Monte Carlo methods are used on microcomputers to teach probability concepts. Both examples include computer programs (with listings) which utilize the microcomputer's random number generator. Instructional strategies, with further challenges to help students understand the role of…

This paper presents an adaptation of the widely accepted Monte Carlo method for Multi-layered media (MCML). Its original Henyey-Greenstein phase function is an interesting approach for describing how light scattering occurs inside biological tissues. It has the important advantage of generating deflection angles in an efficient, and therefore computationally fast, manner. However, in order to allow the fast generation of the phase function, the MCML code generates a distribution for the cosine of the deflection angle instead of a distribution for the deflection angle itself, causing a bias in the phase function. Moreover, other, more elaborate phase functions are not available in the MCML code. To overcome these limitations of MCML, it was adapted to allow the use of any discretized phase function. An additional tool allows generating a numerical approximation of the phase function for every layer. This can be a discretized version of (1) the Henyey-Greenstein phase function, (2) a modified Henyey-Greenstein phase function, or (3) a phase function generated from Mie theory. These discretized phase functions are then stored in a look-up table, which can be used by the adapted Monte Carlo code. The Monte Carlo code with flexible phase function choice (fpf-MC) was compared and validated against the original MCML code. The novelty of the developed program is a user-friendly algorithm that allows several types of phase functions to be generated and applied in a Monte Carlo method without compromising computational performance.
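
The look-up-table approach can be sketched as follows: discretize a phase function over cos(theta) bins, accumulate its cumulative distribution once, then draw deflection cosines by inverse-CDF search. This is an illustration of the general technique, not the fpf-MC code; Henyey-Greenstein with g = 0.9 serves purely as the example density, and the bin count is an arbitrary choice:

```python
import bisect
import random

# Hedged sketch of sampling a discretized phase function from a
# look-up table: tabulate the density over cos(theta) bins, build the
# CDF once, then sample by binary search over the CDF.
random.seed(5)
g, nbins = 0.9, 2000

def hg(mu):   # Henyey-Greenstein density in mu = cos(theta)
    return (1 - g * g) / (2.0 * (1 + g * g - 2 * g * mu) ** 1.5)

width = 2.0 / nbins
mus = [-1.0 + (i + 0.5) * width for i in range(nbins)]
weights = [hg(mu) * width for mu in mus]

cdf, acc = [], 0.0
for w in weights:
    acc += w
    cdf.append(acc)
cdf = [c / cdf[-1] for c in cdf]      # renormalize the table

def sample_mu():
    return mus[bisect.bisect_left(cdf, random.random())]

mean_mu = sum(sample_mu() for _ in range(100_000)) / 100_000
# For Henyey-Greenstein the mean cosine of the deflection equals g.
```

Because any tabulated density can be dropped into the same table, the sampling code is unchanged whether the phase function came from Henyey-Greenstein, a modified form, or Mie theory, which is exactly the flexibility the adaptation aims for.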

A brief review of the semiclassical Monte Carlo (MC) method for semiconductor device simulation is given, covering the standard MC algorithms, variance reduction techniques, the self-consistent solution, and the physical semiconductor model. A link between physically based MC methods and the numerical method of MC integration is established. The integral representations of the transient and the steady-state Boltzmann equations are presented

Hans Kosina; Michail Nedjalkov; Siegfried Selberherr

The central theme will be on the historical setting and origins of the Monte Carlo Method. The scene was the post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo Event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the Method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest perhaps is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo Method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited for the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences.

Traditional Markov chain Monte Carlo methods suffer from low acceptance rates, slow mixing, and low efficiency in high dimensions. Hamiltonian Monte Carlo resolves this issue by avoiding the random walk. Hamiltonian Monte Carlo (HMC) is a Markov chain Monte Carlo (MCMC) technique built upon the basic principle of Hamiltonian mechanics. Hamiltonian dynamics allows the chain to move along trajectories of constant energy, taking large jumps in the parameter space with relatively inexpensive computations. This new technique improves the acceptance rate by a factor of 4 while reducing the correlations, and boosts the efficiency by almost a factor of D in a D-dimensional parameter space. Therefore shorter chains will be needed for a reliable parameter estimation compared to a traditional MCMC chain yielding the same performance. Besides that, HMC is well suited for sampling from non-Gaussian and curved distributions, which are very hard to sample from using traditional MCMC methods. The method is very simple to code and can be easily plugged into standard parameter estimation codes such as CosmoMC. In this paper we demonstrate how HMC can be efficiently used in cosmological parameter estimation. We also discuss possible ways of getting good estimates of the derivatives of (the log of) the posterior, which are needed for HMC.
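
A minimal sketch of the HMC mechanics (leapfrog integration of Hamiltonian dynamics plus a Metropolis energy check) on a one-dimensional standard normal target, not the paper's cosmological setup; the step size and trajectory length are arbitrary illustrative choices:

```python
import math
import random

# Hedged sketch of Hamiltonian Monte Carlo for U(q) = q^2/2, i.e. a
# standard normal target: resample momentum, integrate Hamilton's
# equations with leapfrog, then accept/reject on the energy error.
random.seed(9)
eps, n_leap, n_samp = 0.2, 20, 20_000

def grad_u(q):
    return q                         # dU/dq for U(q) = q^2 / 2

q, chain = 0.0, []
for _ in range(n_samp):
    p = random.gauss(0.0, 1.0)       # fresh momentum each trajectory
    qn, pn = q, p
    pn -= 0.5 * eps * grad_u(qn)               # leapfrog: half kick
    for _ in range(n_leap - 1):
        qn += eps * pn
        pn -= eps * grad_u(qn)
    qn += eps * pn
    pn -= 0.5 * eps * grad_u(qn)               # final half kick
    dh = 0.5 * (qn * qn + pn * pn - q * q - p * p)   # energy error
    if dh <= 0.0 or random.random() < math.exp(-dh): # Metropolis check
        q = qn
    chain.append(q)

mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

Because the leapfrog nearly conserves energy, almost every trajectory is accepted, and successive samples are far less correlated than a random-walk chain of the same length would be.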

Hajian, Amir [Department of Physics, Jadwin Hall, Princeton University, P.O. Box 708, Princeton, New Jersey 08542 (United States); Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, New Jersey 08544 (United States)

This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…

Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

Optical Coherence Tomography (OCT) is a new technique mainly used in biomedical imaging. We present here a Particle-Fixed Monte Carlo (PFMC) simulation of the OCT signal. In the PFMC model, the scattering particles of the sample are assumed to be temporarily fixed and randomly distributed in the simulation of the backscattered light. An efficient partitioning scheme is proposed to speed up

Andrews et al (1972) carried out an extensive Monte Carlo study of robust estimators of location. Their conclusions were that the Hampel and the skipped estimates, as classes, seemed to be preferable to some of the other currently fashionable estimators. The present study extends this work to include estimators not previously examined. The estimators are compared over short-tailed as well

When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods, which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science.

Hendricks, J. [Los Alamos National Lab., NM (United States)]

This paper uses Monte Carlo techniques to assess the loss in forecast accuracy incurred when the true data generation process (DGP) exhibits parameter instability that is either overlooked or incorrectly modelled. We find that the loss is considerable when a fixed coefficient model (FCM) is estimated instead of the true time-varying parameter model (TVCM), this

Costas Anyfantakis; Guglielmo Maria Caporale; Nikitas Pittis

We present a variable metric Hybrid Monte Carlo method following the ideas in [3], and propose a choice of metric that proves efficient for sampling from the potential of a stiff spring. This is a first step in extending these ideas to the more general potentials appearing in Molecular Dynamics.

Over the past 20 years, many problems in Bayesian inference that were previously intractable have become fairly routine to handle, using a computationally intensive technique for exploring the posterior distribution called Markov chain Monte Carlo (MCMC). Primarily because of insufficient computing capabilities, most MCMC applications have been limited to rather standard statistical models. However, with the computing power of

We heuristically discuss the application of the method of maximum entropy to the extraction of dynamical information from imaginary-time quantum Monte Carlo data. The discussion emphasizes the utility of a Bayesian approach to statistical inference and the importance of statistically well-characterized data. 14 refs.

Gubernatis, J.E.; Silver, R.N. (Los Alamos National Lab., NM (United States)); Jarrell, M. (Cincinnati Univ., OH (United States))

Calculations have been performed using the Monte Carlo code MORSE-CG to determine the neutron streaming through various straight and stepped gaps between radiation shield sectors in the conceptual tokamak fusion power plant design STARFIRE. This design calls for "pie-shaped" radiation shields with gaps between segments. It is apparent that some type of offset, or stepped-gap, configuration will be necessary

Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries, using the appropriate level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards, and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e., money) can be realized. In addition, it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful, more problems can be accurately modeled. Second, as computing power becomes cheaper, Monte Carlo methods become more widely accessible. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented, along with a few examples of applications and future directions.

Sale, Kenneth E.; Bergstrom, Paul M.; Buck, Richard M.; Cullen, Dermot; Fujino, D.; Hartmann-Siantar, Christine

Risk assessment of modeling predictions is becoming increasingly important as input to decision makers. Probabilistic risk analysis is typically expensive to perform, since it generally requires the calculation of a model output probability distribution function (PDF) followed by the integration of the risk portion of the PDF. Here we describe the new risk analysis Guided Monte Carlo (GMC) technique. It maintains

We describe adaptive Markov chain Monte Carlo (MCMC) methods for sampling posterior distributions arising from Bayesian variable selection problems. Point-mass mixture priors are commonly used in Bayesian variable selection problems in regression. However, for generalized linear and nonlinear models, where the conditional densities cannot be obtained directly, the resulting mixture posterior may be difficult to sample using

In this chapter, we describe three related studies of the universal physics of two-component unitary Fermi gases with resonant short-ranged interactions. First we discuss an ab initio auxiliary-field quantum Monte Carlo technique for calculating thermodynamic properties of the unitary gas from first principles. We then describe in detail a Density Functional Theory (DFT) fit to these thermodynamic properties: the

Aurel Bulgac; Michael McNeil Forbes; Piotr Magierski

We study the evolution of ionization fronts around the first protogalaxies by using high-resolution numerical cosmological (Lambda cold dark matter, ΛCDM, model) simulations and Monte Carlo radiative transfer methods. We present the numerical scheme in detail and show the results of test runs, from which we conclude that the scheme is both fast and accurate. As an example of interesting

The present work is divided into two stages: (1) using large numbers (several million) of accurate orbit integrations with the K-S regularization, probability distributions for changes in the orbital elements of comets during encounters with planets are evaluated; (2) these distributions are used in a Monte Carlo simulation scheme which follows the evolution of orbits under repeated close encounters.

J. Q. Zheng; M. J. Valtonen; S. Mikkola; J. J. Matese; P. G. Whitman; H. Rickman

The Monte Carlo program TRIM.SP (the sputtering version of TRIM) was used to determine sputtering yields and the energy and angular distributions of sputtered particles in physical (collisional) sputtering processes. The output is set up to distinguish between the contributions of primary and secondary knock-on atoms as caused by in- and outgoing incident ions, in order to get a better understanding of

An essential requirement for successful radiation therapy is that the discrepancies between dose distributions calculated at the treatment planning stage and those delivered to the patient are minimized. An important component in the treatment planning process is the accurate calculation of dose distributions. The most accurate way to do this is by Monte Carlo calculation of particle transport, first in

An optoacoustic device consisting of a YAG laser and a measurement cell with an attached piezotransducer was used to detect atmospheric microparticles as well as an artificial latex suspension. Monte Carlo simulation was used to predict the statistical parameters of the acoustic response.

We consider the application of sequential Monte Carlo (SMC) methodology to the problem of joint mobility tracking and soft handoff detection in cellular wireless communication networks based on pilot signal strength measurements. The dynamics of the system under consideration are described by a nonlinear state-space model. Mobility tracking involves an on-line estimation of the

The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high-order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest-order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as "local" piecewise polynomials such as finite element hat functions and higher-order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages over global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi's method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence.

William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin
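The core of the FET above, estimating expansion coefficients of a tally from the points visited by the random walk, can be sketched in a few lines. The following Python is an illustrative toy, not the authors' code: the triangular "fission site" density, the sample count, and the expansion order are all invented for the example.

```python
import random

def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) on [-1, 1] via the three-term recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def fet_tally(samples, order):
    """Estimate expansion coefficients a_n = (2n+1)/2 * E[P_n(x)] from MC sample
    points, much as a conventional tally accumulates scores during the random walk."""
    coeffs = []
    for n in range(order + 1):
        mean = sum(legendre(n, x) for x in samples) / len(samples)
        coeffs.append((2 * n + 1) / 2.0 * mean)
    return coeffs

def fet_density(coeffs, x):
    """Reconstruct the density estimate from the truncated expansion."""
    return sum(a * legendre(n, x) for n, a in enumerate(coeffs))

random.seed(1)
# Invented "fission site" distribution: a triangular density peaked at x = 0 on [-1, 1].
samples = [random.triangular(-1.0, 1.0, 0.0) for _ in range(50_000)]
coeffs = fet_tally(samples, order=6)
# The lowest-order coefficient is the flat (histogram) mode: exactly 0.5 here,
# since P_0 = 1 and the density is normalized on [-1, 1].
print(coeffs[0])  # -> 0.5
```

The reconstructed density is then a smooth, geometry-spanning representation of the source, which is the property the abstract credits for improved communication across loosely coupled systems.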

This work introduces an EGSnrc-based Monte Carlo (MC) beamlet dose distribution matrix into a direct aperture optimization (DAO) algorithm for IMRT inverse planning. The technique is referred to as Monte Carlo-direct aperture optimization (MC-DAO). The goal is to assess whether the combination of accurate Monte Carlo tissue inhomogeneity modeling and DAO inverse planning will improve the dose accuracy and treatment efficiency of treatment planning. Several authors have shown that the presence of small fields and/or inhomogeneous materials in IMRT treatment fields can cause dose calculation errors for algorithms that are unable to accurately model electronic disequilibrium. This issue may also affect the IMRT optimization process, because the dose calculation algorithm may not properly model difficult geometries such as targets close to low-density regions (lung, air, etc.). A clinical linear accelerator head is simulated using BEAMnrc (NRC, Canada). A novel in-house algorithm subdivides the resulting phase space into 2.5 x 5.0 mm^2 beamlets. Each beamlet is projected onto a patient-specific phantom. The beamlet dose contribution to each voxel in a structure-of-interest is calculated using DOSXYZnrc. The multileaf collimator (MLC) leaf positions are linked to the locations of the beamlet dose distributions. The MLC shapes are optimized using direct aperture optimization (DAO). A final Monte Carlo calculation with MLC modeling is used to compute the final dose distribution. Monte Carlo simulation can generate accurate beamlet dose distributions for traditionally difficult-to-calculate geometries, particularly for small fields crossing regions of tissue inhomogeneity. The introduction of DAO results in an additional improvement by increasing the treatment delivery efficiency. For the examples presented in this paper, the reduction in the total number of monitor units to deliver is approximately 33% compared with fluence-based optimization methods.

Bergman, Alanah M.; Bush, Karl; Milette, Marie-Pierre; Popescu, I. Antoniu; Otto, Karl; Duzenli, Cheryl [Department of Physics and Astronomy, University of British Columbia, Vancouver, British Columbia (Canada); Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia (Canada); Department of Physics and Astronomy, University of British Columbia, Vancouver, British Columbia (Canada); Medical Physics, BC Cancer Agency-Vancouver Centre, Vancouver, British Columbia (Canada)

The following paper contains details concerning the motivation for, implementation of, and performance of a Java-based fast Monte Carlo simulation for a detector designed to be used in the International Linear Collider. This simulation, presently included in the SLAC ILC group's org.lcsim package, reads in Standard Model or SUSY events in STDHEP file format, stochastically simulates the blurring in physics measurements caused by intrinsic detector error, and writes out an LCIO format file containing a set of final particles statistically similar to those that would have been found by a full Monte Carlo simulation. In addition to the reconstructed particles themselves, descriptions of the calorimeter hit clusters and tracks that these particles would have produced are also included in the LCIO output. These output files can then be put through various analysis codes in order to characterize the effectiveness of a hypothetical detector at extracting relevant physical information about an event. Such a tool is extremely useful in preliminary detector research and development, as full simulations are extremely cumbersome and taxing on processor resources; a fast, efficient Monte Carlo can facilitate, and even make possible, detector physics studies that would be impractical with the full simulation, by sacrificing what is in many cases inappropriate attention to detail for valuable gains in the time required for results.

Advances at SR sources in the generation of nanofocused beams with a high degree of transverse coherence call for effective techniques to simulate the propagation of partially coherent X-ray beams through complex optical systems, in order to characterize how coherence properties such as the mutual coherence function (MCF) are propagated to the exit plane. Here we present an approach based on Monte Carlo sampling of the Green function. A Gaussian Schell-model stochastic source with arbitrary spatial coherence is synthesized by means of the Gaussian copula statistical tool. The Green function is obtained by sampling Huygens-Fresnel waves with Monte Carlo methods and is used to propagate each source realization to the detector plane. The sampling is implemented with a modified Monte Carlo ray-tracing scheme in which the optical path of each generated ray is stored. This information is then used in the summation of the generated rays at the observation plane to account for coherence properties. The approach is used to simulate simple models of propagation in free space and with reflective and refractive optics.

This report describes the MCV (Monte Carlo-Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various

T. M. Sutton; F. B. Brown; F. G. Bischoff; D. B. MacMillan; C. L. Ellis; J. T. Ward; C. T. Ballinger; D. J. Kelly; L. Schindler

Monte Carlo techniques were employed to evaluate the point spread function (PSF) of scattered radiation in diagnostic radiology. The Monte Carlo procedure is described and shown to compare well with the Monte Carlo scatter analyses of other authors. The intensity and distribution of the PSF are described independently. The effects of object thickness, air gap, and beam spectra are examined. An

Four key components of Monte Carlo Library Least-Squares (MCLLS) analysis have been developed by the author. These include: a comprehensive and accurate Monte Carlo simulation code, CEARXRF5, with differential operators (DO) and coincidence sampling; a detector response function (DRF); an integrated Monte Carlo-Library Least-Squares graphical user interface (GUI) visualization system (MCLLSPro); and a new reproducible

We analyze here in some detail the derivation of the particle and Monte Carlo methods of plasma simulation, such as Particle in Cell (PIC), Monte Carlo (MC) and Particle in Cell / Monte Carlo (PIC/MC), from formal manipulation of transport equations.

The use of Monte Carlo modeling for short-period comets is discussed. Comparative time evolutions of the exact versus Monte Carlo mappings are presented. It is shown that the Monte Carlo method should be restricted to fully chaotic regimes where parasitic diffusion is insignificant.

Monte Carlo simulation of radiation transport is considered to be one of the most accurate methods of radiation therapy dose calculation. With the rapid development of computer technology, Monte Carlo-based treatment planning for radiation therapy is becoming practical. A basic requirement for Monte Carlo treatment planning is a detailed knowledge of the radiation beams from medical accelerators. A practical

A Monte Carlo learning and biasing technique that does its learning and biasing in the random number space rather than the physical phase space is described. The technique is probably applicable to all linear Monte Carlo problems, but no proof is provided here. Instead, the technique is illustrated with a simple Monte Carlo transport problem. Problems encountered, problems solved, and

Monte Carlo integration is often used for antialiasing in rendering processes. Due to low sampling rates, only expected error estimates can be stated, and the variance can be high. In this article quasi-Monte Carlo methods are presented, achieving a guaranteed upper error bound and a convergence rate essentially as fast as usual Monte Carlo.
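The contrast between pseudorandom and low-discrepancy sampling can be seen on a toy integrand. The sketch below is a generic illustration, not the rendering estimator from the article; the integrand, point count, and base choices are invented for the example.

```python
import random

def van_der_corput(i, base=2):
    """Radical inverse of the integer i in the given base: the i-th point of a
    one-dimensional low-discrepancy sequence in [0, 1)."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        x += digit / denom
    return x

def integrate(points):
    """Sample-mean estimate of the integral of f(x, y) = x*y over the unit square
    (the exact value is 1/4)."""
    return sum(x * y for x, y in points) / len(points)

n = 4096
random.seed(0)
mc_points = [(random.random(), random.random()) for _ in range(n)]
# Halton-type points: van der Corput sequences in base 2 and base 3 for the two axes.
qmc_points = [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, n + 1)]

mc_err = abs(integrate(mc_points) - 0.25)
qmc_err = abs(integrate(qmc_points) - 0.25)
print(mc_err, qmc_err)  # the low-discrepancy error is typically much the smaller
```

For smooth integrands like this one, the deterministic point set gives the guaranteed-error behavior the abstract refers to, while the pseudorandom estimate only carries a probabilistic error bar.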

Super-Monte Carlo (SMC) is a method of dose calculation for radiotherapy which combines analytical calculations and Monte Carlo electron transport. Analytical calculations are used where possible, such as in determining the photon interaction density, to decrease computation time. A Monte Carlo method is used for the electron transport in order to obtain high accuracy. To further speed

Monte Carlo (MC) is a well-known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has been shown to greatly accelerate MC for several applications, including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations, and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley-Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to that of MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
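The telescoping idea behind MLMC can be illustrated on a much simpler problem than the two-phase flow studied above. The sketch below is an assumption-laden toy, not the streamline solver: geometric Brownian motion with invented parameters, Euler-Maruyama time stepping, and fine/coarse paths coupled through shared Brownian increments, with fewer samples spent on the more expensive levels.

```python
import math
import random

def euler_pair(T, nf, mu, sigma, rng):
    """Simulate GBM dX = mu*X dt + sigma*X dW by Euler-Maruyama on a fine grid
    (nf steps) and a coarse grid (nf/2 steps) driven by the SAME Brownian
    increments; returns (fine X_T, coarse X_T)."""
    dt = T / nf
    xf = xc = 1.0
    for _ in range(nf // 2):
        dw1 = rng.gauss(0.0, math.sqrt(dt))
        dw2 = rng.gauss(0.0, math.sqrt(dt))
        xf += mu * xf * dt + sigma * xf * dw1                # two fine steps...
        xf += mu * xf * dt + sigma * xf * dw2
        xc += mu * xc * (2 * dt) + sigma * xc * (dw1 + dw2)  # ...one coarse step
    return xf, xc

def mlmc(T=1.0, mu=0.05, sigma=0.2, levels=4, n0=20_000, seed=7):
    """Telescoping MLMC estimate of E[X_T]; the exact value is exp(mu*T)."""
    rng = random.Random(seed)
    # Level 0: coarse 2-step Euler, many cheap samples.
    est = sum(euler_pair(T, 2, mu, sigma, rng)[0] for _ in range(n0)) / n0
    # Correction levels E[fine - coarse]: small variance, so few samples suffice.
    for lvl in range(1, levels):
        n = max(n0 // 4 ** lvl, 100)
        nf = 2 ** (lvl + 1)
        diff_sum = 0.0
        for _ in range(n):
            xf, xc = euler_pair(T, nf, mu, sigma, rng)
            diff_sum += xf - xc
        est += diff_sum / n
    return est

estimate = mlmc()
print(estimate)  # close to the exact answer exp(0.05), about 1.0513
```

The coupling through shared increments is what makes the correction terms cheap: the variance of the fine-minus-coarse difference shrinks with the time step, so sample counts can decay geometrically across levels.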

In this paper, we describe two types of effective events for describing heat transfer in a kinetic Monte Carlo (KMC) simulation that may involve stochastic chemical reactions. Simulations employing these events are referred to as KMC-TBT and KMC-PHE. In KMC-TBT, heat transfer is modeled as the stochastic transfer of "thermal bits" between adjacent grid points. In KMC-PHE, heat transfer is modeled by integrating the Poisson heat equation for a short time. Either approach is capable of capturing the time-dependent system behavior exactly. Both KMC-PHE and KMC-TBT are validated by simulating pure heat transfer in a rod and a square, and by modeling a heated desorption problem for which exact numerical results are available. KMC-PHE is much faster than KMC-TBT and is used to study the endothermic desorption of a lattice gas. Interesting findings from this study are reported.
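A minimal sketch of the "thermal bit" picture can make the event structure concrete. This is our own toy reconstruction, not the KMC-TBT code: the rod length, the bit count, the unit hop rate, and the reflecting walls are all invented for the example.

```python
import random

def kmc_thermal_bits(bits, steps, rng):
    """Rejection-free KMC on a 1-D rod: each event moves one 'thermal bit' from a
    site chosen with probability proportional to its bit count (its local energy)
    to a nearest neighbour. Walls are reflecting; the total bit count is conserved."""
    n = len(bits)
    t = 0.0
    for _ in range(steps):
        total = sum(bits)                # every bit hops at unit rate
        t += rng.expovariate(total)      # KMC time increment for the next event
        r = rng.uniform(0.0, total)      # pick the donor site by its total rate
        acc = 0.0
        for i, b in enumerate(bits):
            acc += b
            if r < acc:
                break
        j = max(0, min(n - 1, i + rng.choice((-1, 1))))
        bits[i] -= 1
        bits[j] += 1
    return t

rng = random.Random(42)
bits = [0] * 10
bits[0] = 1000                  # all heat starts at one end of the rod
kmc_thermal_bits(bits, 50_000, rng)
print(sum(bits))                # -> 1000: energy is conserved as heat spreads
```

Over many events the bits diffuse toward a uniform profile, which is the discrete analogue of the rod relaxing toward a flat temperature field.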

Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS), to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example in lung and bone. Methods and Materials: The MCVS consists of a graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system, to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS uses the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: The phase-space data of a 6-MV photon beam from a Varian Clinac unit were developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, in formats such as RTOG and DICOM-RT. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time improved in line with the increase in the number of central processing units (CPUs), at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.

Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka (Japan); Takegawa, Hidek [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Higashinari-ku, Osaka (Japan); Yamamoto, Tokihiro; Numasaki, Hodaka [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka (Japan); Teshima, Teruki [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka (Japan)], E-mail: teshima@sahs.med.osaka-u.ac.jp

This paper presents a study on the solution of the bivariate population balance equation (PBE) in batch particulate processes using stochastic Monte Carlo (MC) simulations. Numerical simulations were carried out over a wide range of particle aggregation and growth rate models. The PBE was numerically solved in terms of the number density function, n(V,x,t), for the prediction of

Current techniques for assessing the benefits of certain anthropogenic emission reductions are largely influenced by limitations in emissions data and atmospheric modeling capability, and by the highly variant nature of meteorology. These data and modeling limitations are likely to continue for the foreseeable future, during which time important strategic decisions need to be made. Statistical atmospheric quality data and apportionment techniques are used in Monte Carlo models to offset serious shortfalls in emissions, entrainment, topography, and statistical meteorology data and atmospheric modeling. This paper describes the evolution of Department of Energy (DOE) Monte Carlo-based assessment models and the development of statistical inputs. A companion paper describes the techniques used to develop the apportionment factors used in the assessment models.

McCART is a numerical procedure for solving the radiative transfer equation for light propagation through the atmosphere, developed especially to study the effect of the atmosphere on the response of hyperspectral sensors for remote sensing of the earth's surface. McCART is based on a single Monte Carlo simulation run for a reference layered plane non-absorbing atmosphere and a plane ground with uniform reflectance. The spectral response of the sensor for a given distribution of ground reflectance and for a specific profile of scattering and absorption properties of the atmosphere is obtained in a short time from the results of the Monte Carlo simulation, making use of scaling relationships and of symmetry properties. The response includes the effects of adjacent pixels. The results can be used to establish the limits of applicability of approximate algorithms for the processing and analysis of hyperspectral images. The algorithm can also be used to develop procedures of atmospheric compensation.

Nardino, Vanni; Del Bianco, Samuele; Martelli, Fabrizio; Guzzi, Donatella; Marcoionni, Paolo; Pippi, Ivan; Bruscaglioni, Piero; Zaccanti, Giovanni

Diffusion Monte Carlo (DMC) and Green's function Monte Carlo (GFMC) algorithms were implemented to obtain numerical approximations for the ground-state energies of systems of bosons in a harmonic trap potential. Gaussian pairwise particle interactions of the form V0 e^(-|r_i - r_j|^2 / r0^2) were implemented in the DMC code. These results were verified for small values of V0 via a first-order perturbation theory approximation, for which the N-particle matrix element evaluates to N^2 V0 (1 + 1/r0^2)^(-3/2). By obtaining the scattering length from the two-body potential in the perturbative regime (V0 << 1), ground-state energy results were compared to modern renormalized models by P. R. Johnson et al., New J. Phys. 11, 093022 (2009).
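A bare-bones DMC loop of the kind described can be written for a single particle in a harmonic trap, where the exact ground-state energy of 1/2 (in oscillator units) is known. Everything below is an invented illustration rather than the implementation from the record: the walker count, time step, and population-control gain are arbitrary choices.

```python
import math
import random

def dmc_harmonic(n_target=500, dt=0.01, steps=2000, seed=3):
    """Bare-bones diffusion Monte Carlo for one particle in V(x) = x^2/2, whose
    exact ground-state energy is 1/2. Walkers diffuse with variance dt and branch
    with weight exp(-(V - E_ref)*dt); E_ref is steered to hold the population
    steady, and its running average estimates the ground-state energy."""
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_target)]
    e_ref = 0.5
    trace = []
    for step in range(steps):
        new_walkers = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))         # diffusion step
            w = math.exp(-(0.5 * x * x - e_ref) * dt)  # branching weight
            n_copies = int(w + rng.random())           # stochastic rounding
            new_walkers.extend([x] * n_copies)
        walkers = new_walkers
        # Population control: nudge E_ref toward the zero-growth value.
        e_ref += 0.1 * math.log(n_target / max(len(walkers), 1))
        if step >= steps // 2:                         # average the second half
            trace.append(e_ref)
    return sum(trace) / len(trace)

e0 = dmc_harmonic()
print(round(e0, 2))  # should land near the exact value 0.5
```

Adding a pairwise Gaussian interaction as in the record amounts to summing V over walker pairs inside the branching weight; the diffusion and population-control machinery is unchanged.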

We have developed a full Monte Carlo simulation code to evaluate the electric signals produced by ionizing particles crossing a silicon strip detector (SSD). This simulation can be applied in the design stage of an SSD to optimize the detector parameters. All the physical processes leading to the generation of electron-hole (e-h) pairs in silicon have been taken into account. Induced current signals on the readout strips are evaluated by applying the Shockley-Ramo theorem to the charge carriers propagating inside the detector volume. A simulation of the readout electronics has also been implemented. The Monte Carlo results have been compared with experimental data taken with a 400 μm thick SSD.

Brigida, M.; Favuzzi, C.; Fusco, P.; Gargano, F.; Giglietto, N.; Giordano, F.; Loparco, F.; Marangelli, B.; Mazziotta, M. N.; Mirizzi, N.; Rainò, S.; Spinelli, P.
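The Shockley-Ramo evaluation mentioned above reduces, for the induced charge, to q times the change in the electrode's weighting potential along the carrier's path. A minimal sketch, assuming a simple parallel-plate weighting potential and an invented 300 μm thickness (the real strip weighting potential is more structured):

```python
def induced_charge(q, w, x1, x2):
    """Ramo's theorem: the charge induced on an electrode by a carrier drifting
    from x1 to x2 is q * (w(x2) - w(x1)), where w is that electrode's weighting
    potential (the potential with this electrode at 1 V and all others grounded)."""
    return q * (w(x2) - w(x1))

d = 300e-6                       # hypothetical detector thickness (300 um)

def w_plane(x):
    """Weighting potential of a parallel-plate readout electrode: linear in depth."""
    return x / d

q_e = -1.602e-19                 # electron charge in coulombs
# A carrier drifting across the full thickness induces its entire charge:
print(induced_charge(q_e, w_plane, 0.0, d) == q_e)  # -> True
```

In a strip detector the weighting potential of each strip is strongly peaked near that strip, which is why most of the signal is induced while carriers drift through the last portion of their path.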

Strongly coupled fermionic systems can support a variety of low-energy phenomena, giving rise to collective condensation, symmetry breaking and a rich phase structure. We explore the potential of worldline Monte Carlo methods for analyzing the effective action of fermionic systems at large flavor number N_f, using the Gross-Neveu model as an example. Since the worldline Monte Carlo approach does not require a discretized spacetime, fermion doubling problems are absent, and chiral symmetry can be manifestly maintained. As a particular advantage, fluctuations in general inhomogeneous condensates can conveniently be dealt with analytically or numerically, while the renormalization can always be performed uniquely and analytically. We also critically examine the limitations of a straightforward implementation of the algorithms, identifying potential convergence problems in the presence of fermionic zero modes as well as in the high-density region.

Dunne, Gerald; Gies, Holger; Klingmüller, Klaus; Langfeld, Kurt

A test phantom, including a wide range of mammographic tissue-equivalent materials and test details, was imaged on a digital mammographic system. In order to quantify the effect of scatter on the contrast obtained for the test details, calculations of the scatter-to-primary ratio (S/P) were made using a Monte Carlo simulation of the digital mammographic imaging chain, grid and test phantom. The results show that the S/P values corresponding to the imaging conditions used were in the range 0.084-0.126. Calculated and measured pixel values in different regions of the image were compared as a validation of the model and showed excellent agreement. The results indicate the potential of Monte Carlo methods in the optimisation of the image quality-patient dose trade-off, especially in the assessment of imaging conditions not available on standard mammographic units. PMID:15933151

Hunt, R A; Dance, D R; Pachoud, M; Alm Carlsson, G; Sandborg, M; Ullman, G; Verdun, F R

The amounts of change in the variance and in the efficiency of nonanalog Monte Carlo simulations under certain variations of the biasing parameters are important quantities when optimizing such simulations. A new approach, based on the differential operator sampling technique, is outlined to estimate the derivatives of variance and efficiency with respect to the biasing parameters; the same simulation constructed to solve the primary problem is used. An algorithm requiring the first- and higher-order derivatives of the natural logarithm of the second moment to predict minimum-variance biasing parameters is presented. Equations pertaining to the algorithm are derived and solved numerically for an exponentially transformed one-group slab transmission problem for various slab thicknesses and scattering probabilities. The results indicate that optimization of nonanalog simulations can be achieved, so that the present method will be useful in self-learning Monte Carlo schemes.

Sarkar, P.K.; Rief, H. [Commission of the European Communities, Ispra (Italy). Ispra Establishment

The Monte Carlo method has been increasingly used to solve particle transport problems. Statistical fluctuation from random sampling is the major limiting factor of its application. To obtain the desired precision, variance reduction techniques are indispensable for most practical problems. Among the various variance reduction techniques, the weight window method has proven to be one of the most general, powerful, and robust. The method is implemented in the current MCNP code. An importance map is estimated during a regular Monte Carlo run, and the map is then used in a subsequent run for splitting and Russian roulette games. The major drawback of this weight window method is a lack of user-friendliness. It normally requires that users divide the large geometric cells into smaller ones by introducing additional surfaces, to ensure an acceptable spatial resolution of the importance map. In this paper, we present a new weight window approach to overcome this drawback.

Liu, L. [Computalog USA, Fort Worth, TX (United States); Gardner, R.P. [North Carolina State Univ., Raleigh, NC (United States)
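The splitting and Russian roulette games that a weight window drives can be sketched as follows. This is a generic illustration, not MCNP's implementation; the window width, survival weight, and split cap are invented choices.

```python
import random

def apply_weight_window(particles, w_low, rng, survival=None, max_split=5):
    """Play splitting / Russian roulette against a weight window [w_low, w_high].
    Particles above the window are split; those below play roulette. The expected
    total weight is preserved, which keeps the tally unbiased."""
    w_high = 5.0 * w_low                                  # window width (a common choice)
    w_surv = survival if survival is not None else 2.5 * w_low
    out = []
    for w in particles:
        if w > w_high:                                    # split into n equal pieces
            n = min(int(w / w_surv) + 1, max_split)
            out.extend([w / n] * n)
        elif w < w_low:                                   # Russian roulette
            if rng.random() < w / w_surv:                 # (unbiased while w < w_surv)
                out.append(w_surv)                        # survivor at boosted weight
            # else: the particle is killed
        else:
            out.append(w)                                 # inside the window: untouched
    return out

rng = random.Random(0)
particles = [0.01] * 10_000 + [40.0] * 10                 # many light, a few heavy
before = sum(particles)
after = sum(apply_weight_window(particles, w_low=1.0, rng=rng))
print(round(after / before, 2))  # statistically close to 1.0: the game is unbiased
```

A production code would replay the game on split progeny that still sit above the window; a single pass is enough here to show the weight bookkeeping that keeps the population's expected weight fixed while evening out the per-particle weights.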

Monte Carlo calculations were investigated as a means of simulating the gamma-ray spectra of Pu. These simulated spectra will be used to develop and evaluate gamma-ray analysis techniques for various nondestructive measurements. Simulated spectra of calculational standards can be used for code intercomparisons, to understand systematic biases, and to estimate minimum detection levels of existing and proposed nondestructive analysis instruments. The capability to simulate gamma-ray spectra from HPGe detectors could significantly reduce the costs of preparing large numbers of real reference materials. MCNP was used for the Monte Carlo transport of the photons. Results from the MCNP calculations were folded in with a detector response function to produce a realistic spectrum. Plutonium spectrum peaks were produced with Lorentzian shapes for the x-rays and Gaussian distributions. The MGA code determined the Pu isotopes and specific power from this calculated spectrum, and the result was compared to a similar analysis of a measured spectrum.

Generalized Monte Carlo titration (GMCT) is a versatile suite of computer programs for the efficient simulation of complex macromolecular receptor systems such as proteins. The computational model of the system is based on a microstate description of the receptor and an average description of its surroundings in terms of chemical potentials. The receptor can be modeled in great detail, including conformational flexibility and many binding sites with multiple different forms that can bind different ligand types. Membrane-embedded systems can be modeled, including electrochemical potential gradients. Overall properties of the receptor, as well as properties of individual sites, can be studied with a variety of Monte Carlo (MC) simulation methods. Metropolis MC, Wang-Landau MC and efficient free energy calculation methods are included. GMCT is distributed as free open-source software at www.bisb.uni-bayreuth.de under the terms of the GNU Affero General Public License. PMID:22278916
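The Metropolis MC mentioned above can be illustrated on the smallest possible "receptor": a single site exchanging one ligand with a bath at chemical potential mu. The binding free energy and parameters below are invented; the point is only that the sampled occupancy reproduces the exact grand-canonical result.

```python
import math
import random

def metropolis_occupancy(dG, mu, beta=1.0, steps=500_000, seed=5):
    """Metropolis MC for a single binding site in contact with a ligand bath.
    The bound state has grand-canonical energy dG - mu; the empty state has 0.
    Returns the sampled average occupancy."""
    rng = random.Random(seed)
    state = 0                    # 0 = empty, 1 = ligand bound
    occupied = 0
    for _ in range(steps):
        dE = (dG - mu) if state == 0 else -(dG - mu)  # energy change of the flip
        if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
            state = 1 - state                         # accept the proposed flip
        occupied += state
    return occupied / steps

dG, mu = 1.0, 0.3                # invented binding free energy and chemical potential
mc = metropolis_occupancy(dG, mu)
exact = 1.0 / (1.0 + math.exp(dG - mu))               # exact titration curve value
print(abs(mc - exact) < 0.01)    # -> True
```

Sweeping mu and repeating the sampling traces out the familiar sigmoidal titration curve; a real receptor adds interacting sites and conformations, but the acceptance rule is the same.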

We introduce a many-body method that combines two powerful many-body techniques, viz., quantum Monte Carlo and coupled cluster theory. Coupled cluster wave functions are introduced as importance functions in a Monte Carlo method designed for the configuration interaction framework to provide rigorous upper bounds to the ground-state energy. We benchmark our method on the homogeneous electron gas in momentum space. The importance function used is the coupled cluster doubles wave function. We show that the computational resources required in our method scale polynomially with system size. Our energy upper bounds are in very good agreement with previous calculations of similar accuracy, and they can be systematically improved by including higher order excitations in the coupled cluster wave function.

Roggero, Alessandro; Mukherjee, Abhishek; Pederiva, Francesco

We discuss recent work with the diffusion quantum Monte Carlo (QMC) method in its application to molecular systems. The formal correspondence of the imaginary-time Schroedinger equation to a diffusion equation allows one to calculate quantum mechanical expectation values as Monte Carlo averages over an ensemble of random walks. We report work on atomic and molecular total energies, as well as properties including electron affinities, binding energies, reaction barriers, and moments of the electronic charge distribution. A brief discussion is given on how standard QMC must be modified for calculating properties. Calculated energies and properties are presented for a number of molecular systems, including He, F, F⁻, H2, N, and N2. Recent progress in extending the basic QMC approach to the calculation of ''analytic'' (as opposed to finite-difference) derivatives of the energy is presented, together with an H2 potential-energy curve obtained using analytic derivatives. 39 refs., 1 fig., 2 tabs.

Reynolds, P.J.; Barnett, R.N.; Hammond, B.L.; Lester, W.A. Jr.
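The imaginary-time diffusion picture described in this record can be illustrated with a minimal sketch for the 1-D harmonic oscillator (no importance sampling; the function name, walker counts, time step, and population-control recipe are illustrative assumptions, not the authors' implementation):

```python
import math
import random

def dmc_harmonic(n_walkers=400, n_steps=600, dt=0.05, seed=1):
    """Minimal diffusion Monte Carlo sketch for V(x) = x^2 / 2.

    Walkers diffuse in imaginary time; branching with weight
    exp(-(V - E_ref) * dt) steers the population toward the ground
    state.  Averaging V over the walkers (which converge to a
    psi_0-shaped distribution) estimates the ground-state energy,
    exactly 0.5 in these units.
    """
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_walkers)]
    e_ref = 0.5
    energies = []
    for step in range(n_steps):
        new_walkers = []
        v_sum = 0.0
        for x in walkers:
            x_new = x + rng.gauss(0.0, 1.0) * math.sqrt(dt)
            v = 0.5 * x_new * x_new
            v_sum += v
            # Branching: stochastic rounding of the branching weight.
            n_copies = int(math.exp(-(v - e_ref) * dt) + rng.random())
            new_walkers.extend([x_new] * n_copies)
        v_mean = v_sum / len(walkers)
        walkers = new_walkers or [rng.gauss(0.0, 1.0)]
        # Feedback on E_ref keeps the population near its target size.
        e_ref = v_mean + math.log(n_walkers / len(walkers)) / dt
        if step >= n_steps // 2:
            energies.append(v_mean)
    return sum(energies) / len(energies)
```

With these settings the late-time energy average lands close to the exact value 0.5, up to time-step bias and statistics.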

Knowledge of the microscopic distribution of interactions in irradiated matter is of fundamental importance for a mechanistic understanding of subsequent effects. This may be obtained by Monte Carlo codes which simulate event-by-event the transport of charged particles in matter. The development of such codes necessitates accurate interaction cross-sections for all the important collision processes. A semi-theoretical formalism has been developed and implemented in a Monte Carlo code which fairly accurately predicts energy-loss spectra for charged particle impact on water molecules. The extension of the formalism for establishing the necessary cross-sections for liquid/solid water (i.e. more realistic biomatter) is discussed and preliminary results are presented. PMID:11770524

Calculation of the bound states of heavy quark systems by a Hamiltonian formulation based on an expansion of the interaction into inverse powers of the quark mass is discussed. The potentials for the spin-orbit and spin-spin coupling between quark and antiquark, which are responsible for the fine and hyperfine splittings in heavy quark spectroscopy, are expressed as expectation values of Wilson loop factors with suitable insertions of chromomagnetic or chromoelectric fields. A Monte Carlo simulation has been used to evaluate the expectation values and, from them, the spin-dependent potentials. The Monte Carlo calculation is reported to show a long-range, non-perturbative component in the interaction. (LEW)

A multithreaded Monte Carlo code was used to study the properties of binary mixtures of hard hyperspheres in four dimensions. The ratios of the diameters of the hyperspheres examined were 0.4, 0.5, 0.6, and 0.8. Many total densities of the binary mixtures were investigated. The pair correlation functions and the equations of state were determined and compared with other simulation results and theoretical predictions. At lower diameter ratios the pair correlation functions of the mixture agree with the pair correlation function of a one component fluid at an appropriately scaled density. The theoretical results for the equation of state compare well to the Monte Carlo calculations for all but the highest densities studied. PMID:22239788

A gamma-ray scanning gauge was simulated with Monte Carlo to study the properties of gamma scanning gauges and to resolve the counts coming from a ²³⁵U source from those coming from a contaminant (²³²U) whose daughters emit high-energy gamma rays. The simulation has been used to infer the amount of the ²³²U contaminant in a ²³⁵U source and to select the best size for the NaI(Tl) detector crystal to minimize the effect of the contaminant. The results demonstrate that Monte Carlo simulation provides a systematic tool for designing a gauge with desired properties and for estimating properties of the gamma source from measured count rates.

Monte Carlo calculations have been carried out to obtain the x-ray spectra of various target-filter combinations for a mammography unit. Mammography is widely used to diagnose breast cancer. In addition to the Mo target with Mo filter combination, Rh/Rh, Mo/Rh, Mo/Al, Rh/Al, and W/Rh are also utilized. In this work Monte Carlo calculations, using the MCNP 4C code, were carried out to estimate the x-ray spectra produced when a beam of 28 keV electrons collided with Mo, Rh and W targets. The resulting x-ray spectra show characteristic x-rays and continuous bremsstrahlung. Spectra were also calculated including filters.

Vega-Carrillo, H. R.; Gonzalez, J. Ramirez; Manzanares-Acuna, E.; Hernandez-Davila, V. M.; Villasana, R. Hernandez; Mercado, G. A. [Universidad Autonoma de Zacatecas Apdo. Postal 336, 98000 Zacatecas, Zac. Mexico (Mexico)

Nuclear matter and light nuclei have been successfully calculated using the auxiliary field diffusion Monte Carlo method and a truncated two-body potential without spin-orbit interactions, the Argonne v6' potential. In order to have realistic calculations where the remaining parts of the interaction can be treated perturbatively, the Argonne v8' potential is commonly sampled in Monte Carlo methods. Here we will show that by using a pair-wise break-up of the Hamiltonian, along with additional auxiliary fields, the additional spin-orbit and isospin-dependent spin-orbit terms can be sampled. We will discuss calculations of the nuclear matter equation of state using this method. This work was supported by NSF grant PHY-1067777.

Structural and thermal properties of small lithium clusters are studied using ab initio-based Monte Carlo simulations. The ab initio scheme uses a Hartree-Fock/density functional treatment of the electronic structure combined with a jump-walking Monte Carlo sampling of nuclear configurations. Structural forms of Li₈ and Li₉⁺ clusters are obtained and their thermal properties analyzed in terms of probability distributions of the cluster potential energy, average potential energy and configurational heat capacity, all considered as a function of the cluster temperature. Details of the gradual evolution with temperature of the structural forms sampled are examined. Temperatures characterizing the onset of structural changes and isomer coexistence are identified for both clusters.

Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms, based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.

Kong Rong; Ambrose, Martin [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Spanier, Jerome [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Beckman Laser Institute and Medical Clinic, University of California, 1002 Health Science Road E., Irvine, CA 92612 (United States)], E-mail: jspanier@uci.edu

In this paper we study a recent generalization of the XY-model in two dimensions by using the Monte Carlo method. The vortex density, specific heat, energy and critical temperature are obtained. Some results are compared with approximate analytical calculations. The nature of the phase transition as the generalization parameter varies is discussed.

L. A. S. Mól; A. R. Pereira; H. Chamati; S. Romano
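The Metropolis sampling behind such a study can be sketched for the plain 2D XY model (lattice size, temperature, proposal width, and the function name are illustrative assumptions; the generalized model of the record is not implemented here):

```python
import math
import random

def xy_metropolis(L=8, T=0.5, sweeps=200, seed=2):
    """Metropolis sketch for the 2D XY model on an L x L lattice.

    Spins are angles theta; E = -sum over bonds of cos(theta_i - theta_j).
    Returns the mean energy per spin over the second half of the run.
    At low T the spin-wave estimate is E/N ~ -2 + T/2.
    """
    rng = random.Random(seed)
    theta = [[0.0] * L for _ in range(L)]  # ordered start

    def site_energy(i, j, t):
        # Energy of site (i, j) with its four neighbours if it had angle t.
        e = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            e -= math.cos(t - theta[(i + di) % L][(j + dj) % L])
        return e

    samples = []
    for sweep in range(sweeps):
        for i in range(L):
            for j in range(L):
                old = theta[i][j]
                new = old + rng.uniform(-1.0, 1.0)
                dE = site_energy(i, j, new) - site_energy(i, j, old)
                if dE <= 0 or rng.random() < math.exp(-dE / T):
                    theta[i][j] = new
        if sweep >= sweeps // 2:
            e_tot = sum(site_energy(i, j, theta[i][j])
                        for i in range(L) for j in range(L)) / 2.0  # bonds counted twice
            samples.append(e_tot / (L * L))
    return sum(samples) / len(samples)
```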

The worldline quantum Monte Carlo method is a computational technique for studying the properties of many-electron and quantum-spin systems. In this paper, we describe our efforts in developing an efficient implementation of this method for the massively-parallel Connection Machine CM-2. We discuss why one must look beyond the obvious parallelism in the method in order to reduce interprocessor communication and

The physical algorithms implemented in the latest release of the general-purpose Monte Carlo code penelope for the simulation of coupled electron–photon transport are briefly described. We discuss the mixed (class II) scheme used to transport intermediate- and high-energy electrons and positrons and, in particular, the approximations adopted to account for the energy dependence of the interaction cross-sections. The reliability of

J. Sempau; J. M. Fernández-Varea; E. Acosta; F. Salvat

The paper describes facilities and computation methods of the new RTS&T Monte Carlo code. This code performs simulations of three-dimensional electromagnetic shower development and low-energy neutron production and transport in accelerator and shielding components, with a calculation of the isotope transmutation problem. RTS&T is based on a compilation from ENDF/B-VI, JENDL-3, EAF, FENDL and EPNDL evaluated data

A. I. Blokhiny; I. I. Degtyarev; A. E. Lokhovitskii; M. A. Maslov; I. A. Yazynin

This paper describes tunes of QCD Monte Carlos for the LHC. It gives an overview of unconstrained model parameters relevant for the LHC and the measured observables used to constrain them. The most commonly used tunes to these observables are described and the remaining model uncertainty is addressed. The tuned MC models are validated against a large variety of data to check their universality.

We review the basic principles of quasi-Monte Carlo (QMC) methods, the randomizations that turn them into variance-reduction techniques, the integration error and variance bounds obtained in terms of QMC point set discrepancy and variation of the integrand, and the main classes of point set constructions: lattice rules, digital nets, and permutations in different bases. QMC methods are designed to estimate
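A minimal randomized QMC sketch in the spirit of the randomizations reviewed above, assuming a randomly shifted rank-1 lattice rule with a hypothetical (not optimized) Korobov-style generator:

```python
import random

def rqmc_mean(f, dim, n=1024, shifts=8, seed=3):
    """Randomly shifted rank-1 lattice rule (a basic randomized QMC sketch).

    Uses a Korobov-style generating vector z = (1, a, a^2, ...) mod n.
    Each random shift yields an unbiased estimate of the integral of f
    over [0,1]^dim; averaging over shifts reduces the variance and would
    also allow an empirical error estimate.
    """
    rng = random.Random(seed)
    a = 76  # hypothetical generator choice for illustration, not an optimized vector
    z = [pow(a, k, n) for k in range(dim)]
    estimates = []
    for _ in range(shifts):
        shift = [rng.random() for _ in range(dim)]
        total = 0.0
        for i in range(n):
            point = [((i * z[k]) / n + shift[k]) % 1.0 for k in range(dim)]
            total += f(point)
        estimates.append(total / n)
    return sum(estimates) / shifts
```

For smooth integrands such as the linear test function below, the lattice structure makes the error far smaller than plain Monte Carlo at the same sample count.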

We have implemented a Monte Carlo simulation of fission fragment statistical decay by sequential neutron emission. Within this approach, we calculate the center-of-mass and laboratory prompt neutron energy spectra as a function of the mass of the fission fragments and integrated over the whole mass distribution. We also assess the prompt neutron multiplicity distribution P(ν), both the average number of emitted

S. Lemaire; P. Talou; T. Kawano; M. B. Chadwick; D. G. Madland

Diffusion quantum Monte Carlo (DMC) calculations for transition metal (M) porphyrin complexes (MPo, M=Ni,Cu,Zn) are reported. We calculate the binding energies of the transition metal atoms to the porphin molecule. Our DMC results are in reasonable agreement with those obtained from density functional theory calculations using the B3LYP hybrid exchange-correlation functional. Our study shows that such calculations are feasible with the DMC method.

Koseki, Jun; Maezono, Ryo; Tachikawa, Masanori; Towler, M. D.; Needs, R. J.

A Monte Carlo simulation was done to optimise the setup of a new type of time-of-flight spectrometer, a thermal neutron Brillouin scattering (NBS) spectrometer. After collimation of the incident white neutron beam, five incident energies between 20 and 138 meV can be obtained from four different monochromator crystal faces. The monochromatic beam is split into nine pencil-like separate beams to improve

This thesis presents the results of recent investigations in the field of manganites using unbiased Monte Carlo techniques. It was found that the famous colossal magnetoresistance effect in the one-orbital model stems from the competition between charge-ordered insulating and ferromagnetic metallic states, which was also observed experimentally. The multiferroicity found in the E-phase of orthorhombic manganites is also discussed, as

Multiferroic bismuth ferrite (BiFeO3) exhibits both ferroelectricity and antiferromagnetism, possibly enabling a connection between the two effects in the same material. While its antiferromagnetic character is relatively well-understood, experimental measurements of the spontaneous polarization vary significantly over two orders of magnitude, from 0.06 C/m^2 to 1.50 C/m^2. We carry out accurate quantum Monte Carlo calculations to estimate the cohesion energy

The quality assurance of stereotactic radiotherapy and radiosurgery treatments requires the use of small-field dose measurements that can be experimentally challenging. This study used Monte Carlo simulations to establish that PAGAT dosimetry gel can be used to provide accurate, high-resolution, three-dimensional dose measurements of stereotactic radiotherapy fields. A small cylindrical container (4 cm height, 4.2 cm diameter) was filled with PAGAT gel, placed in the parietal region inside a CIRS head phantom and irradiated with a 12-field stereotactic radiotherapy plan. The resulting three-dimensional dose measurement was read out using an optical CT scanner and compared with the treatment planning prediction of the dose delivered to the gel during the treatment. A BEAMnrc/DOSXYZnrc simulation of this treatment was completed, to provide a standard against which the accuracy of the gel measurement could be gauged. The three-dimensional dose distributions obtained from Monte Carlo and from the gel measurement were found to be in better agreement with each other than with the dose distribution provided by the treatment planning system's pencil beam calculation. Both sets of data showed close agreement with the treatment planning system's dose distribution through the centre of the irradiated volume and substantial disagreement with the treatment planning system at the penumbrae. The Monte Carlo calculations and gel measurements both indicated that the treated volume was up to 3 mm narrower, with steeper penumbrae and more variable out-of-field dose, than predicted by the treatment planning system. The Monte Carlo simulations allowed the accuracy of the PAGAT gel dosimeter to be verified in this case, allowing PAGAT gel to be utilized in the measurement of dose from stereotactic and other radiotherapy treatments, with greater confidence in the future. PMID:22572565

Kairn, T; Taylor, M L; Crowe, S B; Dunn, L; Franich, R D; Kenny, J; Knight, R T; Trapp, J V

Experimental aspects of this work were originally presented at the Engineering and Physical Sciences in Medicine Conference (EPSM-ABEC), Melbourne, 2010.

Monte Carlo simulations are performed on the antiferromagnetic fcc Ising model, which is relevant to the binary alloy CuAu. The model exhibits a first-order ordering transition as a function of temperature. The lattice free energy of the model is determined for all temperatures. By matching free energies of the ordered and disordered phases, the transition temperature is determined to be T_t = 1.736 J, where J is the coupling constant of the model. The free energy as determined by series expansion and the Kikuchi cluster variation method is compared with the Monte Carlo results. These methods work well for the ordered phase, but not for the disordered phase. A determination of the pair correlation in the disordered phase along the {100} direction indicates a correlation length of approximately 2.5a at the phase transition. The correlation length exhibits mean-field-like temperature dependence. The Cowley-Warren short-range order parameters are determined as a function of temperature for the first twelve nearest-neighbor shells of this model. The Monte Carlo results are used to determine the free parameter in a mean-field-like class of theories described by Clapp and Moss. The ability of these theories to predict ratios between pair potentials is tested with these results. In addition, evidence of a region of heterophase fluctuations is presented, in agreement with x-ray diffuse scattering measurements on Cu3Au. The growth of order following a rapid quench from disorder is studied by means of a dynamic Monte Carlo simulation. The results compare favorably with the Landau theory proposed by Chan for temperatures near the first-order phase transition. For lower temperatures, the results are in agreement with the theories of Lifshitz and of Allen and Chan. In the intermediate temperature range, our extension of Chan's theory is able to explain our simulation results and recent experimental results.
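The equilibrium sampling underlying such lattice studies can be illustrated with a minimal sketch: a Metropolis simulation of the 2D square-lattice Ising ferromagnet. This is a deliberately simplified analogue, not the fcc antiferromagnet of the record; lattice size, temperature, and sweep counts are arbitrary choices.

```python
import math
import random

def ising_metropolis(L=8, T=1.5, sweeps=400, seed=6):
    """Metropolis sketch for the 2D Ising ferromagnet (J = 1).

    Returns the mean |magnetization| per spin over the second half of
    the run.  T = 1.5 is below the exact critical temperature
    Tc ~ 2.269, so the system stays strongly ordered.
    """
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]  # ordered start
    mags = []
    for sweep in range(sweeps):
        for i in range(L):
            for j in range(L):
                nn = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                      + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
                dE = 2 * spin[i][j] * nn  # energy change of a single flip
                if dE <= 0 or rng.random() < math.exp(-dE / T):
                    spin[i][j] = -spin[i][j]
        if sweep >= sweeps // 2:
            m = sum(sum(row) for row in spin) / (L * L)
            mags.append(abs(m))
    return sum(mags) / len(mags)
```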

Computed tomography (CT) images of patients with hip prostheses are severely degraded by metal streaking artefacts. The low image quality makes organ contouring more difficult and can result in large dose calculation errors when Monte Carlo (MC) techniques are used. In this work, the extent of streaking artefacts produced by three common hip prosthesis materials (Ti-alloy, stainless steel, and Co-Cr-Mo

M. Bazalova; C. Coolens; F. Cury; P. Childs; L. Beaulieu; F. Verhaegen

We study the diffusion process via a vacancy mechanism in an A-B binary alloy with B2 order. The starting point of our Monte Carlo simulations was experiments done recently by Mössbauer spectroscopy and nuclear resonant scattering on stoichiometric B2-ordered FeAl, which yielded a nonobvious jump model for the Fe atoms, namely a priority of effective jumps to third-nearest-neighbor sites

A part of a long DNA chain was driven into a confined environment by an electric field, while the rest remained in the higher-entropy region. Upon removal of the field, the chain recoils to the higher-entropy region spontaneously. This dynamical process was investigated by Monte Carlo simulations. The simulation reproduces the experimentally observed phenomenon that the recoil of the DNA chain

Yong-jun Xie; Hong-tao Yu; Hai-yang Yang; Yao Wang; Xing-yuan Zhang; Qin-wei Shi

In this review, we describe applications of the pruned-enriched Rosenbluth method (PERM), a sequential Monte Carlo algorithm with resampling, to various problems in polymer physics. PERM produces samples according to any given prescribed weight distribution, by growing configurations step by step with controlled bias, and correcting “bad” configurations by “population control”. The latter is implemented, in contrast to other population
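The step-by-step growth that PERM builds on can be sketched with plain Rosenbluth sampling of self-avoiding walks, i.e. without the enrichment and pruning that PERM adds (lattice choice, walk lengths, and sample sizes here are illustrative assumptions):

```python
import random

def rosenbluth_saw(n_steps, n_samples=2000, seed=4):
    """Rosenbluth-style sampling of 2D self-avoiding walks (SAWs).

    Each walk grows step by step, choosing uniformly among the empty
    neighbouring sites; the product of the counts of available sites is
    the Rosenbluth weight W.  The mean of W over samples is an unbiased
    estimate of the number of n-step SAWs.  (PERM adds population
    control on top of exactly this growth scheme: cloning high-W and
    pruning low-W configurations.)
    """
    rng = random.Random(seed)
    weights = []
    for _ in range(n_samples):
        pos, visited, w = (0, 0), {(0, 0)}, 1.0
        for _ in range(n_steps):
            x, y = pos
            free = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            free = [p for p in free if p not in visited]
            if not free:  # dead end: the walk dies with weight 0
                w = 0.0
                break
            w *= len(free)
            pos = rng.choice(free)
            visited.add(pos)
        weights.append(w)
    return sum(weights) / n_samples
```

On the square lattice there are exactly 12 two-step and 100 four-step self-avoiding walks, which gives a direct check of the estimator.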

A detailed description of the photofission process at intermediate energies (200 to 1000 MeV) is presented. The study of the reaction is performed by a Monte Carlo method which allows the investigation of properties of residual nuclei and fissioning nuclei. The information obtained indicates that multifragmentation is negligible at the photon energies studied here, and that symmetrical fission is dominant. Energy and mass distributions of residual and fissioning nuclei were calculated.

Andrade-II, E.; Freitas, E.; Garcia, F. [Universidade Estadual de Santa Cruz-UESC, CEP 45662-000, Ilheus (Brazil); Tavares, O. A. P.; Duarte, S. B. [Centro Brasileiro de Pesquisas Fisicas-CBPF/MCT, 22290-180, Rio de Janeiro (Brazil); Deppman, A. [Instituto de Fisica da Universidade de Sao Paulo, P.O. Box 66318, CEP 05315-970, Sao Paulo (Brazil)

The present Monte Carlo simulation of Oort cloud comet evolution between 10,000 and 40,000 AU encompasses both Galactic tidal field and passing star perturbations. Over 40 million comets are found to be generated over 250 million years, while the figure for 4.5 billion years is 1.5 million comets. Over the age of the solar system, it is projected that

There is a very large number of small bodies in the Solar System and their orbits are varied and complicated. Some types of orbits and events are so rare that they occur in numerical simulations only when millions or billions of orbits have been calculated. In order to study these orbits or events, an efficient Monte Carlo simulation is useful.

M. J. Valtonen; J. Q. Zheng; S. Mikkola; P. Nurmi; H. Rickman

We propose a method to construct a proposal density for the Metropolis-Hastings algorithm in Markov chain Monte Carlo (MCMC) simulations of the GARCH model. The proposal density is constructed adaptively by using the data sampled by the MCMC method itself. It turns out that autocorrelations between the data generated with our adaptive proposal density are greatly reduced. Thus we conclude that the adaptive construction method is very efficient and works well for MCMC simulations of the GARCH model.
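The adaptive idea can be sketched in a much simpler 1-D setting than the GARCH model of the record: a random-walk Metropolis sampler whose proposal scale is refreshed from the chain's own output. The target density, adaptation schedule, 2.38 scaling factor, and function name are illustrative assumptions, not the authors' construction.

```python
import math
import random
import statistics

def adaptive_metropolis(log_target, n=20000, seed=5):
    """Random-walk Metropolis whose proposal scale adapts to the chain.

    The proposal is Gaussian with a standard deviation periodically
    refreshed from the empirical spread of the samples drawn so far,
    scaled by 2.38 (the classical near-optimal factor in 1-D).  Tuning
    the proposal to the target in this way is what reduces the
    autocorrelations that a badly scaled random walk would produce.
    """
    rng = random.Random(seed)
    x, lp = 0.0, log_target(0.0)
    scale = 1.0
    chain = []
    for i in range(n):
        y = x + rng.gauss(0.0, scale)
        lq = log_target(y)
        if rng.random() < math.exp(min(0.0, lq - lp)):
            x, lp = y, lq
        chain.append(x)
        if i % 500 == 499:  # periodically refresh the proposal scale
            scale = 2.38 * statistics.pstdev(chain[i // 2:]) or 1.0
    return chain
```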

The common variables in both the TRIM and EVOLVE Monte Carlo codes were identified and parameterized by grouping them into specific ranges of interest, and simple statistical tests were performed on them. In comparing the range distributions, the most significant differences between the two codes were found in the selection procedures and the ranges of the impact parameter and path length.
Also at: Department of Physics, Georgetown University, Washington, DC 20057, USA.

We investigate the one-orbital model for manganites with cooperative phonons and superexchange coupling JAF via large-scale Monte Carlo (MC) simulations. Results for two orbitals are also briefly discussed. Focusing on the realistic electronic density n=0.75, a regime of competition between ferromagnetic (FM) metallic and charge-ordered (CO) insulating states was identified in the finite-temperature phase diagram. In the vicinity of

Growth modes of thin films on a solid surface are studied using a solid-on-solid model. A Monte Carlo technique is used to simulate adsorption, desorption, and diffusion processes at the solid-vapour interface. The emphasis is on the Stranski-Krastanov growth mode, which is a combination of the Frank-van der Merwe and Volmer-Weber growth modes. Inclusion of an anisotropy factor into the attractive

A three-dimensional Monte Carlo model has been developed for accurate boiling water reactor (BWR) neutron and gamma fluence calculations using continuous-energy MCNP. Unlike earlier light water reactor models, which were run in the fixed-source mode with simplified geometries, this model is run in the criticality mode with actual geometry and material composition. The MCNP fast flux shapes in the downcomer region are qualified against measurements and are compared with results from two-dimensional deterministic DORT calculations.

Sitaraman, S.; Rogers, D.R. [General Electric Co., San Jose, CA (United States); Kruger, R.M. [General Electric Co., Pleasanton, CA (United States)

A Monte Carlo model of SiOx layer annealing and growth was developed. Model layers of stoichiometric SiO2 with initially randomly distributed components on a partially filled diamond-like lattice tend toward SiO4 tetrahedron formation during high-temperature annealing. Chains of tetrahedrons are found to be connected by oxygen atoms. The fraction of properly coordinated atoms was up to 70%. Presence of excess silicon in SiOx

N. A. Gladkih; Nataliya L. Shwartz; Zoya Sh. Yanovitskaja; A. V. Zverev

We investigate a model of a two-dimensional system of charged particles and vacancies. The particles interact with isotropic forces, attractive or repulsive, with nearest and next-nearest neighbors, and can move through the lattice. Using Monte Carlo simulations we determine the mean-square displacement as a function of time, temperature, coverage, and range of interactions. We also estimate the tracer diffusion coefficient and

There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the most well-known and has undergone extensive use for many years. MORSE-SGC was originally developed in about 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having

Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines

We describe a Markov chain Monte Carlo process using a Metropolis algorithm to derive Differential Emission Measures (DEMs) and element abundances from EUV line fluxes. This technique allows us to relax the smoothness constraint generally imposed on DEMs and also to determine confidence bounds on the computed values. We apply this method to solar spectral line data from SERTS (Solar EUV Rocket Telescope and Spectrograph).

A Monte Carlo method for solving highly angle-dependent streaming problems is described. The method uses a DXTRAN-like angle biasing scheme, a space-angle weight window to reduce the weight fluctuations introduced by the angle biasing, and a space-angle importance generator to set parameters for the space-angle weight window. Particle leakage through a doubly-bent duct is calculated to demonstrate the method's use.
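The weight-window game used to tame weight fluctuations can be sketched generically; the window bounds, survival-weight choice, and function name below are illustrative assumptions, not the record's specific space-angle scheme.

```python
import math
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Weight-window sketch: split heavy particles, roulette light ones.

    Returns the list of surviving particle weights.  The expected total
    weight is preserved in every branch, which keeps the game unbiased
    while suppressing the weight fluctuations that biasing introduces.
    """
    w_survive = math.sqrt(w_low * w_high)  # a common choice of survival weight
    if weight > w_high:  # split into several lighter copies
        n = int(weight / w_survive + rng.random())
        return [weight / n] * n if n > 0 else []
    if weight < w_low:   # Russian roulette
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []
    return [weight]      # inside the window: leave the particle alone
```

Splitting conserves the total weight exactly, and roulette conserves it in expectation, which is easy to verify by averaging over many trials.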

Iron-chromium alloys are characterised by a complex phase diagram, the small negative heat of formation at low Cr concentrations in the bcc α-structure of Fe, and by the inversion of the short-range order parameter. We present Monte Carlo simulations of the Fe-Cr alloy based on a cluster expansion (CE) approximation for the enthalpy of the system. The set of cluster expansion coefficients is validated

Mikhail Yu. Lavrentiev; Duc Nguyen-Manh; Ralf Drautz; Peter Klaver; Sergei L. Dudarev

We study lattice g₀φ⁴ field theory for all g₀ and fixed renormalized mass M in one and two dimensions using Monte Carlo techniques. We calculate the dimensionless renormalized coupling constant ĝ_R = g_R/M^(4−d), where d is the dimension of space-time, at fixed small values of the lattice spacing a for various g₀ and lattice sizes. Our results are in quantitative

The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.

A Monte Carlo simulation method is presented for calculating line shapes for charged-particle analyzers with cylindrical symmetry. Either isotropic or cosine angular distributions of charged-particle emission can be simulated. Application of this technique is demonstrated by simulation of the line shape exhibited by the Helmer planar-retarding-grid analyzer. Ray tracing is used to determine the origin of line-shape asymmetry, new entrance

A continuous random walk procedure for solving some elliptic partial differential equations at a single point is generalized to estimate the solution everywhere. The Monte Carlo method described here is exact (except at the boundary) in the sense that the only error is the statistical sampling error, which tends to zero as the sample size increases. A method to estimate the error introduced at the boundary is provided so that the boundary error can always be made less than the statistical error.
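A concrete instance of such a continuous random walk is the "Walk on Spheres" procedure; here is a minimal sketch for the Laplace equation on the unit disk (the boundary function, tolerance eps, sample counts, and function name are illustrative assumptions):

```python
import math
import random

def walk_on_spheres(x, y, boundary_value, eps=1e-3, n_walks=4000, seed=7):
    """Walk-on-spheres sketch for the Laplace equation on the unit disk.

    From the current point, jump to a uniformly random point on the
    largest circle contained in the domain; repeat until within eps of
    the boundary, then score the boundary value at the nearest boundary
    point.  The average over walks estimates the harmonic function at
    (x, y); the only bias is the O(eps) boundary error mentioned above.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while True:
            r = math.hypot(px, py)
            d = 1.0 - r  # distance to the unit-circle boundary
            if d < eps:
                total += boundary_value(px / r, py / r)  # project to boundary
                break
            phi = rng.uniform(0.0, 2.0 * math.pi)
            px += d * math.cos(phi)
            py += d * math.sin(phi)
    return total / n_walks
```

For the boundary data g(x, y) = x, the harmonic extension is u(x, y) = x, so the estimate at an interior point can be checked against its first coordinate.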

Quantum Monte Carlo (QMC) calculations using the variational (VMC) and diffusion (DMC) methods are performed on a NiO crystal with Gaussian basis sets and pseudopotentials. We report calculations of energy-volume plots obtained by VMC and DMC with 2×2×2 (16 ions) and 4×4×4 (128 ions) simulation cells. In order to obtain smooth dependences, careful optimizations of the basis set at each lattice

We present the Monte Carlo simulation program MCSEM, developed at the Physikalisch-Technische Bundesanstalt (PTB), Germany, for the simulation of scanning electron microscopy (SEM) image formation at arbitrary specimen structures (e.g. layout structures of wafers or photomasks). The program simulates the different stages of the SEM image formation process: the probe forming, the probe-sample interaction and the detection process. A modular

The requirements for the Near Infrared Camera (NIRCam) instrument for NASA's James Webb Space Telescope (JWST) specify that the instrument be aligned and operate at cryogenic temperatures. The error budget for the integration of the optical components was analyzed using a multi-parameter Monte Carlo simulation with compensators. Results from these simulations were used to revise the alignment process and error budget. This paper presents an overview of this analysis.

After some general remarks on the efficiency of various Monte Carlo algorithms for gauge theories, the calculation of the asymptotic freedom scales of SU(2) and SU(3) gauge theories in the absence of quarks is discussed. There are large numerical factors between these scales when defined in terms of the bare coupling of the lattice theory or when defined in terms of the physical force between external sources.

This paper describes the Monte Carlo simulation developed specifically for the VCS experiments below pion threshold that have been performed at MAMI and JLab. This simulation generates events according to the (Bethe-Heitler + Born) cross-section behavior and takes into account all relevant resolution-deteriorating effects. It determines the 'effective' solid angle for the various experimental settings, which is used for the precise determination of photon electroproduction absolute cross sections.

P. Janssens; L. Van Hoorebeke; H. Fonvieille; N. D'Hose; P. Y. Bertin; I. Bensafa; N. Degrande; M. Distler; R. Di Salvo; L. Doria; J. M. Friedrich; J. Friedrich; Ch. Hyde-Wright; S. Jaminion; S. Kerhoas; G. Laveissiere; D. Lhuillier; D. Marchand; H. Merkel; J. Roche; G. Tamas; M. Vanderhaeghen; R. Van de Vyver; J. Van de Wiele; Th. Walcher

The purpose of the study was to construct flexible and convenient routines for Monte Carlo simulation of electromagnetic cascade development in media with different Z and over a wide E0/E interval, including the ultrahigh-energy region where the Landau-Pomeranchuk-Migdal effect is taken into account. Messel's method is used to realize a model of electron-photon showers. This method is employed with the following processes

Over the years, the International Commission on Radiological Protection (ICRP) and other organisations have formulated recommendations regarding uncertainty in occupational dosimetry. The most practical and widely accepted recommendations are the trumpet curves. To check whether routine dosemeters comply with them, a Technical Report on uncertainties issued by the International Electrotechnical Commission (IEC) can be used. In this report, the analytical method is applied to assess the uncertainty of a dosemeter fulfilling an IEC standard. Alternatively, the Monte Carlo method can be used to assess the uncertainty. In this work, a direct comparison of the analytical and the Monte Carlo methods is performed using the same input data. It turns out that the analytical method generally overestimates the uncertainty by about 10-30 %, so its results often do not comply with the recommendations of the ICRP regarding uncertainty. The results of the more realistic uncertainty evaluation using the Monte Carlo method usually do comply with the recommendations of the ICRP. This is confirmed by results seen in regular tests in Germany.
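The comparison can be reproduced in miniature: propagate the same relative input uncertainties once with the first-order analytical (GUM-style) formula and once by Monte Carlo sampling. The three influence quantities and their values here are hypothetical, and in this simple multiplicative model the two methods agree closely; the 10-30 % overestimate reported above arises for the more complicated response functions of real dosemeters:

```python
import math
import random
import statistics

random.seed(2)

def analytical_uncertainty(rel_us):
    """First-order (GUM-style) propagation: combine the relative standard
    uncertainties of independent multiplicative factors in quadrature."""
    return math.sqrt(sum(u * u for u in rel_us))

def monte_carlo_uncertainty(rel_us, n=100000):
    """Propagate the same inputs by sampling: draw each factor from a
    normal distribution and take the relative spread of the product."""
    samples = []
    for _ in range(n):
        dose = 1.0
        for u in rel_us:
            dose *= random.gauss(1.0, u)
        samples.append(dose)
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical influence quantities: calibration, energy response, fading.
rel_uncertainties = [0.05, 0.08, 0.03]
u_analytical = analytical_uncertainty(rel_uncertainties)
u_mc = monte_carlo_uncertainty(rel_uncertainties)
```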

Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms, such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the standard deviation for an assay, along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by comparing our Monte Carlo error estimates with the error distributions of actual replicate measurements and of simulated measurements. We found that the standard deviation estimated this way quickly converges, on average, to an accurate value and has a predictable error distribution similar to that of N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.
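A minimal sketch of the replicate-randomization idea: Poisson-randomize the measured counts N times and push each replicate through the analysis algorithm. The four-channel data and the background-subtraction "analysis" below are invented stand-ins for the TGS/CTEN reconstructions:

```python
import math
import random
import statistics

random.seed(4)

def poisson(mean):
    """Poisson sampler (Knuth's algorithm); adequate for modest means."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def replicate_error_estimate(counts, analysis, n_replicates=500):
    """Randomize the measured counts into n_replicates Poisson replicate
    sets, re-run the analysis on each, and report mean and std. dev."""
    results = []
    for _ in range(n_replicates):
        replicate = [poisson(c) for c in counts]   # counting statistics
        results.append(analysis(replicate))
    return statistics.mean(results), statistics.stdev(results)

# Toy 'assay': summed counts minus a known background of 400.
measured = [120, 95, 110, 130]
mass, sigma = replicate_error_estimate(measured, lambda c: sum(c) - 400)
```

Here sigma converges toward sqrt(455) ≈ 21, the counting-statistics spread that N real repeat measurements would show; the cost is the N extra analysis runs noted above.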

Nuclear superconductivity is a central part of quantum many-body dynamics. In mesoscopic systems such as atomic nuclei, this phenomenon is influenced by shell effects, mean-field deformation, particle decay, and other collective and chaotic components of nucleon motion. The ability to find an exact solution for these pairing correlations is of particular importance. In this presentation we develop and investigate the effectiveness of different methods of attacking the nucleon pairing problem in nuclei, concentrating on the Monte Carlo approach. We review configuration-space Monte Carlo techniques, the Suzuki-Trotter breakup of the time-evolution operator, and the treatment of the pairing problem with non-constant matrix elements. The quasi-spin symmetry allows the pairing problem to be mapped onto a problem of interacting spins, which in turn can be solved using a Monte Carlo approach. The algorithms are tested for convergence to the true ground state of model systems, and the calculated ground-state energies are compared to those found by exact diagonalization. The possibility of including other, non-pairing interaction components of the Hamiltonian is also investigated.

The concept of the Virtual Monte Carlo (VMC) allows one to use different Monte Carlo programs to simulate particle physics detectors without changing the geometry definition or the detector-response simulation. In this context, to study the reconstruction capabilities of a detector, a tool that extrapolates the track parameters and their associated errors due to the magnetic field, straggling in energy loss, and Coulomb multiple scattering plays a central role: GEANE is an old program, written in Fortran 15 years ago, that performs this task through dense materials and is still successfully used by many modern experiments in its native form. Among its features are the capability to read the geometry and the magnetic field map directly from the simulation and to use different track representations. In this work we have 'rediscovered' GEANE in the context of the Virtual Monte Carlo: we show how GEANE has been integrated into the FairROOT framework, firmly based on the VMC, keeping the old features within the new ROOT geometry modeler. Moreover, new features have been added to GEANE that allow one to use it also for low-density materials, i.e. for gaseous detectors; preliminary results will be shown and discussed. The tool is now used by the PANDA and CBM collaborations at GSI as the first step for the global reconstruction algorithms, based on a Kalman filter that is currently under development.

Fontana, A.; Genova, P.; Lavezzi, L.; Panzarasa, A.; Rotondi, A.; Al-Turany, M.; Bertini, D.

The number of tallies performed in a given Monte Carlo calculation is limited, in most modern Monte Carlo codes, by the amount of memory that can be allocated on a single processor. By using domain decomposition, the calculation is instead limited by the total amount of memory available on all processors, allowing significantly more tallies to be performed. However, decomposing the problem geometry introduces significant issues with the way tally statistics are conventionally calculated. To deal with the issue of calculating tally variances in domain-decomposed environments for the Shift hybrid Monte Carlo code, this paper presents an alternative approach for reactor scenarios in which it is assumed that once a particle leaves a domain, it does not re-enter it. Particles that do re-enter the domain are instead treated as separate, independent histories. This assumption introduces a bias that inevitably leads to under-prediction of the calculated variances for tallies within a few mean free paths of the domain boundaries. However, through the use of different decomposition strategies, primarily overlapping domains, the negative effects of this assumption can be reduced to within reasonable levels.

Mervin, Brenden T. (ORNL); Maldonado, G. Ivan (University of Tennessee, Knoxville); Mosher, Scott W. (ORNL); Evans, Thomas M. (ORNL); Wagner, John C. (ORNL)

An angular redistribution function for electron scattering based on Goudsmit-Saunderson theory has been implemented in a Monte Carlo electron transport code in the form of a scattering matrix that the authors term SMART (simulating many accumulative Rutherford trajectories). These matrices were originally developed for use with discrete-ordinates electron transport codes. An essential characteristic of this scattering theory is a large effective mean free path for electrons, much larger in fact than the true single-collision mean free path. When this theory is applied to single-collision analog Monte Carlo calculations, excellent results are obtained for the principal quantities of interest: transmission and reflection spectra, and energy deposition. A derivation of the SMART scattering matrix is presented, using the method of weighted residuals to obtain the discretized form of the Spencer-Lewis equation for electron transport. Results of Monte Carlo calculations for electron transport in aluminum slabs, for both beam-source and isotropic-source configurations, are given and compared with similar benchmark calculations made with the TIGER code series.

The cycle-to-cycle correlation (autocorrelation) in Monte Carlo criticality calculations is analyzed in terms of the dominance ratio of fission kernels. The mathematical analysis focuses on how the eigenfunctions of a fission kernel decay when operated on by the cycle-to-cycle error-propagation operator of the Monte Carlo stationary source distribution. The analytical results can be summarized as follows: when the dominance ratio of a fission kernel is close to unity, the autocorrelation of the k-effective tallies is weak and may be negligible, while the autocorrelation of the source distribution is strong and decays slowly. The practical implication is that when one analyzes a critical reactor with a large dominance ratio by Monte Carlo methods, the confidence-interval estimation of the fission rate and other quantities at individual locations must account for the strong autocorrelation. Numerical results are presented for sample problems with dominance ratios of 0.85-0.99, where Shannon and relative entropies are utilized to exclude the influence of initial nonstationarity.

Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Kornreich, Drew E. (Los Alamos National Laboratory, United States)
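The practical implication, i.e. that confidence intervals computed as if cycles were independent come out too small, can be illustrated with a toy AR(1) tally sequence, where the lag-one coefficient rho plays the role of a large dominance ratio. This stand-in model and the batching fix are our illustration, not the paper's eigenfunction analysis:

```python
import random
import statistics

random.seed(7)

def ar1_series(rho, n):
    """AR(1) sequence; rho mimics a large dominance ratio driving
    cycle-to-cycle correlation of the tallies."""
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

def naive_and_batched_sem(data, batch=50):
    """Naive standard error treats cycles as independent; batching the
    cycles gives a (nearly) unbiased estimate under autocorrelation."""
    naive = statistics.stdev(data) / len(data) ** 0.5
    means = [statistics.mean(data[i:i + batch])
             for i in range(0, len(data), batch)]
    batched = statistics.stdev(means) / len(means) ** 0.5
    return naive, batched

tallies = ar1_series(rho=0.9, n=10000)
naive_sem, batched_sem = naive_and_batched_sem(tallies)
```

With rho = 0.9 the batched standard error comes out several times larger than the naive one, which is the kind of correction the abstract says individual-location tallies require.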

Our recent understanding of the entanglement properties of ground states of many-body quantum systems has led to the development of a variety of variational wavefunctions based on tensor networks. The so-called bond dimension of the tensor network, χ, sets both the limit on the amount of entanglement allowed in the ansatz and the computational power required to contract the tensor network. Because the cost scales very strongly with χ in higher dimensions, these approaches are currently challenging in 2D and currently unusable in 3D. We present our efforts in combining Monte Carlo techniques with tensor networks to ease the computational bottleneck. Classical Monte Carlo sampling can be used to estimate the contracted value of the network, allowing one to sample expectation values and vary parameters to optimize ground states. In particular, we show that a perfect sampling scheme can be efficient for tensor networks that are also unitary quantum circuits. We apply this to the Multi-scale Entanglement Renormalization Ansatz (MERA) in 1D, formally reducing the cost from O(χ^9) to O(χ^5) per sample, and demonstrate that we can optimize wavefunctions. We expect the advantages of Monte Carlo sampling to be stronger in 2D and 3D systems.

In reflectance pulse oximetry, the ratio R/IR between the red and infrared intensity fluctuations, as measured at the skin surface, is used to estimate the arterial oxygen saturation. This ratio is influenced by light propagation in tissue, as measurements at several source-detector distances simultaneously show that R/IR depends on this distance. In the present study, the influence of the estimated tissue properties on R/IR and its distance dependence is investigated by means of condensed Monte Carlo simulations, a method that allows the optical properties to be varied without the need for a new Monte Carlo simulation. A three-wavelength model has been introduced because of secondary emission of the red LED, and the influence of water absorption has been taken into account. The simulation results depend on the chosen optical properties: results for R/IR at SaO2 = 98% with properties from in vivo experiments agree much better with the measured values than predictions based on in vitro data available in the literature. The results show that the condensed Monte Carlo simulation is a valuable tool for gaining insight into the principles of reflectance pulse oximetry: the model, assuming a homogeneous distribution of pulsations, describes the experimental results for pulse sizes, R/IR, and its distance dependence very well.

Graaff, Reindert; Dassel, A. C.; Koelink, M. H.; Aarnoudse, Jan G.; de Mul, Frits F.; Zijlstra, W. G.; Greve, Jan

Monte Carlo methods are designed to study various deterministic problems using probabilistic approaches, with computer simulations opening up much wider possibilities for the different algorithms. Pseudo-Random Number Generators (PRNGs) are based on linear congruences of some large prime numbers, while Quasi-Random Number Generators (QRNGs) provide low-discrepancy sequences; both give uniformly distributed numbers in (0,1). Chaotic Random Number Generators (CRNGs) give sequences of 'random numbers' satisfying some prescribed probability density, often denser around the two corners of the interval (0,1), but transforming this type of density to a uniform one is usually possible. Markov Chain Monte Carlo (MCMC), as its name indicates, is associated with Markov chain simulations. Basic descriptions of these random number generators are given, along with a comparative analysis of the four methods based on their efficiencies and other characteristics. Some applications of Monte Carlo simulation in geoscience are described, and a comparison of these algorithms is also included, with some concluding remarks.
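The difference between PRNG and QRNG streams shows up directly in Monte Carlo convergence. The sketch below estimates pi with both a pseudo-random stream and a Halton low-discrepancy sequence (a standard QRNG construction, used here as our illustrative choice):

```python
import math
import random

random.seed(11)

def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in `base`:
    the digits of index are mirrored about the radix point."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def estimate_pi(points):
    """4 x the fraction of points falling inside the quarter disk."""
    inside = sum(1 for x, y in points if x * x + y * y <= 1.0)
    return 4.0 * inside / len(points)

n = 20000
pseudo = [(random.random(), random.random()) for _ in range(n)]
quasi = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]
pi_pseudo, pi_quasi = estimate_pi(pseudo), estimate_pi(quasi)
```

With 20 000 points the pseudo-random estimate fluctuates at roughly the 10^-2 level, while the low-discrepancy estimate is typically an order of magnitude closer to pi, which is the efficiency difference such a comparative analysis quantifies.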

New directions for the quantum Monte Carlo (QMC) electronic structure method are discussed. Diffusion Monte Carlo (DMC) results for the atomization energy and heat of formation of CO2 are presented, while the bonding character is examined using the electron localization function. DMC all-electron and effective-core-potential trial functions are used to obtain the atomization energies, heats of formation, and energy difference of the C2H4 singlet and triplet states. In addition, DMC is applied to obtain the heat of reaction and barrier height of the proton-extraction reaction CH3OH + Cl → CH2OH + HCl. The results for the barrier height and heat of reaction are verified by examining the atomization energies and heats of formation of the reactants and products. DMC calculations were carried out on 22 small hydrocarbons. In this benchmark study the DMC atomization and bond-dissociation energies and heats of formation of these hydrocarbons are presented and compared to other ab initio methods. Methods for geometry optimization and for calculating forces in QMC are discussed. The response-surface methodology is applied to the variational Monte Carlo (VMC) and DMC methods to obtain an optimized geometry, force constants, and vibrational frequencies of CH2O. Finally, the zero-variance principle is applied to obtain VMC and DMC effective-core-potential force estimators, which are used to obtain a force curve for LiH.

An introduction is given to the use of the mathematical technique of Monte Carlo simulation to evaluate least-squares regression calibration. Monte Carlo techniques involve the repeated sampling of data from a population that may be derived from real (experimental) data but is more conveniently generated by a computer, using a model of the analytical system and a randomization process to produce a large database. Datasets are selected from this population and fed into the calibration algorithms under test, providing a facile way of producing enough assessments of an algorithm to enable a statistically valid appraisal of the calibration process. This communication describes the technique that forms the basis of the results presented in Parts II and III of this series, which follow in this issue, and also highlights the issues arising from the use of small data populations in bioanalysis.
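A minimal version of the technique: generate many synthetic calibration runs from a known model, fit each by least squares, and assess the bias and precision of a back-calculated concentration. The linear model, noise level, and standard concentrations below are arbitrary illustrative choices:

```python
import random
import statistics

random.seed(5)

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def calibration_trial(true_slope=2.0, true_intercept=1.0, noise=0.2):
    """One synthetic run: simulate responses at the standards, fit the
    calibration line, and back-calculate a noiseless unknown at x = 5."""
    standards = [1.0, 2.0, 4.0, 8.0, 16.0]
    ys = [true_slope * x + true_intercept + random.gauss(0.0, noise)
          for x in standards]
    slope, intercept = fit_line(standards, ys)
    unknown_response = true_slope * 5.0 + true_intercept
    return (unknown_response - intercept) / slope

estimates = [calibration_trial() for _ in range(5000)]
bias = statistics.mean(estimates) - 5.0       # systematic error of the algorithm
precision = statistics.stdev(estimates)       # random error of the algorithm
```

Because the population is computer-generated, the true answer (5.0) is known exactly, so the calibration algorithm itself, rather than the chemistry, is what gets appraised; shrinking the number of standards shows the small-population effects the abstract mentions.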

One of the tasks in commissioning an electron accelerator in a cancer clinic is to measure relative output factors (ROFs) versus various parameters, such as applicator size (applicator factors), cutout size (cutout factors) and air-gap size (gap factors), for various electron beam energies and applicator sizes. This kind of measurement takes a lot of time and labour. This thesis shows that Monte Carlo simulation offers an alternative to this task. With BEAM (Med. Phys. 22 (1995) 503-524), an EGS4 user code, clinical accelerator electron beams are simulated, and ROFs for a Siemens MD2 linear accelerator and a Varian Clinac 2100C accelerator are calculated. The study shows that the Monte Carlo method is not only practical in clinics but also powerful in analyzing the related physics. The calculated ROFs agree with the measurements within 1% for most cases and within 2% for all cases studied, which is more than acceptable in clinical practice. The details of each dose component, such as the dose from particles scattered off the photon jaws and the applicator, the dose from contaminant photons, and the dose from direct electrons, are also analyzed. The study also explains quantitatively why the effective SSD (Source to Phantom Surface Distance) is often not the nominal reference SSD. For ROF measurements in small fields using an ion chamber, this study discusses the stopping-power ratio corrections due to changes in the depth of dose maximum as a function of field size and across accelerators. Since it handles ROF calculations for arbitrary fields, including square, rectangular, circular and irregular fields, in the same way,