Performance of the MTR core with MOX fuel using the MCNP4C2 code.
Shaaban, Ismail; Albarhoum, Mohamad
2016-08-01
The MCNP4C2 code was used to simulate the MTR-22 MW research reactor and perform the neutronic analysis for a new fuel, namely a MOX (U3O8&PuO2) fuel dispersed in an Al matrix, for One Neutronic Trap (ONT) and Three Neutronic Traps (TNTs) in its core. The new core characteristics were compared to the original characteristics based on the U3O8-Al fuel. Experimental data for the neutronic parameters, including criticality, of the MTR-22 MW reactor with the original U3O8-Al fuel at nominal power were used to validate the calculated values and were found acceptable. The results seem to confirm that the use of MOX fuel in the MTR-22 MW will not degrade the safe operational conditions of the reactor. In addition, the use of MOX fuel in the MTR-22 MW core reduces the (235)U enrichment of the uranium fuel and the amount of (235)U loaded in the core by about 34.84% and 15.21% for the ONT and TNTs cases, respectively.
NASA Astrophysics Data System (ADS)
Pauzi, A. M.
2013-06-01
The neutron transport code Monte Carlo N-Particle (MCNP), well known as the gold standard for predicting nuclear reactions, was used to model the small nuclear reactor core called "U-batteryTM", which was developed by the University of Manchester and the Delft Institute of Technology. The paper introduces the concept of modeling the small reactor core, a high temperature reactor (HTR) type with small coated TRISO fuel particles in a graphite matrix, using the MCNPv4C software. The criticality of the core was calculated using the software and analysed by changing key parameters such as coolant type, fuel type and enrichment level, cladding material, and control rod type. The criticality results from the simulation were validated against the SCALE 5.1 software by [1] M Ding and J L Kloosterman, 2010. The data produced from these analyses would be used as part of the process of proposing an initial core layout and a provisional list of materials for the newly designed reactor core. In the future, the criticality study would be continued with different core configurations and geometries.
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angularly biasing the x-rays toward the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Semistochastic Projector Monte Carlo Method
NASA Astrophysics Data System (ADS)
Petruzielo, F. R.; Holmes, A. A.; Changlani, Hitesh J.; Nightingale, M. P.; Umrigar, C. J.
2012-12-01
We introduce a semistochastic implementation of the power method to compute, for very large matrices, the dominant eigenvalue and expectation values involving the corresponding eigenvector. The method is semistochastic in that the matrix multiplication is partially implemented numerically exactly and partially stochastically with respect to expectation values only. Compared to a fully stochastic method, the semistochastic approach significantly reduces the computational time required to obtain the eigenvalue to a specified statistical uncertainty. This is demonstrated by the application of the semistochastic quantum Monte Carlo method to systems with a sign problem: the fermion Hubbard model and the carbon dimer.
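The deterministic/stochastic split described above can be illustrated with a toy sketch. Everything here is an illustrative assumption, not the authors' implementation: a small nonnegative matrix, a hand-picked "deterministic" index set, exact multiplication on those columns, and importance-sampled columns elsewhere.

```python
import numpy as np

def semistochastic_power_method(A, det_idx, n_iter=3000, n_samples=50, seed=0):
    """Power-method estimate of the dominant eigenvalue of a nonnegative
    matrix A: the matrix-vector product is computed exactly on the columns
    in det_idx and stochastically (sampled columns) on the rest."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    det = np.zeros(n, dtype=bool)
    det[det_idx] = True
    v = np.ones(n) / n
    lam = []
    for _ in range(n_iter):
        w = A[:, det] @ v[det]              # exact (deterministic) part
        p = v * ~det                        # sample remaining columns j
        if p.sum() > 0:                     # with probability ~ v_j
            p = p / p.sum()
            cols = rng.choice(n, size=n_samples, p=p)
            for j in cols:
                # unbiased estimator of the skipped part of A @ v
                w = w + (v[j] / (n_samples * p[j])) * A[:, j]
        lam.append(v @ w / (v @ v))         # Rayleigh-quotient sample
        v = w / np.linalg.norm(w)
    return float(np.mean(lam[n_iter // 2:]))   # average after burn-in
```

When det_idx covers every index the loop reduces to the plain power method; shrinking the deterministic set trades exactness for cost, with the statistical error controlled by n_samples and the averaging.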
Applications of Monte Carlo Methods in Calculus.
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
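The first application listed, Riemann sums, is easy to show concretely. A minimal sketch (function and interval chosen arbitrarily for illustration): a randomized Riemann sum that averages f at uniform sample points.

```python
import random

def mc_integral(f, a, b, n=100_000, seed=1):
    """Approximate the integral of f over [a, b] as (b - a) times the
    average of f at n uniformly random points (a Monte Carlo Riemann sum)."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n
```

The statistical error shrinks like 1/sqrt(n), so mc_integral(lambda x: x * x, 0, 1) lands near the exact value 1/3, and the same loop works unchanged for functions with no elementary antiderivative.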
Improved Monte Carlo Renormalization Group Method
DOE R&D Accomplishments Database
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
Adiabatic optimization versus diffusion Monte Carlo methods
NASA Astrophysics Data System (ADS)
Jarret, Michael; Jordan, Stephen P.; Lackey, Brad
2016-10-01
Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1 and L2 normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k -SAT at k =2 ,3 ,4 .
Monte Carlo methods in genetic analysis
Lin, Shili
1996-12-31
Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined. 72 refs.
Self-learning Monte Carlo method
NASA Astrophysics Data System (ADS)
Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang
2017-01-01
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.
An enhanced Monte Carlo outlier detection method.
Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi
2015-09-30
Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, in the validation by Kovats retention indices the root mean square error of prediction decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
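The underlying Monte Carlo cross-prediction loop, in its plain (unenhanced) form, can be sketched as follows. Ordinary least squares stands in for the paper's model, and the determinate/dubious split and threshold logic are omitted; this is an illustrative assumption, not the authors' code.

```python
import numpy as np

def mc_outlier_scores(X, y, n_splits=300, train_frac=0.7, seed=0):
    """Monte Carlo outlier detection sketch: over many random train/test
    splits, fit least squares on the training part and accumulate each
    sample's absolute prediction error whenever it lands in the test part.
    Samples with a high mean error are outlier candidates."""
    rng = np.random.default_rng(seed)
    n = len(y)
    errs = [[] for _ in range(n)]
    for _ in range(n_splits):
        idx = rng.permutation(n)
        k = int(train_frac * n)
        tr, te = idx[:k], idx[k:]
        coef, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)
        for i, e in zip(te, np.abs(X[te] @ coef - y[te])):
            errs[i].append(e)
    return np.array([np.mean(e) for e in errs])
```

A sample far from the model sees a large error in nearly every split where it is held out, so its score separates cleanly from the noise floor of the normal samples.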
Path Integral Monte Carlo Methods for Fermions
NASA Astrophysics Data System (ADS)
Ethan, Ethan; Dubois, Jonathan; Ceperley, David
2014-03-01
In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with a discussion of extensions to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10-ERD-058, and the Lawrence Scholar program.
Monte Carlo methods to calculate impact probabilities
NASA Astrophysics Data System (ADS)
Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.
2014-09-01
Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward
Calculating Pi Using the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Williamson, Timothy
2013-11-01
During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the center for sustainable energy at Notre Dame University (RET @ cSEND) working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10^21 antineutrinos per second with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia [2] where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations [2] or purely mathematically [3]. It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
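The rice-sprinkling activity maps directly onto a few lines of code: scatter random points in the unit square and count the fraction that lands inside the quarter circle of radius 1, whose area is pi/4.

```python
import random

def estimate_pi(n=200_000, seed=42):
    """Estimate pi as 4 times the fraction of uniform random points in
    the unit square that fall inside the quarter circle x^2 + y^2 <= 1
    (the 'sprinkled rice' experiment in code)."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n
```

With a few hundred thousand points the estimate typically agrees with pi to two decimal places, which mirrors how many grains of rice a classroom version would need for comparable accuracy.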
Quantum Monte Carlo methods for nuclear physics
Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.
2014-10-19
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Discrete range clustering using Monte Carlo methods
NASA Technical Reports Server (NTRS)
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
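For intuition, the simulated-annealing variant can be sketched in one dimension (real range data is three-dimensional and the paper's cost function differs; the cost, cooling schedule, and move set here are illustrative assumptions):

```python
import math
import random

def anneal_cluster(points, k, n_steps=20_000, t0=1.0, cooling=0.9995, seed=0):
    """Simulated-annealing clustering sketch: randomly reassign points
    among k clusters, accepting cost increases with Boltzmann probability
    exp(-delta/T) as the temperature T cools. Cost is the total
    within-cluster squared deviation."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in points]

    def cost(lab):
        total = 0.0
        for c in range(k):
            members = [p for p, l in zip(points, lab) if l == c]
            if members:
                mu = sum(members) / len(members)
                total += sum((p - mu) ** 2 for p in members)
        return total

    current = cost(labels)
    t = t0
    for _ in range(n_steps):
        i = rng.randrange(len(points))
        old = labels[i]
        labels[i] = rng.randrange(k)       # propose moving one point
        new = cost(labels)
        if new <= current or rng.random() < math.exp((current - new) / t):
            current = new                  # accept (always if downhill)
        else:
            labels[i] = old                # reject, restore assignment
        t *= cooling
    return labels, current
```

Early high-temperature steps let the labeling escape poor local minima; as T shrinks the process reduces to greedy descent, which is the trade-off the paper weighs against the basic Monte Carlo method.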
Quantum Monte Carlo methods for nuclear physics
Carlson, J.; Gandolfi, S.; Pederiva, F.; ...
2015-09-09
Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo methods for nuclear physics
NASA Astrophysics Data System (ADS)
Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.
2015-07-01
Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
The Monte Carlo Method. Popular Lectures in Mathematics.
ERIC Educational Resources Information Center
Sobol', I. M.
The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…
An assessment of the MCNP4C weight window
Christopher N. Culbertson; John S. Hendricks
1999-12-01
A new, enhanced weight window generator suite has been developed for MCNP version 4C. The new generator correctly estimates importances in either a user-specified, geometry-independent, orthogonal grid or in MCNP geometric cells. The geometry-independent option alleviates the need to subdivide the MCNP cell geometry for variance reduction purposes. In addition, the new suite corrects several pathologies in the existing MCNP weight window generator. The new generator is applied in a set of five variance reduction problems. The improved generator is compared with the weight window generator applied in MCNP4B. The benefits of the new methodology are highlighted, along with a description of its limitations. The authors also provide recommendations for utilization of the weight window generator.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
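The core idea behind fission matrix acceleration shows up even in a toy two-region sketch. The operator H and its dominance ratio of 0.99 are fabricated for illustration, and a real code estimates the fission matrix statistically during the Monte Carlo run rather than being handed it:

```python
import numpy as np

def source_iteration(H, n_iter):
    """Unaccelerated fission-source (power) iteration: the error in the
    source shape decays by the dominance ratio per cycle, so a ratio
    near 1 means painfully slow convergence."""
    s = np.zeros(H.shape[0])
    s[0] = 1.0                      # deliberately bad initial source guess
    for _ in range(n_iter):
        s = H @ s
        s = s / s.sum()             # renormalize the source each cycle
    return s

def fission_matrix_source(H):
    """Fission-matrix acceleration sketch: solve the small region-to-region
    eigenproblem directly instead of iterating it out."""
    vals, vecs = np.linalg.eig(H)
    s = np.abs(vecs[:, np.argmax(vals.real)].real)
    return s / s.sum()
```

With H = [[0.995, 0.005], [0.005, 0.995]] (eigenvalues 1.0 and 0.99), twenty plain cycles still leave the source badly skewed toward the starting region, while the direct eigensolve returns the flat converged source at once, which is the kind of shortcut the thesis builds into MCNP.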
Vectorized Monte Carlo methods for reactor lattice analysis
NASA Technical Reports Server (NTRS)
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
Monte Carlo methods and applications in nuclear physics
Carlson, J.
1990-01-01
Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.
Perturbation Monte Carlo methods for tissue structure alterations.
Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Spanier, Jerome
2013-01-01
This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia, where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers: whole nuclei; organelles such as lysosomes and mitochondria; and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied, the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ∼15-25% of the scattering parameters.
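The phase-function machinery the paper derives is beyond a few lines, but the simpler absorption case conveys the one-baseline-many-answers idea that makes perturbation Monte Carlo efficient. The slanted-path slab model and coefficients below are invented for illustration, not taken from the paper:

```python
import numpy as np

def transmission_with_perturbation(mu_a0, d=1.0, n=200_000, seed=0):
    """Perturbation Monte Carlo sketch for absorption changes: run one
    baseline 'simulation', store each photon's path length L through a
    slab, then re-estimate transmission for any other absorption
    coefficient by reweighting with exp(-(mu_a - mu_a0) * L) instead of
    re-running the simulation."""
    rng = np.random.default_rng(seed)
    # path length through a slab of thickness d at a random slant angle
    theta = rng.uniform(0.0, np.pi / 3, n)
    L = d / np.cos(theta)
    baseline = np.exp(-mu_a0 * L)      # per-photon survival weights

    def transmission(mu_a):
        # reweight the stored baseline paths for the perturbed mu_a
        return float(np.mean(baseline * np.exp(-(mu_a - mu_a0) * L)))

    return transmission
```

Because the stored paths are reused, sweeping mu_a over many values costs almost nothing beyond the single baseline run, which is exactly the efficiency gain the abstract claims for the general method.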
Improved Collision Modeling for Direct Simulation Monte Carlo Methods
2011-03-01
The Knudsen number is a measure of the rarefaction of a gas, and will be explained more thoroughly in the following chapter. Continuum solvers that use the Navier-Stokes equations reach the limits of their mathematical models near Kn = 0.1, and the flow can be considered rarefied above that value [4]. Direct Simulation Monte Carlo (DSMC) is a stochastic method which utilizes the Monte Carlo statistical model to simulate gas behavior, which is very useful for rarefied-atmosphere hypersonic flows.
Study of the Transition Flow Regime using Monte Carlo Methods
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1999-01-01
This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
Successful combination of the stochastic linearization and Monte Carlo methods
NASA Technical Reports Server (NTRS)
Elishakoff, I.; Colombi, P.
1993-01-01
A combination of stochastic linearization and Monte Carlo techniques is presented for the first time in the literature. A system with separable nonlinear damping and nonlinear restoring force is considered. The proposed combination of the energy-wise linearization with the Monte Carlo method yields an error under 5 percent, which corresponds to the error reduction associated with the conventional stochastic linearization by a factor of 4.6.
Deterministic and Monte Carlo Neutron Transport Calculations of the Dounreay Fast Breeder Reactor
Ziver, A. Kemal; Shahdatullah, Sabu; Eaton, Matthew D.; Oliviera, Cassiano R.E. de; Ackroyd, Ron T.; Umpleby, Adrian P.; Pain, Christopher C.; Goddard, Antony J. H.; Fitzpatrick, James
2004-12-15
A homogenized whole-reactor cylindrical model of the Dounreay Fast Reactor has been constructed using both deterministic and Monte Carlo codes to determine neutron flux distributions inside the core and at various out-of-core components. The principal aim is to predict neutron-induced activation levels using both methods and make comparisons against the measured thermal reaction rates. Neutron transport calculations have been performed for a fixed source using a spatially lumped fission neutron distribution, which has been derived from measurements. The deterministic code used is based on the finite element approximation to the multigroup second-order even-parity neutron transport equation, which is implemented in the EVENT code. The Monte Carlo solutions were obtained using the MCNP4C code, in which neutron cross sections are represented in pointwise (or continuous) form. We have compared neutron spectra at various locations not only to show differences between using multigroup deterministic and continuous energy (point nuclear data) Monte Carlo methods but also to assess neutron-induced activation levels calculated using the spectra obtained from both methods. Results were also compared against experiments that were carried out to determine neutron-induced reaction rates. To determine activation levels, we employed the European Activation Code System FISPACT. We have found that the neutron spectra calculated at various in-core and out-of-core components show some differences, which mainly reflect the use of multigroup and point energy nuclear data libraries and methods employed, but these differences have not resulted in large errors on the calculated activation levels of materials that are important (such as steel components) for decommissioning studies of the reactor. The agreement of calculated reaction rates of thermal neutron detectors such as the {sup 55}Mn(n,{gamma}){sup 56}Mn against measurements was satisfactory.
Complexity of Monte Carlo and deterministic dose-calculation methods.
Börgers, C
1998-03-01
Grid-based deterministic dose-calculation methods for radiotherapy planning require the use of six-dimensional phase space grids. Because of the large number of phase space dimensions, a growing number of medical physicists appear to believe that grid-based deterministic dose-calculation methods are not competitive with Monte Carlo methods. We argue that this conclusion may be premature. Our results do suggest, however, that finite difference or finite element schemes with orders of accuracy greater than one will probably be needed if such methods are to compete well with Monte Carlo methods for dose calculations.
A Monte Carlo method for combined segregation and linkage analysis
Guo, S.W.; Thompson, E.A.
1992-11-01
The authors introduce a Monte Carlo approach to combined segregation and linkage analysis of a quantitative trait observed in an extended pedigree. In conjunction with the Monte Carlo method of likelihood-ratio evaluation proposed by Thompson and Guo, the method provides for estimation and hypothesis testing. The greatest attraction of this approach is its ability to handle complex genetic models and large pedigrees. Two examples illustrate the practicality of the method. One is of simulated data on a large pedigree; the other is a reanalysis of published data previously analyzed by other methods. 40 refs, 5 figs., 5 tabs.
Observations on variational and projector Monte Carlo methods.
Umrigar, C J
2015-10-28
Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.
Olsovcová, Veronika; Havelka, Miroslav
2006-01-01
The activity of radioactive pharmaceuticals administered to patients in nuclear medicine is usually determined using well-type high-pressure ionization chambers. For the Bqmeter chamber (Consortium BQM, Czech Republic) a Monte Carlo model was created using the MCNP4C2 code. Basic chamber characteristics for two sample containers of various geometry (a vial and an ampoule) were calculated and compared with measurements. As the pharmaceuticals are often measured in various syringes, the chamber response for samples in syringes was also studied.
Frequency domain optical tomography using a Monte Carlo perturbation method
NASA Astrophysics Data System (ADS)
Yamamoto, Toshihiro; Sakamoto, Hiroki
2016-04-01
A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions, even for ill-posed inverse problems that suffer from cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods.
The Metropolis Monte Carlo Method in Statistical Physics
NASA Astrophysics Data System (ADS)
Landau, David P.
2003-11-01
A brief overview is given of some of the advances in statistical physics that have been made using the Metropolis Monte Carlo method. By complementing theory and experiment, these have increased our understanding of phase transitions and other phenomena in condensed matter systems. A brief description of a new method, commonly known as "Wang-Landau sampling," will also be presented.
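As a concrete illustration, the Metropolis recipe this overview surveys can be written in a few lines. The sketch below is an assumed minimal 2-D Ising implementation (not code from the article): it flips one spin at a time and accepts each flip with probability min(1, exp(-dE/T)).

```python
import math
import random

def metropolis_ising(L=8, T=1.5, sweeps=400, seed=0):
    """Metropolis sampling of the 2-D Ising model (J = 1) on an L x L
    periodic lattice; returns |magnetization| per spin at the end."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]           # ordered start
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb             # energy cost of the flip
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]        # accept the flip
    m = sum(sum(row) for row in spins) / (L * L)
    return abs(m)
```

Below the critical temperature (Tc of about 2.27 in these units) the sampler stays magnetized; well above it the magnetization averages to nearly zero, the kind of phase-transition behavior the article discusses.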
Hybrid Monte Carlo/deterministic methods for radiation shielding problems
NASA Astrophysics Data System (ADS)
Becker, Troy L.
For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods
Multiple-time-stepping generalized hybrid Monte Carlo methods
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo improve the stability of MTS and allow larger step sizes in the simulation of complex systems.
Calculating coherent pair production with Monte Carlo methods
Bottcher, C.; Strayer, M.R.
1989-01-01
We discuss calculations of the coherent electromagnetic pair production in ultra-relativistic hadron collisions. This type of production, in lowest order, is obtained from three diagrams which contain two virtual photons. We discuss simple Monte Carlo methods for evaluating these classes of diagrams without recourse to involved algebraic reduction schemes. 19 refs., 11 figs.
Monte Carlo method for magnetic impurities in metals
NASA Technical Reports Server (NTRS)
Hirsch, J. E.; Fye, R. M.
1986-01-01
The paper discusses a Monte Carlo algorithm to study properties of dilute magnetic alloys; the method can treat a small number of magnetic impurities interacting with the conduction electrons in a metal. Results for the susceptibility of a single Anderson impurity in the symmetric case show the expected universal behavior at low temperatures. Some results for two Anderson impurities are also discussed.
An Overview of the Monte Carlo Methods, Codes, & Applications Group
Trahan, Travis John
2016-08-30
This report sketches the work of the Group to deliver first-principle Monte Carlo methods, production quality codes, and radiation transport-based computational and experimental assessments using the codes MCNP and MCATK for such applications as criticality safety, non-proliferation, nuclear energy, nuclear threat reduction and response, radiation detection and measurement, radiation health protection, and stockpile stewardship.
Monte Carlo methods for light propagation in biological tissues.
Vinckenbosch, Laura; Lacaux, Céline; Tindel, Samy; Thomassin, Magalie; Obara, Tiphaine
2015-11-01
Light propagation in turbid media is driven by the equation of radiative transfer. We give a formal probabilistic representation of its solution in the framework of biological tissues and we implement algorithms based on Monte Carlo methods in order to estimate the quantity of light that is received by a homogeneous tissue when emitted by an optic fiber. A variance reduction method is studied and implemented, as well as a Markov chain Monte Carlo method based on the Metropolis-Hastings algorithm. The resulting estimating methods are then compared to the so-called Wang-Prahl (or Wang) method. Finally, the formal representation allows one to derive a non-linear optimization algorithm, close to Levenberg-Marquardt, that is used for the estimation of the scattering and absorption coefficients of the tissue from measurements.
Monte Carlo methods for multidimensional integration for European option pricing
NASA Astrophysics Data System (ADS)
Todorov, V.; Dimov, I. T.
2016-10-01
In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; then the average of independent samples of this random variable is used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk neutral valuation formula. Then, with an appropriate change of the constants, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule with one of the best low-discrepancy sequences, that of Sobol, for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is a question of interest to know which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
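The crude Monte Carlo approach described above can be sketched as follows. This is a generic illustration under Black-Scholes dynamics with function names of our own choosing; the paper's lattice-rule and Sobol variants are not reproduced here.

```python
import math
import random

def mc_european_call(s0, k, r, sigma, t, n_paths, seed=0):
    """Crude Monte Carlo price of a European call: average the discounted
    payoff over terminal prices sampled under the risk-neutral measure."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        st = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(st - k, 0.0)                 # call payoff
    return math.exp(-r * t) * total / n_paths

def bs_call(s0, k, r, sigma, t):
    """Black-Scholes closed form, used here only to check the estimate."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s0 * phi(d1) - k * math.exp(-r * t) * phi(d2)
```

The Monte Carlo error shrinks as 1/sqrt(N); the quasi-Monte Carlo sequences compared in the paper aim to improve on that rate for smooth integrands.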
Mammography X-Ray Spectra Simulated with Monte Carlo
Vega-Carrillo, H. R.; Gonzalez, J. Ramirez; Manzanares-Acuna, E.; Hernandez-Davila, V. M.; Villasana, R. Hernandez; Mercado, G. A.
2008-08-11
Monte Carlo calculations have been carried out to obtain the x-ray spectra of various target-filter combinations for a mammography unit. Mammography is widely used to diagnose breast cancer. In addition to the Mo target with Mo filter combination, Rh/Rh, Mo/Rh, Mo/Al, Rh/Al, and W/Rh are also utilized. In this work Monte Carlo calculations, using the MCNP 4C code, were carried out to estimate the x-ray spectra produced when a beam of 28 keV electrons collided with Mo, Rh and W targets. The resulting x-ray spectra show characteristic x-rays and continuous bremsstrahlung. Spectra were also calculated including filters.
Zeinali-Rafsanjani, B.; Mosleh-Shirazi, M. A.; Faghihi, R.; Karbasi, S.; Mosalaei, A.
2015-01-01
To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies, including chest-wall irradiation using kilovoltage beams. PMID:26170553
Monte Carlo Methods for Bridging the Timescale Gap
NASA Astrophysics Data System (ADS)
Wilding, Nigel; Landau, David P.
We identify the origin, and elucidate the character of the extended time-scales that plague computer simulation studies of first and second order phase transitions. A brief survey is provided of a number of new and existing techniques that attempt to circumvent these problems. Attention is then focused on two novel methods with which we have particular experience: “Wang-Landau sampling” and Phase Switch Monte Carlo. Detailed case studies are made of the application of the Wang-Landau approach to calculate the density of states of the 2D Ising model and the Edwards-Anderson spin glass. The principles and operation of Phase Switch Monte Carlo are described and its utility in tackling ‘difficult’ first order phase transitions is illustrated via a case study of hard-sphere freezing. We conclude with a brief overview of promising new methods for the improvement of deterministic, spin dynamics simulations.
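The flavour of Wang-Landau sampling can be conveyed with a toy system whose density of states is known exactly. The sketch below is our own illustration, not the Ising or spin-glass case studies of the chapter: it estimates the number of ways g(E) that three dice can show each sum E, random-walking in E with acceptance min(1, g(E)/g(E')) and refining the modification factor f whenever the energy histogram becomes flat.

```python
import math
import random

def wang_landau_dice(n_dice=3, ln_f_final=1e-6, flatness=0.9, seed=0):
    """Wang-Landau estimate of the density of states g(E) for the sum E of
    n_dice dice.  Each move re-rolls one die; acceptance min(1, g(E)/g(E'))
    plus the running update ln g(E) += ln f drives a flat histogram in E."""
    rng = random.Random(seed)
    dice = [1] * n_dice
    e = sum(dice)
    energies = list(range(n_dice, 6 * n_dice + 1))
    ln_g = {k: 0.0 for k in energies}
    hist = {k: 0 for k in energies}
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(10000):
            i = rng.randrange(n_dice)
            face = rng.randint(1, 6)
            e_new = e - dice[i] + face
            d = ln_g[e] - ln_g[e_new]
            if d >= 0.0 or rng.random() < math.exp(d):   # min(1, g/g')
                dice[i], e = face, e_new
            ln_g[e] += ln_f                              # update visited bin
            hist[e] += 1
        if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
            ln_f *= 0.5                                  # refine f, reset histogram
            hist = {k: 0 for k in hist}
    # fix the arbitrary additive constant so that sum g(E) = 6**n_dice
    m = max(ln_g.values())
    ln_total = m + math.log(sum(math.exp(v - m) for v in ln_g.values()))
    shift = ln_total - n_dice * math.log(6.0)
    return {k: math.exp(v - shift) for k, v in ln_g.items()}
```

Because the walk samples all energies uniformly once g(E) has converged, the same single run covers both rare extreme sums and common middle sums, which is exactly why the method is effective against the long time scales discussed above.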
Monte Carlo Methods in ICF (LIRPP Vol. 13)
NASA Astrophysics Data System (ADS)
Zimmerman, George B.
2016-10-01
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but its efficiency can be improved by angularly biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Fixed-node diffusion Monte Carlo method for lithium systems
NASA Astrophysics Data System (ADS)
Rasch, K. M.; Mitas, L.
2015-07-01
We study lithium systems over a range of sizes, specifically the atomic anion, dimer, metallic cluster, and body-centered-cubic crystal, using the fixed-node diffusion Monte Carlo method. The focus is on analysis of the fixed-node errors of each system, and for that purpose we test several orbital sets in order to provide the most accurate nodal hypersurfaces. The calculations include both core and valence electrons in order to avoid any possible impact by pseudopotentials. To quantify the fixed-node errors, we compare our results to other highly accurate calculations, and wherever available, to experimental observations. The results for these Li systems show that the fixed-node diffusion Monte Carlo method achieves accurate total energies, recovers 96-99% of the correlation energy, and estimates binding energies with errors bounded by 0.1 eV/atom.
Cluster Monte Carlo methods for the FePt Hamiltonian
NASA Astrophysics Data System (ADS)
Lyberatos, A.; Parker, G. J.
2016-02-01
Cluster Monte Carlo methods for the classical spin Hamiltonian of FePt with long range exchange interactions are presented. We use a combination of the Swendsen-Wang (or Wolff) and Metropolis algorithms that satisfies the detailed balance condition and ergodicity. The algorithms are tested by calculating the temperature dependence of the magnetization, susceptibility and heat capacity of L10-FePt nanoparticles in a range including the critical region. The cluster models yield numerical results in good agreement within statistical error with the standard single-spin flipping Monte Carlo method. The variation of the spin autocorrelation time with grain size is used to deduce the dynamic exponent of the algorithms. Our cluster models do not provide a more accurate estimate of the magnetic properties at equilibrium.
Monte Carlo Methods and Applications for the Nuclear Shell Model
Dean, D.J.; White, J.A.
1998-08-10
The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sd-pf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems.
Distributional Monte Carlo methods for the Boltzmann equation
NASA Astrophysics Data System (ADS)
Schrock, Christopher R.
Stochastic particle methods (SPMs) for the Boltzmann equation, such as the Direct Simulation Monte Carlo (DSMC) technique, have gained popularity for the prediction of flows in which the assumptions behind the continuum equations of fluid mechanics break down; however, there are still a number of issues that make SPMs computationally challenging for practical use. In traditional SPMs, simulated particles may possess only a single velocity vector, even though they may represent an extremely large collection of actual particles. This limits the method to converge only in law to the Boltzmann solution. This document details the development of new SPMs that allow the velocity of each simulated particle to be distributed. This approach has been termed Distributional Monte Carlo (DMC). A technique is described which applies kernel density estimation to Nanbu's DSMC algorithm. It is then proven that the method converges not just in law, but also in solution for L∞(R3) solutions of the space homogeneous Boltzmann equation. This provides for direct evaluation of the velocity density function. The derivation of a general Distributional Monte Carlo method is given which treats collision interactions between simulated particles as a relaxation problem. The framework is proven to converge in law to the solution of the space homogeneous Boltzmann equation, as well as in solution for L∞(R3) solutions. An approach based on the BGK simplification is presented which computes collision outcomes deterministically. Each technique is applied to the well-studied Bobylev-Krook-Wu solution as a numerical test case. Accuracy and variance of the solutions are examined as functions of various simulation parameters. Significantly improved accuracy and reduced variance are observed in the normalized moments for the Distributional Monte Carlo technique employing discrete BGK collision modeling.
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo codes. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions, either of which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison between the capabilities of both Monte Carlo and deterministic methods in day-by-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions.
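The narrow-beam case on which both code families must agree is easy to reproduce. The toy sketch below is our own illustration, not a reproduction of MicroShield or MCNP: it models only uncollided attenuation, ignoring the build-up factor that deterministic codes multiply in to account for scattered flux, and compares an analog Monte Carlo transmission estimate with the deterministic exp(-mu*t).

```python
import math
import random

def transmitted_uncollided_mc(mu, thickness, n, seed=0):
    """Analog Monte Carlo estimate of the uncollided fraction of a normally
    incident photon beam: a photon crosses the slab only if its sampled
    free path exceeds the slab thickness."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.expovariate(mu) > thickness)
    return hits / n

def transmitted_uncollided(mu, thickness):
    """Deterministic (narrow-beam) result for the same problem."""
    return math.exp(-mu * thickness)
```

In a real shielding comparison the deterministic side would multiply exp(-mu*t) by a build-up factor B(mu*t), and the Monte Carlo side would follow scattered photons explicitly; the discrepancies studied in the paper arise precisely where B is extrapolated.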
Kernel density estimator methods for Monte Carlo radiation transport
NASA Astrophysics Data System (ADS)
Banerjee, Kaushik
In this dissertation, the Kernel Density Estimator (KDE), a nonparametric probability density estimator, is studied and used to represent global Monte Carlo (MC) tallies. KDE is also employed to remove the singularities from two important Monte Carlo tallies, namely point detector and surface crossing flux tallies. Finally, KDE is also applied to accelerate the Monte Carlo fission source iteration for criticality problems. In the conventional MC calculation histograms are used to represent global tallies which divide the phase space into multiple bins. Partitioning the phase space into bins can add significant overhead to the MC simulation and the histogram provides only a first order approximation to the underlying distribution. The KDE method is attractive because it can estimate MC tallies in any location within the required domain without any particular bin structure. Post-processing of the KDE tallies is sufficient to extract detailed, higher order tally information for an arbitrary grid. The quantitative and numerical convergence properties of KDE tallies are also investigated and they are shown to be superior to conventional histograms as well as the functional expansion tally developed by Griesheimer. Monte Carlo point detector and surface crossing flux tallies are two widely used tallies but they suffer from an unbounded variance. As a result, the central limit theorem can not be used for these tallies to estimate confidence intervals. By construction, KDE tallies can be directly used to estimate flux at a point but the variance of this point estimate does not converge as 1/N, which is not unexpected for a point quantity. However, an improved approach is to modify both point detector and surface crossing flux tallies directly by using KDE within a variance reduction approach by taking advantage of the fact that KDE estimates the underlying probability density function. This methodology is demonstrated by several numerical examples and demonstrates that
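The basic idea of a KDE tally, replacing histogram bins with a kernel centred on every sample, can be sketched in isolation. The sketch below is a generic Gaussian-kernel estimator of our own, not the dissertation's transport-coupled implementation.

```python
import math
import random

def gaussian_kde(samples, x, bandwidth):
    """Kernel density estimate at x: an average of Gaussian kernels centred
    on each sample.  No bin structure is needed, and the estimate can be
    evaluated at any point of the domain, unlike a histogram tally."""
    norm = 1.0 / (bandwidth * math.sqrt(2.0 * math.pi))
    return sum(norm * math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
               for s in samples) / len(samples)

# Density of 20000 standard-normal "tally" samples, evaluated off-grid:
rng = random.Random(0)
samples = [rng.gauss(0.0, 1.0) for _ in range(20000)]
density_at_zero = gaussian_kde(samples, 0.0, bandwidth=0.2)
```

The bandwidth plays the role the bin width plays for a histogram, but the estimate is smooth and can be post-processed on an arbitrary grid, which is the property the dissertation exploits for global Monte Carlo tallies.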
Novel extrapolation method in the Monte Carlo shell model
Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio
2010-12-15
We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of {sup 56}Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g{sub 9/2}-shell calculation of {sup 64}Ge.
Design of composite laminates by a Monte Carlo method
NASA Astrophysics Data System (ADS)
Fang, Chin; Springer, George S.
1993-01-01
A Monte Carlo procedure was developed for optimizing symmetric fiber reinforced composite laminates such that the weight is minimum and the Tsai-Wu strength failure criterion is satisfied in each ply. The laminate may consist of several materials including an idealized core, and may be subjected to several sets of combined in-plane and bending loads. The procedure yields the number of plies, the fiber orientation, and the material of each ply and the material and thickness of the core. A user friendly computer code was written for performing the numerical calculations. Laminates optimized by the code were compared to laminates resulting from existing optimization methods. These comparisons showed that the present Monte Carlo procedure is a useful and efficient tool for the design of composite laminates.
MONTE CARLO RADIATION-HYDRODYNAMICS WITH IMPLICIT METHODS
Roth, Nathaniel; Kasen, Daniel
2015-03-15
We explore the application of Monte Carlo transport methods to solving coupled radiation-hydrodynamics (RHD) problems. We use a time-dependent, frequency-dependent, three-dimensional radiation transport code that is special relativistic and includes some detailed microphysical interactions such as resonant line scattering. We couple the transport code to two different one-dimensional (non-relativistic) hydrodynamics solvers: a spherical Lagrangian scheme and a Eulerian Godunov solver. The gas–radiation energy coupling is treated implicitly, allowing us to take hydrodynamical time-steps that are much longer than the radiative cooling time. We validate the code and assess its performance using a suite of radiation hydrodynamical test problems, including ones in the radiation energy dominated regime. We also develop techniques that reduce the noise of the Monte Carlo estimated radiation force by using the spatial divergence of the radiation pressure tensor. The results suggest that Monte Carlo techniques hold promise for simulating the multi-dimensional RHD of astrophysical systems.
A multi-scale Monte Carlo method for electrolytes
NASA Astrophysics Data System (ADS)
Liang, Yihao; Xu, Zhenli; Xing, Xiangjun
2015-08-01
Artifacts arise in the simulation of electrolytes using periodic boundary conditions (PBCs). We show that these artifacts originate from the periodic image charges and from the constraint of charge neutrality inside the simulation box, both of which are unphysical from the viewpoint of real systems. To cure these problems, we introduce a multi-scale Monte Carlo (MC) method, where ions inside a spherical cavity are simulated explicitly, while ions outside are treated implicitly using a continuum theory. Using the method of Debye charging, we explicitly derive the effective interactions between ions inside the cavity, arising due to the fluctuations of ions outside. We find that these effective interactions consist of two types: (1) a constant cavity potential due to the asymmetry of the electrolyte, and (2) a reaction potential that depends on the positions of all ions inside. Combining grand canonical Monte Carlo (GCMC) with a recently developed fast algorithm based on the image charge method, we perform a multi-scale MC simulation of symmetric electrolytes and compare it with other simulation methods, including the PBC + GCMC method, as well as large-scale MC simulation. We demonstrate that our multi-scale MC method is capable of capturing the correct physics of a large system using a small-scale simulation.
Improved criticality convergence via a modified Monte Carlo iteration method
Booth, Thomas E; Gubernatis, James E
2009-01-01
Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction being |k{sub 3}|/k{sub 1} instead of |k{sub 2}|/k{sub 1}. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must sum somehow to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this sometimes can be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has some stability problems. We also show that a simple method deals with this in an effective, but ad hoc manner.
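The convergence-rate claim is easiest to see in the deterministic setting. The sketch below is plain power iteration on a small matrix, not the Monte Carlo implementation of the report: it converges to the dominant eigenpair with the error contracting by |k2|/k1 per iteration, and subtracting out the second eigenfunction, as the report proposes, replaces that ratio by |k3|/k1.

```python
def power_iteration(a, n_iters=200):
    """Plain power iteration: repeated application of the matrix drives an
    arbitrary start vector toward the dominant eigenvector, with the error
    shrinking by |k2|/k1 at each step."""
    v = [1.0] + [0.0] * (len(a) - 1)      # deliberately non-converged start
    k = 0.0
    for _ in range(n_iters):
        w = [sum(a_ij * v_j for a_ij, v_j in zip(row, v)) for row in a]
        k = max(abs(x) for x in w)        # eigenvalue estimate
        v = [x / k for x in w]            # renormalize
    return k, v
```

For the matrix [[2, 1], [1, 2]], with eigenvalues 3 and 1, the iterate approaches the eigenvector (1, 1) geometrically with ratio 1/3 per step, mirroring the |k2|/k1 convergence of an unmodified criticality calculation.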
Scoring methods for implicit Monte Carlo radiation transport
Edwards, A.L.
1981-01-01
Analytical and numerical tests were made of a number of possible methods for scoring the energy exchange between radiation and matter in the implicit Monte Carlo (IMC) radiation transport scheme of Fleck and Cummings. The interactions considered were effective absorption, elastic scattering, and Compton scattering. The scoring methods tested were limited to simple combinations of analogue, linear expected value, and exponential expected value scoring. Only two scoring methods were found that produced the same results as a pure analogue method. These are a combination of exponential expected value absorption and deposition and analogue Compton scattering of the particle, with either linear expected value Compton deposition or analogue Compton deposition. In both methods, the collision distance is based on the total scattering cross section.
Analysis of real-time networks with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Mauclair, C.; Durrieu, G.
2013-12-01
Communication networks in embedded systems are increasingly large and complex. A better understanding of the dynamics of these networks is necessary to use them optimally and at lower cost. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, these worst-case end-to-end delay (WEED) scenarios are rarely observed in practice or through simulations, because the situations that lead to them are rare. A novel approach based on Monte Carlo methods is suggested to study the effects of asynchrony on performance.
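The core idea can be sketched with a toy delay model (our illustration, not the authors' tool): sample per-hop delays with bounded jitter many times and compare the observed tail quantile with the analytic worst-case bound. The per-hop model (fixed latency plus uniform jitter) and all constants are assumptions for the sketch.

```python
# Monte Carlo estimate of end-to-end delay across several asynchronous hops,
# compared with the analytic worst-case (WEED-style) bound.
import random

random.seed(1)

HOPS = 5
BASE_DELAY_US = 10.0      # fixed per-hop latency (assumed)
MAX_JITTER_US = 40.0      # worst-case queuing jitter per hop (assumed)

def end_to_end_delay():
    return sum(BASE_DELAY_US + random.uniform(0.0, MAX_JITTER_US)
               for _ in range(HOPS))

samples = sorted(end_to_end_delay() for _ in range(100_000))
weed = HOPS * (BASE_DELAY_US + MAX_JITTER_US)    # analytic worst case
p999 = samples[int(0.999 * len(samples))]        # observed 99.9th percentile

print(f"worst case bound: {weed:.0f} us, observed p99.9: {p999:.1f} us")
```

Even the 99.9th percentile sits well below the worst-case bound, which is the gap the abstract is pointing at.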
Application of Monte Carlo methods in tomotherapy and radiation biophysics
NASA Astrophysics Data System (ADS)
Hsiao, Ya-Yun
Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for Tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The percentage depth dose and beam profiles computed using the source model agree to within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major section of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published
Design of an explosive detection system using Monte Carlo method.
Hernández-Adame, Pablo Luis; Medina-Castro, Diego; Rodriguez-Ibarra, Johanna Lizbeth; Salas-Luevano, Miguel Angel; Vega-Carrillo, Hector Rene
2016-11-01
Regardless of the motivation, terrorism is one of the most important risks to national security in many countries. Attacks with explosives are the most common method used by terrorists. Therefore several procedures to detect explosives are utilized; among these methods are the use of neutrons and photons. In this study an explosive detection system using a (241)AmBe neutron source was designed with the Monte Carlo method. In the design light water, paraffin, polyethylene, and graphite were used as moderators. The explosive RDX was used, and the gamma rays induced by neutron capture in the explosive were estimated using NaI(Tl) and HPGe detectors. When light water is used as the moderator and HPGe as the detector, the system has the best performance, allowing the explosive to be distinguished from urea. For the final design the ambient dose equivalent for neutrons and photons was estimated along the radial and axial axes.
Monte Carlo N-particle simulation of neutron-based sterilisation of anthrax contamination
Liu, B; Xu, J; Liu, T; Ouyang, X
2012-01-01
Objective To simulate the neutron-based sterilisation of anthrax contamination by Monte Carlo N-particle (MCNP) 4C code. Methods Neutrons are elementary particles that have no charge. They are 20 times more effective than electrons or γ-rays in killing anthrax spores on surfaces and inside closed containers. Neutrons emitted from a (252)Cf neutron source are in the 100 keV to 2 MeV energy range. A 2.5 MeV D–D neutron generator can create neutrons at up to 10(13) n s(-1) with current technology. All these enable an effective and low-cost method of killing anthrax spores. Results There is no effect on neutron energy deposition on the anthrax sample when using a reflector that is thicker than its saturation thickness. Among all three reflecting materials tested in the MCNP simulation, paraffin is the best because it has the thinnest saturation thickness and is easy to machine. The MCNP radiation dose and fluence simulation calculation also showed that the MCNP-simulated neutron fluence that is needed to kill the anthrax spores agrees with previous analytical estimations very well. Conclusion The MCNP simulation indicates that a 10 min neutron irradiation from a 0.5 g (252)Cf neutron source or a 1 min neutron irradiation from a 2.5 MeV D–D neutron generator may kill all anthrax spores in a sample. This is a promising result because a 2.5 MeV D–D neutron generator output >10(13) n s(-1) should be attainable in the near future. This indicates that we could use a D–D neutron generator to sterilise anthrax contamination within several seconds. PMID:22573293
The macro response Monte Carlo method for electron transport
Svatos, M M
1998-09-01
The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single-scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case single-scatter, calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel". A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV - 8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could be tested. Most
Methods for variance reduction in Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reducing the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
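One of the listed techniques, quasi-random sampling (here in its simplest stratified form), can be demonstrated on a toy integrand; the problem and numbers are illustrative, not from the paper. Stratification spreads samples evenly across the domain, so the estimator's variance falls far below that of plain Monte Carlo.

```python
# Plain vs. stratified Monte Carlo estimate of E[f(U)] for f(u) = u^2,
# U uniform on [0, 1). The exact answer is 1/3.
import random

random.seed(0)
N = 10_000

def f(u):
    return u * u

# Plain Monte Carlo: N independent uniform samples
plain = [f(random.random()) for _ in range(N)]

# Stratified: one random sample per equal-width stratum
strat = [f((i + random.random()) / N) for i in range(N)]

mean_plain = sum(plain) / N
mean_strat = sum(strat) / N
print(mean_plain, mean_strat)   # both near 1/3; the stratified one much closer
```

The same principle, applied to photon launch coordinates and interaction sampling, is what reduces noise in the dose maps described above.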
Multi-pass Monte Carlo simulation method in nuclear transmutations.
Mateescu, Liviu; Kadambi, N Prasad; Ravindra, Nuggehalli M
2016-12-01
Monte Carlo methods, in their direct brute-force simulation incarnation, give realistic results if the probabilities involved, be they geometrical or otherwise, remain constant for the duration of the simulation. However, there are physical setups where the evolution of the simulation represents a modification of the simulated system itself. Chief among such evolving simulated systems are the activation/transmutation setups. That is, the simulation starts with a given set of probabilities, which are determined by the geometry of the system, the components, and the microscopic interaction cross-sections. However, the relative weight of the components of the system changes along with the steps of the simulation. A natural remedy would be to adjust the probabilities after every step of the simulation. On the other hand, the physical system typically has a number of components of the order of Avogadro's number, usually 10(25) or 10(26) members. A simulation step changes the characteristics of just a few of these members; a probability will therefore shift by a quantity of the order of 1/10(25). Such a change cannot be accounted for within a simulation, because the simulation would then need at least 10(28) steps to register any significance. This is not feasible, of course. For our computing devices, a simulation of one million steps is comfortable, but a further order of magnitude becomes too big a stretch for the computing resources. We propose here a method of dealing with the changing probabilities that leads to increased precision. This method is intended as a fast approximating approach, and also as a simple introduction (for the benefit of students) to the highly branched subject of Monte Carlo simulations vis-à-vis nuclear reactors.
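A toy sketch of the batching compromise (our construction, not the paper's exact scheme): a target nuclide is depleted by capture, so the capture probability falls as the simulation proceeds. Refreshing the probability once per batch of histories, rather than after every history, tracks the analytic depletion closely at a fraction of the bookkeeping cost. All constants are assumptions for the sketch.

```python
# Transmutation toy model: capture probability frozen within each batch,
# refreshed from the current composition at batch boundaries.
import math
import random

random.seed(2)

N0 = 1_000_000          # initial target atoms (assumed)
SIGMA = 2e-7            # per-history capture chance per remaining atom (assumed)
HISTORIES = 1_000_000
BATCH = 1_000

remaining = N0
p = SIGMA * remaining
for h in range(HISTORIES):
    if h % BATCH == 0:
        p = SIGMA * remaining   # refresh once per batch, not every history
    if random.random() < p:
        remaining -= 1          # one target atom transmuted

analytic = N0 * math.exp(-SIGMA * HISTORIES)   # continuous-depletion solution
print(remaining, round(analytic))
```

The batched simulation stays within a fraction of a percent of the exponential-depletion solution, which is the spirit of the approximation the abstract proposes.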
Particle acceleration at shocks - A Monte Carlo method
NASA Technical Reports Server (NTRS)
Kirk, J. G.; Schneider, P.
1987-01-01
A Monte Carlo method is presented for the problem of acceleration of test particles at relativistic shocks. The particles are assumed to diffuse in pitch angle as a result of scattering off magnetic irregularities frozen into the fluid. Several tests are performed using the analytic results available for both relativistic and nonrelativistic shock speeds. The acceleration at relativistic shocks under the influence of radiation losses is investigated, including the effects of a momentum dependence in the diffusion coefficient. The results demonstrate the usefulness of the technique in those situations in which the diffusion approximation cannot be employed, such as when relativistic bulk motion is considered, when particles are permitted to escape at the boundaries, and when the effects of the finite length of the particle mean free path are important.
New Monte Carlo method for the self-avoiding walk
NASA Astrophysics Data System (ADS)
Berretti, Alberto; Sokal, Alan D.
1985-08-01
We introduce a new Monte Carlo algorithm for the self-avoiding walk (SAW), and show that it is particularly efficient in the critical region (long chains). We also introduce new and more efficient statistical techniques. We employ these methods to extract numerical estimates for the critical parameters of the SAW on the square lattice. We find μ = 2.63820 ± 0.00004 ± 0.00030, γ = 1.352 ± 0.006 ± 0.025, ν = 0.7590 ± 0.0062 ± 0.0042, where the first error bar represents systematic error due to corrections to scaling (subjective 95% confidence limits) and the second represents statistical error (classical 95% confidence limits). These results are based on SAWs of average length ≈ 166, using 340 hours of CPU time on a CDC Cyber 170-730. We compare our results to previous work and indicate some directions for future research.
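For readers unfamiliar with why specialized SAW algorithms are needed, a naive simple-sampling sketch (not the Berretti-Sokal algorithm of the abstract) shows the attrition problem: the fraction of ordinary random walks that happen to be self-avoiding decays rapidly with length, so long chains are essentially never produced this way.

```python
# Naive simple sampling of self-avoiding walks on the square lattice:
# grow a random walk and reject it as soon as it revisits a site.
import random

random.seed(3)
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def try_saw(n):
    """Grow an n-step walk; return True if it never revisits a site."""
    pos = (0, 0)
    visited = {pos}
    for _ in range(n):
        dx, dy = random.choice(STEPS)
        pos = (pos[0] + dx, pos[1] + dy)
        if pos in visited:
            return False
        visited.add(pos)
    return True

TRIALS = 20_000
frac = {}
for n in (5, 10, 20):
    hits = sum(try_saw(n) for _ in range(TRIALS))
    frac[n] = hits / TRIALS
    print(n, frac[n])
```

The surviving fraction drops by orders of magnitude between n = 5 and n = 20, which is why variable-length Monte Carlo schemes like the one in the abstract are so much more efficient for long chains.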
Parallel Performance Optimization of the Direct Simulation Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gao, Da; Zhang, Chonglin; Schwartzentruber, Thomas
2009-11-01
Although the direct simulation Monte Carlo (DSMC) particle method is more computationally intensive compared to continuum methods, it is accurate for conditions ranging from continuum to free-molecular, accurate in highly non-equilibrium flow regions, and holds potential for incorporating advanced molecular-based models for gas-phase and gas-surface interactions. As available computer resources continue their rapid growth, the DSMC method is continually being applied to increasingly complex flow problems. Although processor clock speed continues to increase, a trend of increasing multi-core-per-node parallel architectures is emerging. To effectively utilize such current and future parallel computing systems, a combined shared/distributed memory parallel implementation (using both Open Multi-Processing (OpenMP) and Message Passing Interface (MPI)) of the DSMC method is under development. The parallel implementation of a new state-of-the-art 3D DSMC code employing an embedded 3-level Cartesian mesh will be outlined. The presentation will focus on performance optimization strategies for DSMC, which includes, but is not limited to, modified algorithm designs, practical code-tuning techniques, and parallel performance optimization. Specifically, key issues important to the DSMC shared memory (OpenMP) parallel performance are identified as (1) granularity (2) load balancing (3) locality and (4) synchronization. Challenges and solutions associated with these issues as they pertain to the DSMC method will be discussed.
Cool walking: a new Markov chain Monte Carlo sampling method.
Brown, Scott; Head-Gordon, Teresa
2003-01-15
Effective relaxation processes for difficult systems like proteins or spin glasses require special simulation techniques that permit barrier crossing to ensure ergodic sampling. Numerous adaptations of the venerable Metropolis Monte Carlo (MMC) algorithm have been proposed to improve its sampling efficiency, including various hybrid Monte Carlo (HMC) schemes, and methods designed specifically for overcoming quasi-ergodicity problems such as Jump Walking (J-Walking), Smart Walking (S-Walking), Smart Darting, and Parallel Tempering. We present an alternative to these approaches that we call Cool Walking, or C-Walking. In C-Walking two Markov chains are propagated in tandem, one at a high (ergodic) temperature and the other at a low temperature. Nonlocal trial moves for the low-temperature walker are generated by first sampling from the high-temperature distribution, then performing a statistical quenching process on the sampled configuration to generate a C-Walking jump move. C-Walking needs only one high-temperature walker, satisfies detailed balance, and offers the important practical advantage that the high- and low-temperature walkers can be run in tandem with minimal degradation of sampling due to the presence of correlations. To make the C-Walking approach more suitable for real problems we decrease the required number of cooling steps by attempting to jump at intermediate temperatures during cooling. We further reduce the number of cooling steps by utilizing "windows" of states when jumping, which improves acceptance ratios and lowers the average number of cooling steps. We present C-Walking results with comparisons to J-Walking, S-Walking, Smart Darting, and Parallel Tempering on a one-dimensional rugged potential energy surface in which the exact normalized probability distribution is known. C-Walking shows superior sampling as judged by two ergodic measures.
Underwater Optical Wireless Channel Modeling Using Monte-Carlo Method
NASA Astrophysics Data System (ADS)
Saini, P. Sri; Prince, Shanthi
2011-10-01
At present, there is a lot of interest in the functioning of the marine environment. Unmanned or Autonomous Underwater Vehicles (UUVs or AUVs) are used in the exploration of underwater resources, pollution monitoring, disaster prevention, etc. Underwater, where radio waves do not propagate, acoustic communication is used. Underwater communication is, however, moving towards optical communication, which offers higher bandwidth than acoustic communication but a comparatively shorter range. Underwater Optical Wireless Communication (OWC) is mainly affected by the absorption and scattering of the optical signal. In coastal waters, both inherent and apparent optical properties (IOPs and AOPs) are influenced by a wide array of physical, biological and chemical processes leading to optical variability. Scattering has two effects: attenuation of the signal and inter-symbol interference (ISI). However, ISI is ignored in the present paper. Therefore, in order to have an efficient underwater OWC link it is necessary to model the channel efficiently. In this paper, the underwater optical channel is modeled using the Monte-Carlo method, which provides the most general and most flexible technique for numerically solving the equations of radiative transfer. The attenuation coefficient of the light signal is studied as a function of the absorption (a) and scattering (b) coefficients. It is observed that for pure sea water and low-chlorophyll conditions the blue wavelength is absorbed least, whereas in chlorophyll-rich environments the red wavelength is absorbed less than the blue and green wavelengths.
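The channel-modeling idea can be sketched in a few lines (a hedged toy, not the paper's model): launch photons into water with absorption coefficient a and scattering coefficient b, sample exponential free paths against the total beam attenuation coefficient c = a + b, and count photons that reach the receiver without interacting. The coefficient values below are placeholders, not measured IOPs.

```python
# Monte Carlo estimate of unscattered transmission over an underwater link,
# checked against the Beer-Lambert prediction exp(-c * L).
import math
import random

random.seed(4)

a = 0.05    # absorption coefficient, 1/m (assumed)
b = 0.15    # scattering coefficient, 1/m (assumed)
c = a + b   # beam attenuation coefficient

LINK = 10.0   # receiver distance along the beam axis, m
N = 100_000

received = 0
for _ in range(N):
    # Distance to the first interaction, sampled from an exponential with rate c
    if random.expovariate(c) > LINK:
        received += 1   # photon reaches the receiver uninteracted

beer_lambert = math.exp(-c * LINK)
print(received / N, round(beer_lambert, 4))
```

A full channel model would additionally follow scattered photons through a phase function, which is where the Monte Carlo approach earns its flexibility over the closed-form attenuation law.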
Computation of electron diode characteristics by monte carlo method including effect of collisions.
NASA Technical Reports Server (NTRS)
Goldstein, C. M.
1964-01-01
Consistent field Monte Carlo method calculation for the collision effect on electron-ion diode characteristics and for the hard-sphere electron-neutral collision effect for monoenergetic thermionic emission
Markov chain Monte Carlo methods: an introductory example
NASA Astrophysics Data System (ADS)
Klauenberg, Katy; Elster, Clemens
2016-02-01
When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method: powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms may currently be hindered by the difficulty of assessing the convergence of MCMC output and thus of assuring the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
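The "few lines of software code" the abstract mentions can indeed be written compactly. A minimal random-walk Metropolis-Hastings sampler for a standard normal target (our illustrative choice of target and step size, not the paper's metrology example) looks like this:

```python
# Random-walk Metropolis-Hastings targeting N(0, 1).
import math
import random

random.seed(5)

def log_target(x):
    return -0.5 * x * x   # log of N(0,1), up to an additive constant

x = 0.0
samples = []
for _ in range(50_000):
    proposal = x + random.uniform(-1.0, 1.0)   # symmetric proposal
    # Accept with probability min(1, pi(proposal) / pi(x))
    if math.log(random.random()) < log_target(proposal) - log_target(x):
        x = proposal
    samples.append(x)   # on rejection, the current state is repeated

burned = samples[5_000:]                       # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
print(round(mean, 2), round(var, 2))
```

The sample mean and variance approach 0 and 1 respectively; convergence diagnostics of the kind the abstract discusses are what tell you when the chain has run long enough.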
A numerical analysis method for evaluating rod lenses using the Monte Carlo method.
Yoshida, Shuhei; Horiuchi, Shuma; Ushiyama, Zenta; Yamamoto, Manabu
2010-12-20
We propose a numerical analysis method for evaluating GRIN lenses based on the Monte Carlo method. Measurements of the modulation transfer function (MTF) of a GRIN lens made with this method closely match those made by conventional methods. Experimentally, the MTF is measured using a square-wave chart and is then calculated from the distribution of output intensity on the chart. In contrast, the general computational method evaluates the MTF from a spot diagram produced by an incident point light source; however, its results differ greatly from those of experiments. We therefore developed an evaluation method that mirrors the experimental system, based on the Monte Carlo method, and verified that it matches the experimental results more closely than the conventional method.
A Monte Carlo Method for Multi-Objective Correlated Geometric Optimization
2014-05-01
...requiring computationally intensive algorithms for optimization. This report presents a method developed for solving such systems using a Monte Carlo ... performs a Monte Carlo optimization to provide geospatial intelligence on entity placement using the OpenCL framework. The solutions for optimal
MARKOV CHAIN MONTE CARLO POSTERIOR SAMPLING WITH THE HAMILTONIAN METHOD
K. HANSON
2001-02-01
The Markov chain Monte Carlo technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique, in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy φ, where φ is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of φ and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of φ, is proposed to measure the convergence of the MCMC sequence.
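A compact sketch of the Hamiltonian technique described above, for a one-dimensional standard normal target: φ(x) = x²/2 (minus the log pdf), kinetic energy p²/2, leapfrog integration along trajectories of approximately constant H, and a Metropolis correction for the integration error. Step size and trajectory length are illustrative choices, not values from the paper.

```python
# Hamiltonian Monte Carlo for a 1-D standard normal target.
import math
import random

random.seed(6)

def grad_phi(x):
    return x            # gradient of phi(x) = x^2 / 2

def leapfrog(x, p, eps, n_steps):
    """Integrate Hamilton's equations with the leapfrog scheme."""
    p -= 0.5 * eps * grad_phi(x)
    for _ in range(n_steps - 1):
        x += eps * p
        p -= eps * grad_phi(x)
    x += eps * p
    p -= 0.5 * eps * grad_phi(x)
    return x, p

x = 0.0
samples = []
for _ in range(20_000):
    p = random.gauss(0.0, 1.0)               # fresh momentum each iteration
    h0 = 0.5 * x * x + 0.5 * p * p           # H at the start of the trajectory
    x_new, p_new = leapfrog(x, p, eps=0.2, n_steps=10)
    h1 = 0.5 * x_new * x_new + 0.5 * p_new * p_new
    if math.log(random.random()) < h0 - h1:  # Metropolis correction
        x = x_new
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))
```

Because the leapfrog trajectory moves far across the parameter space before the accept/reject decision, successive samples are far less correlated than those of a random-walk Metropolis chain, which is the advantage the abstract quantifies.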
Exact special twist method for quantum Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Dagrada, Mario; Karakuzu, Seher; Vildosola, Verónica Laura; Casula, Michele; Sorella, Sandro
2016-12-01
We present a systematic investigation of the special twist method introduced by Rajagopal et al. [Phys. Rev. B 51, 10591 (1995), 10.1103/PhysRevB.51.10591] for reducing finite-size effects in correlated calculations of periodic extended systems with Coulomb interactions and Fermi statistics. We propose a procedure for finding special twist values which, at variance with previous applications of this method, reproduce the energy of the mean-field infinite-size limit solution within an adjustable (arbitrarily small) numerical error. This choice of the special twist is shown to be the most accurate single-twist solution for curing one-body finite-size effects in correlated calculations. For these reasons we dubbed our procedure "exact special twist" (EST). EST only needs a fully converged independent-particles or mean-field calculation within the primitive cell and a simple fit to find the special twist along a specific direction in the Brillouin zone. We first assess the performances of EST in a simple correlated model such as the three-dimensional electron gas. Afterwards, we test its efficiency within ab initio quantum Monte Carlo simulations of metallic elements of increasing complexity. We show that EST displays an overall good performance in reducing finite-size errors comparable to the widely used twist average technique but at a much lower computational cost since it involves the evaluation of just one wave function. We also demonstrate that the EST method shows similar performances in the calculation of correlation functions, such as the ionic forces for structural relaxation and the pair radial distribution function in liquid hydrogen. Our conclusions point to the usefulness of EST for correlated supercell calculations; our method will be particularly relevant when the physical problem under consideration requires large periodic cells.
Investigation of radiosurgical beam profiles using Monte Carlo method
Perucha, Mara; Sa, Francisco; Leal, Antonio; Rinco, Magnolia; Arra, Rafael; Nun, Luis; Carrasco, Ester
2003-03-31
An accurate determination of the penumbra of radiosurgery profiles is critical to avoid complications in organs at risk adjacent to the tumor. Conventional detectors may not be accurate enough for small field sizes. The Monte Carlo (MC) method was used to study the behavior of radiosurgical beam profiles in the penumbral region; the BEAM code was used for these calculations. Two collimators (2.2 and 0.3 cm in diameter) were simulated and compared with empirical measurements obtained with the detectors normally used. The differences found between film dosimetry and MC revealed a systematic error in the reading procedure. In the process, a water phantom was simulated with a layer of the same composition as that of the film. MC calculations with the film differed by a small amount from those obtained with the water phantom alone. In conclusion, MC may be used as a verification tool to support dosimetric procedures with conventional detectors, especially for very small beams such as those used in radiosurgery. Furthermore, it has been shown that the film energy dependence is negligible for the fields used in radiosurgery.
Medical Imaging Image Quality Assessment with Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed by using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on aluminum (Al) foil substrates, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed by using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper) parameter values. MTF values were found to increase up to the 12th iteration, whereas they remain almost constant thereafter. The MTF improves by using lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.
Quantum Monte Carlo methods and lithium cluster properties
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance-sampling electron-electron correlation functions by using density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made, and the bias is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.
2012-08-01
Inverse Problem for Electromagnetic Propagation in a Dielectric Medium Using Markov Chain Monte Carlo Method (Preprint). AFRL-RX-WP-TP-2012-0397. ... a stochastic inverse methodology arising in electromagnetic imaging. Nondestructive testing using guided microwaves covers a wide range of ...
Latent uncertainties of the precalculated track Monte Carlo method
Renaud, Marc-André; Seuntjens, Jan; Roberge, David
2015-01-15
Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank has been missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated with the corresponding general-purpose MC codes under the same conditions. A latent uncertainty metric was defined, and the analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of D_max. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of
NASA Astrophysics Data System (ADS)
Lodwick, Camille J.
This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescence (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and the shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered the uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001). Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate the lead content of a human leg up
BACKWARD AND FORWARD MONTE CARLO METHOD IN POLARIZED RADIATIVE TRANSFER
Yong, Huang; Guo-Dong, Shi; Ke-Yong, Zhu
2016-03-20
In general, the Stokes vector cannot be calculated in reverse in vector radiative transfer. This paper presents a novel backward and forward Monte Carlo simulation strategy to study vector radiative transfer in a participating medium. A backward Monte Carlo process is used to calculate the ray trajectory and the endpoint of the ray. The Stokes vector is then obtained by a forward Monte Carlo process. A one-dimensional graded-index semi-transparent medium is taken as the physical model, and thermal emission with consideration of polarization is studied in the medium. The solution process for non-scattering, isotropically scattering, and anisotropically scattering media, respectively, is discussed. The influence of the optical thickness and albedo on the Stokes vector is studied. The results show that the U and V components of the apparent Stokes vector are very small, but the Q component of the apparent Stokes vector is relatively larger and cannot be ignored.
Kinetic Monte Carlo method applied to nucleic acid hairpin folding.
Sauerwine, Ben; Widom, Michael
2011-12-01
Kinetic Monte Carlo on coarse-grained systems, such as nucleic acid secondary structure, is advantageous for being able to access behavior at long time scales, even minutes or hours. Transition rates between coarse-grained states depend upon intermediate barriers, which are not directly simulated. We propose an Arrhenius rate model and an intermediate energy model that incorporates the effects of the barrier between simulated states without enlarging the state space itself. Applying our Arrhenius rate model to DNA hairpin folding, we demonstrate improved agreement with experiment compared to the usual kinetic Monte Carlo model. Further improvement results from including rigidity of single-stranded stacking.
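A rejection-free kinetic Monte Carlo loop of the kind described, with Arrhenius rates standing in for the unsimulated intermediate barriers, can be sketched as follows; barrier heights, prefactor and kT are illustrative placeholders, not fitted nucleic-acid parameters:

```python
import math
import random

def arrhenius(barrier, prefactor=1.0, kT=0.62):
    # Arrhenius rate through an intermediate barrier that is not itself simulated
    return prefactor * math.exp(-barrier / kT)

def kmc(rates, state, t_end, seed=2):
    """Rejection-free (Gillespie) kinetic Monte Carlo over coarse-grained states."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, state)]
    while t < t_end:
        moves = rates[state]                     # list of (next_state, rate)
        total = sum(r for _, r in moves)
        t += -math.log(rng.random()) / total     # exponential waiting time
        u = rng.random() * total                 # rate-weighted choice of move
        for nxt, r in moves:
            u -= r
            if u <= 0.0:
                state = nxt
                break
        traj.append((t, state))
    return traj

# Two-state hairpin toy: opening barrier higher than closing (illustrative numbers)
rates = {"closed": [("open", arrhenius(5.0))],
         "open":   [("closed", arrhenius(2.0))]}
traj = kmc(rates, "closed", t_end=1e5)
```

The coarse-grained state space stays small; the barrier model only reshapes the transition rates, which is the essential point of the Arrhenius rate model above.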
Uncertainty analysis for fluorescence tomography with Monte Carlo method
NASA Astrophysics Data System (ADS)
Reinbacher-Köstinger, Alice; Freiberger, Manuel; Scharfetter, Hermann
2011-07-01
Fluorescence tomography seeks to image an inaccessible fluorophore distribution inside an object, such as a small animal, by injecting light at the boundary and measuring the light emitted by the fluorophore. Optical parameters (e.g. the conversion efficiency or the fluorescence lifetime) of certain fluorophores depend on physiologically interesting quantities such as the pH value or the oxygen concentration in the tissue, which allows functional rather than just anatomical imaging. To reconstruct the concentration and the lifetime from the boundary measurements, a nonlinear inverse problem has to be solved. It is, however, difficult to estimate the uncertainty of the reconstructed parameters in the case of iterative algorithms and a large number of degrees of freedom. Uncertainties in fluorescence tomography applications arise from model inaccuracies, discretization errors, data noise and a priori errors. Thus, a Markov chain Monte Carlo (MCMC) method was used to consider all these uncertainty factors, exploiting the Bayesian formulation of conditional probabilities. A 2-D simulation experiment was carried out for a circular object with two inclusions. Both inclusions had a 2-D Gaussian distribution of the concentration and a constant lifetime inside a representative area of the inclusion. Forward calculations were done with the diffusion approximation of Boltzmann's transport equation. The reconstruction results show that the percent estimation error of the lifetime parameter is lower by a factor of approximately 10 than that of the concentration. This finding suggests that lifetime imaging may provide more accurate information than concentration imaging alone. The results must be interpreted with caution, however, because the chosen simulation setup represents a special case, and a more detailed analysis remains to be done in the future to clarify whether the findings can be generalized.
NASA Astrophysics Data System (ADS)
Naraghi, M. H. N.; Chung, B. T. F.
1982-06-01
A multiple step fixed random walk Monte Carlo method for solving heat conduction in solids with distributed internal heat sources is developed. In this method, the probability that a walker reaches a point a few steps away is calculated analytically and is stored in the computer. Instead of moving to the immediate neighboring point the walker is allowed to jump several steps further. The present multiple step random walk technique can be applied to both conventional Monte Carlo and the Exodus methods. Numerical results indicate that the present method compares well with finite difference solutions while the computation speed is much faster than that of single step Exodus and conventional Monte Carlo methods.
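For contrast with the multiple-step scheme, a conventional single-step fixed random walk for steady heat conduction (Laplace's equation with Dirichlet boundaries, no internal sources) can be sketched as below; the grid size and walker count are illustrative:

```python
import random

def walk_temperature(x, y, n, boundary, n_walkers=2000, seed=3):
    """Single-step fixed random walk estimate of the steady-state temperature
    at interior node (x, y) of an n x n grid: each walker steps to a random
    neighbor until it hits the boundary, then scores the boundary value."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        i, j = x, y
        while 0 < i < n and 0 < j < n:
            di, dj = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            i, j = i + di, j + dj
        total += boundary(i, j)
    return total / n_walkers

# Square plate: top edge (j == 10) held at 100, the other three edges at 0
top_hot = lambda i, j: 100.0 if j == 10 else 0.0
est = walk_temperature(5, 5, 10, top_hot)
print(est)  # ~25: the center of a square sees each edge with probability 1/4
```

The multiple-step variant of the abstract replaces the inner while-loop's one-node hops with precomputed jumps several nodes long, which is where the speedup comes from.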
Bolewski, A; Ciechanowski, M; Dydejczyk, A; Kreft, A
2008-04-01
The effect of the detector characteristics on the performance of an isotopic neutron source device for measuring thermal neutron absorption cross section (Sigma) has been examined by means of Monte Carlo simulations. Three specific experimental arrangements, alternately with BF(3) counters and (3)He counters of the same sizes, have been modelled using the MCNP-4C code. Results of Monte Carlo calculations show that devices with BF(3) counters are more sensitive to Sigma, but high-pressure (3)He counters offer faster assays.
MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD
A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...
Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming
2014-12-29
The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous medium.
Diamantis, Nikolaos G; Manousakis, Efstratios
2013-10-01
The diagrammatic Monte Carlo (DiagMC) method is a numerical technique which samples the entire diagrammatic series of the Green's function in quantum many-body systems. In this work, we incorporate the flat histogram principle in the diagrammatic Monte Carlo method, and we term the improved version the "flat histogram diagrammatic Monte Carlo" method. We demonstrate the superiority of this method over the standard DiagMC in extracting the long-imaginary-time behavior of the Green's function, without incorporating any a priori knowledge about this function, by applying the technique to the polaron problem.
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Kasesaz, Y; Khalafi, H; Rahmani, F
2013-12-01
Optimization of the Beam Shaping Assembly (BSA) has been performed using the MCNP4C Monte Carlo code to shape the 2.45 MeV neutrons produced in a D-D neutron generator. The optimal BSA design was chosen by considering in-air figures of merit (FOM); it consists of 70 cm of Fluental as moderator, 30 cm of Pb as reflector, 2 mm of (6)Li as a thermal neutron filter and 2 mm of Pb as a gamma filter. The neutron beam can be evaluated by in-phantom parameters, from which the therapeutic gain can be derived. Direct evaluation of both sets of FOMs (in-air and in-phantom) is very time consuming. In this paper a Response Matrix (RM) method is suggested to reduce the computing time. The method is based on considering the neutron spectrum at the beam exit and calculating the contribution of the various dose components in the phantom to build the Response Matrix. Results show good agreement between direct calculation and the RM method.
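The RM idea, precomputing each dose component's contribution per unit fluence in each energy group of the beam-exit spectrum and then replacing a transport run by a matrix-vector product, reduces to a few lines; the matrix entries and group structure below are invented for illustration:

```python
def apply_response_matrix(R, spectrum):
    """In-phantom dose components from a beam-exit group spectrum:
    dose[i] = sum_j R[i][j] * phi[j]."""
    return [sum(rij * phi_j for rij, phi_j in zip(row, spectrum)) for row in R]

# Hypothetical 2 dose components x 3 energy groups (arbitrary units)
R = [[0.10, 0.40, 0.05],   # e.g. a thermal-sensitive dose component per unit fluence
     [0.02, 0.08, 0.30]]   # e.g. a fast/gamma-like dose component per unit fluence
doses = apply_response_matrix(R, [1.0, 2.0, 0.5])
print(doses)  # approximately [0.925, 0.33]
```

Each candidate BSA then costs only one spectrum calculation at the beam exit instead of a full in-phantom transport simulation.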
Hybrid Monte Carlo-Deterministic Methods for Nuclear Reactor-Related Criticality Calculations
Edward W. Larson
2004-02-17
The overall goal of this project is to develop, implement, and test new Hybrid Monte Carlo-deterministic (or simply Hybrid) methods for the more efficient and more accurate calculation of nuclear engineering criticality problems. These new methods will make use of two (philosophically and practically) very different techniques - the Monte Carlo technique, and the deterministic technique - which have been developed completely independently during the past 50 years. The concept of this proposal is to merge these two approaches and develop fundamentally new computational techniques that enhance the strengths of the individual Monte Carlo and deterministic approaches, while minimizing their weaknesses.
NASA Technical Reports Server (NTRS)
Firstenberg, H.
1971-01-01
The statistics of the Monte Carlo method are considered relative to the interpretation of the NUGAM2 and NUGAM3 computer code results. A numerical experiment using the NUGAM2 code is presented and the results are statistically interpreted.
Markov chain Monte Carlo method without detailed balance.
Suwa, Hidemaro; Todo, Synge
2010-09-17
We present a specific algorithm that generally satisfies the balance condition without imposing detailed balance in Markov chain Monte Carlo. In our algorithm, the average rejection rate is minimized, and even reduced to zero in many relevant cases. The absence of detailed balance also introduces a net stochastic flow in configuration space, which further boosts convergence. We demonstrate that the autocorrelation time of the Potts model becomes more than 6 times shorter than that of the conventional Metropolis algorithm. Based on the same concept, a bounce-free worm algorithm for generic quantum spin models is formulated as well.
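For reference, the conventional single-site Metropolis update for the Potts model, the baseline against which the rejection-minimizing algorithm is compared, can be sketched as follows (lattice size, q and beta are illustrative):

```python
import math
import random

def metropolis_potts(L=8, q=4, beta=1.0, sweeps=200, seed=4):
    """Conventional single-site Metropolis for the q-state Potts model,
    H = -J * sum over neighbor pairs of delta(s_i, s_j), with J = 1."""
    rng = random.Random(seed)
    spins = [[rng.randrange(q) for _ in range(L)] for _ in range(L)]

    def equal_neighbors(i, j, s):
        # neighbors in the same state as s (periodic boundaries)
        return sum(s == spins[(i + di) % L][(j + dj) % L]
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            old, new = spins[i][j], rng.randrange(q)
            dE = equal_neighbors(i, j, old) - equal_neighbors(i, j, new)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = new   # accept with probability min(1, exp(-beta*dE))
    return spins

lattice = metropolis_potts()
```

The Metropolis acceptance rule enforces detailed balance; the algorithm of the abstract satisfies only the weaker global balance condition, which is what allows its rejection rate to drop toward zero.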
APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula
Hwang, M.; Bae, S.; Chung, B. D.
2012-07-01
An analysis of the uncertainty quantification for the PWR LBLOCA by Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined from the PIRT results of the BEMUSE project. The Monte Carlo method shows that the 95th-percentile PCT value can be obtained reliably with a 95% confidence level using Wilks' formula. The extra margin of the Wilks' formula result over the true 95th-percentile PCT from the Monte Carlo method was rather large: even using the 3rd-order formula, the value calculated with Wilks' formula is nearly 100 K above the true value. It is shown that, with ever-increasing computational capability, the Monte Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame. (authors)
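The first-order Wilks result invoked above (the maximum of N runs bounds the 95th percentile with 95% confidence once N >= 59) can be checked by brute force; the "PCT distribution" below is a stand-in normal, not plant data:

```python
import math
import random

def wilks_n(gamma=0.95, beta=0.95):
    # Smallest N with P(max of N samples >= gamma-quantile) >= beta (1st order):
    # solve 1 - gamma**N >= beta for N
    return math.ceil(math.log(1.0 - beta) / math.log(gamma))

print(wilks_n())  # 59 runs for a one-sided 95%/95% tolerance limit

# Empirical coverage check against a hypothetical PCT distribution
# (normal, mean 1200 K, sd 50 K; purely illustrative numbers)
rng = random.Random(5)
true_p95 = 1200.0 + 50.0 * 1.6449            # ~95th percentile (z = 1.6449)
trials = [max(rng.gauss(1200.0, 50.0) for _ in range(59)) for _ in range(2000)]
cover = sum(t >= true_p95 for t in trials) / len(trials)
print(cover)  # close to the nominal 0.95 confidence level
```

The conservatism the abstract reports is visible here too: the max of 59 runs typically sits well above the true 95th percentile, which is exactly the "extra margin" a direct Monte Carlo percentile estimate avoids.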
Application of Monte Carlo Methods in Molecular Targeted Radionuclide Therapy
Hartmann Siantar, C; Descalle, M-A; DeNardo, G L; Nigg, D W
2002-02-19
Targeted radionuclide therapy promises to expand the role of radiation beyond the treatment of localized tumors. This novel form of therapy targets metastatic cancers by combining radioactive isotopes with tumor-seeking molecules such as monoclonal antibodies and custom-designed synthetic agents. Ultimately, like conventional radiotherapy, the effectiveness of targeted radionuclide therapy is limited by the maximum dose that can be given to a critical normal tissue, such as bone marrow, kidneys, and lungs. Because radionuclide therapy relies on biological delivery of radiation, its optimization and characterization are necessarily different than for conventional radiation therapy. We have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA treatment planning system. This system calculates patient-specific radiation dose estimates using a set of computed tomography scans to describe the 3D patient anatomy, combined with 2D (planar image) and 3D (SPECT, or single photon emission computed tomography) imaging to describe the time-dependent radiation source. The accuracy of such a dose calculation is limited primarily by the accuracy of the initial radiation source distribution, overlaid on the patient's anatomy. This presentation provides an overview of MINERVA functionality for molecular targeted radiation therapy, and describes early validation and implementation results of Monte Carlo simulations.
Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.
Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari
2014-01-01
Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of irregularly shaped food products based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost, and some of the volume measurement methods based on it have low accuracy. Another approach measures the volume of objects using the Monte Carlo method, which performs volume measurement using random points: it only requires information on whether random points fall inside or outside an object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
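The random-point principle is compact enough to sketch directly; here a unit sphere stands in for an irregularly shaped product, with hypothetical sampling parameters:

```python
import random

def mc_volume(inside, bounds, n=200_000, seed=6):
    """Monte Carlo volume: (fraction of random points inside) x (box volume).
    `inside` only needs to answer in/out for a point, no 3D reconstruction."""
    rng = random.Random(seed)
    (x0, x1), (y0, y1), (z0, z1) = bounds
    box = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = sum(inside(rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1))
               for _ in range(n))
    return box * hits / n

# Unit sphere standing in for an irregular product (true volume 4*pi/3 ~ 4.19)
vol = mc_volume(lambda x, y, z: x * x + y * y + z * z <= 1.0,
                ((-1, 1), (-1, 1), (-1, 1)))
print(vol)  # within about 1% of 4.19 at this sample size
```

In the paper's setting the in/out test is answered by projecting each random point into the five binary camera images; the integration step itself is unchanged.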
Enhanced physics design with hexagonal repeated structure tools using Monte Carlo methods
Carter, L L; Lan, J S; Schwarz, R A
1991-01-01
This report discusses proposed new missions for the Fast Flux Test Facility (FFTF) reactor which involve the use of target assemblies containing local hydrogenous moderation within this otherwise fast reactor. Parametric physics design studies with Monte Carlo methods are routinely utilized to analyze the rapidly changing neutron spectrum. An extensive utilization of the hexagonal lattice-within-lattice capabilities of the Monte Carlo Neutron Photon (MCNP) continuous-energy Monte Carlo computer code is applied here to solving such problems. Simpler examples that use the lattice capability to describe fuel pins within a "brute force" description of the hexagonal assemblies are also given.
A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT
Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Loeffler, Frank; Schnetter, Erik
2012-08-20
Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.
Bold diagrammatic Monte Carlo method applied to fermionized frustrated spins.
Kulagin, S A; Prokof'ev, N; Starykh, O A; Svistunov, B; Varney, C N
2013-02-15
We demonstrate, by considering the triangular lattice spin-1/2 Heisenberg model, that Monte Carlo sampling of skeleton Feynman diagrams within the fermionization framework offers a universal first-principles tool for strongly correlated lattice quantum systems. We observe the fermionic sign blessing--cancellation of higher order diagrams leading to a finite convergence radius of the series. We calculate the magnetic susceptibility of the triangular-lattice quantum antiferromagnet in the correlated paramagnet regime and reveal a surprisingly accurate microscopic correspondence with its classical counterpart at all accessible temperatures. The extrapolation of the observed relation to zero temperature suggests the absence of the magnetic order in the ground state. We critically examine the implications of this unusual scenario.
Bakshi, A K; Chatterjee, S; Palani Selvam, T; Dhabekar, B S
2010-07-01
In the present study, the energy dependence of the response of some popular thermoluminescent dosemeters (TLDs), namely LiF:Mg,Ti, LiF:Mg,Cu,P and CaSO(4):Dy, to synchrotron radiation in the energy range of 10-34 keV has been investigated. The study utilised experimental, Monte Carlo and analytical methods. The Monte Carlo calculations were based on the EGSnrc and FLUKA codes. The calculated energy responses of all the TLDs using the EGSnrc and FLUKA codes show excellent agreement with each other. The analytically calculated response shows good agreement with the Monte Carlo-calculated response in the low-energy region. In the case of CaSO(4):Dy, the Monte Carlo-calculated energy response is smaller by a factor of 3 at all energies in comparison with the experimental response when polytetrafluoroethylene (PTFE) (75% by wt) is included in the Monte Carlo calculations. When PTFE is ignored in the Monte Carlo calculations, the difference between the calculated and experimental responses decreases (both responses are comparable >25 keV). For the LiF-based TLDs, the Monte Carlo-based response shows reasonable agreement with the experimental response.
Time-step limits for a Monte Carlo Compton-scattering method
Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B
2009-01-01
We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high-energy-density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I
2014-06-15
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the
Markov chain Monte Carlo methods for state-space models with point process observations.
Yuan, Ke; Girolami, Mark; Niranjan, Mahesan
2012-06-01
This letter considers how a number of modern Markov chain Monte Carlo (MCMC) methods can be applied for parameter estimation and inference in state-space models with point process observations. We quantified the efficiencies of these MCMC methods on synthetic data, and our results suggest that the Riemannian manifold Hamiltonian Monte Carlo method offers the best performance. We further compared such a method with a previously tested variational Bayes method on two experimental data sets. Results indicate similar performance on the large data sets and superior performance on small ones. The work offers an extensive suite of MCMC algorithms evaluated on an important class of models for physiological signal analysis.
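As a minimal illustration of MCMC parameter estimation for count data (far simpler than the state-space models evaluated in the letter), a random-walk Metropolis sampler for a single Poisson rate might look like this; the counts are synthetic:

```python
import math
import random

def log_post(lam, counts):
    # Log-posterior for Poisson counts with a flat prior on lam > 0 (toy model)
    if lam <= 0.0:
        return float("-inf")
    return sum(k * math.log(lam) - lam for k in counts)

def rw_metropolis(counts, n=8000, step=0.3, seed=8):
    """Random-walk Metropolis over the single rate parameter lam."""
    rng = random.Random(seed)
    lam, lp = 1.0, log_post(1.0, counts)
    chain = []
    for _ in range(n):
        prop = lam + rng.gauss(0.0, step)
        lp_prop = log_post(prop, counts)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject
            lam, lp = prop, lp_prop
        chain.append(lam)
    return chain

counts = [3, 5, 4, 6, 2, 4, 5, 3]          # synthetic event counts per time bin
chain = rw_metropolis(counts)
post_mean = sum(chain[2000:]) / len(chain[2000:])
print(post_mean)  # near the analytic posterior mean (sum(counts)+1)/8 = 4.125
```

Gradient-based samplers such as Hamiltonian Monte Carlo, which the letter finds superior, replace the blind Gaussian proposal here with proposals informed by the log-posterior gradient.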
Equivalence of four Monte Carlo methods for photon migration in turbid media.
Sassaroli, Angelo; Martelli, Fabrizio
2012-10-01
In the field of photon migration in turbid media, different Monte Carlo methods are usually employed to solve the radiative transfer equation. We consider four different Monte Carlo methods, widely used in the field of tissue optics, that are based on four different ways to build photons' trajectories. We provide both theoretical arguments and numerical results showing the statistical equivalence of the four methods. In the numerical results we compare the temporal point spread functions calculated by the four methods for a wide range of the optical properties in the slab and semi-infinite medium geometry. The convergence of the methods is also briefly discussed.
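All four trajectory-building variants share the same underlying free-path statistics; a quick check of the Beer-Lambert step-length sampling they rely on, with an illustrative attenuation coefficient:

```python
import math
import random

def free_path(mu_t, rng):
    # Beer-Lambert step-length sampling: p(s) = mu_t * exp(-mu_t * s)
    return -math.log(rng.random()) / mu_t

rng = random.Random(7)
mu_t = 10.0                      # total attenuation coefficient (1/mm), illustrative
n = 100_000
mean_path = sum(free_path(mu_t, rng) for _ in range(n)) / n
print(mean_path)  # approximately 1/mu_t = 0.1 mm
```

The four methods differ in how absorption and scattering are apportioned along these steps (e.g. photon-weight attenuation versus survival roulette), not in the exponential path-length law itself.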
NASA Astrophysics Data System (ADS)
Li, Shengfu; Chen, Guanghua; Wang, Rongbo; Luo, Zhengxiong; Peng, Qixian
2016-12-01
This paper proposes a Monte Carlo (MC) based method for estimating the angular distribution of multiply scattered photons for underwater imaging, targeting turbid waters. Our method applies typical Monte Carlo ideas to the present problem by combining all the points on a spherical surface. The proposed method is validated against the numerical solution of the radiative transfer equation (RTE). Simulation results based on typical optical parameters of turbid waters show that the proposed method is effective in terms of computational speed and sensitivity.
NASA Astrophysics Data System (ADS)
Jacqmin, Dustin J.
Monte Carlo modeling of radiation transport is considered the gold standard for radiotherapy dose calculations. However, highly accurate Monte Carlo calculations are very time consuming and the use of Monte Carlo dose calculation methods is often not practical in clinical settings. With this in mind, a variation on the Monte Carlo method called macro Monte Carlo (MMC) was developed in the 1990s for electron beam radiotherapy dose calculations. To accelerate the simulation process, the electron MMC method used larger step sizes in regions of the simulation geometry where the size of the region was large relative to the size of a typical Monte Carlo step. These large steps were pre-computed using conventional Monte Carlo simulations and stored in a database featuring many step sizes and materials. The database was loaded into memory by a custom electron MMC code and used to transport electrons quickly through a heterogeneous absorbing geometry. The purpose of this thesis work was to apply the same techniques to proton radiotherapy dose calculation and light propagation Monte Carlo simulations. First, the MMC method was implemented for proton radiotherapy dose calculations. A database composed of pre-computed steps was created using MCNPX for many materials and beam energies. The database was used by a custom proton MMC code called PMMC to transport protons through a heterogeneous absorbing geometry. The PMMC code was tested against MCNPX for a number of different proton beam energies and geometries and proved to be accurate and much more efficient. The MMC method was also implemented for light propagation Monte Carlo simulations. The widely accepted Monte Carlo for multilayered media (MCML) code was modified to incorporate the MMC method. The original MCML uses basic scattering and absorption physics to transport optical photons through multilayered geometries. The MMC version of MCML was tested against the original MCML code using a number of different geometries and
Application of advanced Monte Carlo Methods in numerical dosimetry.
Reichelt, U; Henniger, J; Lange, C
2006-01-01
Many tasks in different sectors of dosimetry are very complex and highly sensitive to changes in the radiation field. Often, only the simulation of radiation transport is capable of describing the radiation field completely. Down to sub-cellular dimensions the energy deposition by cascades of secondary electrons is the main pathway for damage induction in matter. A large number of interactions take place until such electrons are slowed down to thermal energies. Also for some problems of photon transport a large number of photon histories need to be processed. Thus the efficient non-analogue Monte Carlo program, AMOS, has been developed for photon and electron transport. Various applications and benchmarks are presented showing its ability. For radiotherapy purposes the radiation field of a brachytherapy source is calculated according to the American Association of Physicists in Medicine Task Group Report 43 (AAPM/TG43). As additional examples, results for the detector efficiency of a high-purity germanium (HPGe) detector and a dose estimation for an X-ray shielding for radiation protection are shown.
NASA Astrophysics Data System (ADS)
Diamantis, Nikolaos G.; Manousakis, Efstratios
2013-10-01
The diagrammatic Monte Carlo (DiagMC) method is a numerical technique which samples the entire diagrammatic series of the Green's function in quantum many-body systems. In this work, we incorporate the flat histogram principle in the diagrammatic Monte Carlo method, and we term the improved version the “flat histogram diagrammatic Monte Carlo” method. We demonstrate the superiority of this method over the standard DiagMC in extracting the long-imaginary-time behavior of the Green's function, without incorporating any a priori knowledge about this function, by applying the technique to the polaron problem.
A New Method for the Calculation of Diffusion Coefficients with Monte Carlo
NASA Astrophysics Data System (ADS)
Dorval, Eric
2014-06-01
This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's Law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods.
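The paper's special Fick's-law tally is not reproduced in the abstract, but the conventional functional form it says is retained can be sketched. A minimal illustration, assuming the standard out-scatter transport approximation D = 1/(3 Σ_tr) with Σ_tr = Σ_t − μ̄ Σ_s (the function name and the generic multi-group constants are assumptions, not the paper's tallies):

```python
def diffusion_coefficient(sigma_t, sigma_s, mu_bar):
    """Diffusion coefficient from the transport cross section:
    D = 1 / (3 * sigma_tr), where sigma_tr = sigma_t - mu_bar * sigma_s
    (out-scatter transport approximation; all quantities per group)."""
    sigma_tr = sigma_t - mu_bar * sigma_s
    return 1.0 / (3.0 * sigma_tr)
```

In a Monte Carlo setting, `sigma_t`, `sigma_s`, and the mean scattering cosine `mu_bar` would be flux-weighted tally results rather than fixed inputs.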
NASA Astrophysics Data System (ADS)
Lin, Lin; Zhang, Mei
2015-02-01
The scaling Monte Carlo method and a Gaussian model are applied to simulate the transport of light beams with arbitrary waist radius. Most often, Monte Carlo simulation is performed for a pencil or cone beam in which the initial state of every photon is identical. In practical applications, the incident light is usually focused on the sample, forming an approximately Gaussian distribution on the surface. As the focal position within the sample changes, the initial states of the photons are no longer identical. Using the hyperboloid method, the initial reflection angle and coordinates are generated statistically according to the size of the Gaussian waist and the focal depth. Scaling calculations are performed with baseline data from a standard Monte Carlo simulation. The scaling method incorporating the Gaussian model was tested and proved effective over a range of scattering coefficients from 20% to 180% of the value used in the baseline simulation. In most cases, the percentage error was less than 10%. Increasing the focal depth results in a larger error in the scaled radial reflectance in the region close to the optical axis. In addition to evaluating the accuracy of the scaling Monte Carlo method, this study has implications for inverse Monte Carlo with arbitrary optical-system parameters.
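The hyperboloid launch scheme itself is not spelled out in the abstract. As a minimal sketch of the Gaussian-waist idea only (the function name and the flat-focus simplification are assumptions, not the authors' code), the initial transverse photon position can be drawn from the Gaussian irradiance profile by inverting its radial CDF:

```python
import math
import random

def launch_photon(waist_radius, rng):
    """Sample an initial photon position on the surface from a Gaussian
    beam profile I(r) ~ exp(-2 r^2 / w^2): inverting the radial CDF
    1 - exp(-2 r^2 / w^2) gives r = w * sqrt(-ln(1 - u) / 2)."""
    u = rng.random()
    r = waist_radius * math.sqrt(-math.log(1.0 - u) / 2.0)
    phi = 2.0 * math.pi * rng.random()  # azimuth is uniform
    return r * math.cos(phi), r * math.sin(phi)
```

For this profile the expected squared radius is w²/2, which gives a quick sanity check on the sampler.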
A Monte Carlo method for the PDF (Probability Density Functions) equations of turbulent flow
NASA Astrophysics Data System (ADS)
Pope, S. B.
1980-02-01
The transport equations of joint probability density functions (pdfs) in turbulent flows are simulated using a Monte Carlo method, because finite-difference solutions of the equations are impracticable, mainly due to the large dimensionality of the pdfs. Attention is focused on the equation for the joint pdf of chemical and thermodynamic properties in turbulent reactive flows. It is shown that the Monte Carlo method provides a true simulation of this equation and that the amount of computation required increases only linearly with the number of properties considered. Consequently, the method can be used to solve the pdf equation for turbulent flows involving many chemical species and complex reaction kinetics. Monte Carlo calculations of the pdf of temperature in a turbulent mixing layer are reported. These calculations are in good agreement with the measurements of Batt (1977).
Sampling uncertainty evaluation for data acquisition board based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Ge, Leyi; Wang, Zhongyu
2008-10-01
Evaluating the sampling uncertainty of a data acquisition board is a difficult problem in the field of signal sampling. This paper first analyzes the sources of data acquisition board sampling uncertainty, then introduces a simulation theory for evaluating that uncertainty based on the Monte Carlo method, and puts forward a model relating the sampling uncertainty to the number of samples and the number of simulation runs. For different sample numbers and signal ranges, the authors implement a random sampling uncertainty evaluation program for a PCI-6024E data acquisition board to execute the simulation. The results of the proposed Monte Carlo simulation method are in good agreement with those obtained following the GUM, demonstrating the validity of the Monte Carlo method.
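The paper's relation model is not given in the abstract; the underlying Monte Carlo idea (repeat the virtual acquisition many times and take the spread of the resulting means) can be sketched as follows, where the Gaussian noise model, the DC signal level, and all parameter names are assumptions for illustration:

```python
import math
import random

def sampling_uncertainty(n_samples, n_trials=2000, noise_sd=0.01, seed=1):
    """Monte Carlo estimate of the standard uncertainty of a board's mean
    reading: simulate the acquisition n_trials times (a 1.0 V DC signal
    plus Gaussian noise) and return the standard deviation of the means."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_trials):
        readings = [1.0 + rng.gauss(0.0, noise_sd) for _ in range(n_samples)]
        means.append(sum(readings) / n_samples)
    mu = sum(means) / n_trials
    var = sum((m - mu) ** 2 for m in means) / (n_trials - 1)
    return math.sqrt(var)
```

As expected from the GUM's Type A analysis, the estimated uncertainty shrinks roughly as 1/sqrt(n_samples).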
A Monte Carlo Study of Eight Confidence Interval Methods for Coefficient Alpha
ERIC Educational Resources Information Center
Romano, Jeanine L.; Kromrey, Jeffrey D.; Hibbard, Susan T.
2010-01-01
The purpose of this research is to examine eight of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in…
NASA Astrophysics Data System (ADS)
Socha, John Bronn
The first part of this thesis contains a historical perspective on the last five years of research in hot-electron transport in semiconductors. This perspective serves two purposes. First, it provides a motivation for the second part of this thesis, which deals with calculating the full velocity distribution function of hot electrons. And second, it points out many of the unsolved theoretical problems that might be solved with the techniques developed in the second part. The second part of this thesis contains a derivation of a new method for calculating velocity distribution functions. This method, the Monte Carlo trajectory integral, is well suited for calculating the time evolution of a distribution function in the presence of complicated scattering mechanisms, like scattering with acoustic and optical phonons, inter-valley scattering, Bragg reflections, and even electron-electron scattering. This method uses many of the techniques developed for Monte Carlo transport calculations, but unlike other Monte Carlo methods, the Monte Carlo trajectory integral has very good control over the variance of the calculated distribution function across the entire distribution function. Since the Monte Carlo trajectory integral only needs information on the distribution function at previous times, it is well suited to electron-electron scattering, where the distribution function must be known before the scattering rate can be calculated. Finally, this thesis ends with an application of the Monte Carlo trajectory integral to electron transport in SiO2 in the presence of electric fields up to 12 MV/cm, and it includes a number of suggestions for applying the Monte Carlo trajectory integral to other experiments in both SiO2 and GaAs. The Monte Carlo trajectory integral should be of special interest when supercomputers are more common, since then there will be the computing resources to include electron-electron scattering. The high-field distribution functions calculated when
Energy-Driven Kinetic Monte Carlo Method and Its Application in Fullerene Coalescence.
Ding, Feng; Yakobson, Boris I
2014-09-04
Mimicking the conventional barrier-based kinetic Monte Carlo simulation, an energy-driven kinetic Monte Carlo (EDKMC) method was developed to study the structural transformation of carbon nanomaterials. The new method is many orders of magnitude faster than standard molecular dynamics or Monte Carlo (MC) simulations and thus allows us to explore rare events within a reasonable computational time. As an example, the temperature dependence of fullerene coalescence was studied. The simulation, for the first time, revealed that short capped single-walled carbon nanotubes (SWNTs) appear as low-energy metastable structures during the structural evolution.
Zehtabian, M; Zaker, N; Sina, S; Meigooni, A Soleimani
2015-06-15
Purpose: Different versions of the MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP codes in the dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters such as the dose rate constant, radial dose function, and anisotropy function of different brachytherapy sources, i.e., Pd-103, I-125, Ir-192, and Cs-137, were calculated in a water phantom. The results obtained by three versions of the Monte Carlo code (MCNP4C, MCNPX, MCNP5) were compared for low- and high-energy brachytherapy sources. Then the cross-section library of the MCNP4C code was changed to ENDF/B-VI release 8, which is used in the MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code were compared with those of the other codes. Results: The results of these investigations indicate that for high-energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However, for low-energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at a distance of 6 cm were found to be about 17% and 28% for I-125 and Pd-103, respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6 cm. Conclusion: The results indicate that using the MCNP4C code for dosimetry of low-energy brachytherapy sources can cause large errors in the results. Therefore, it is recommended not to use this code for low-energy sources unless its cross-section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX lies in their cross-section libraries.
Monte Carlo Method for Solving the Fredholm Integral Equations of the Second Kind
NASA Astrophysics Data System (ADS)
ZhiMin, Hong; ZaiZai, Yan; JianRui, Chen
2012-12-01
This article is concerned with a numerical algorithm for computing approximate solutions of Fredholm integral equations of the second kind by random sampling. We use Simpson's rule to discretize the integral equation, which yields a linear system. The Monte Carlo method, based on the simulation of a finite discrete Markov chain, is employed to solve this linear system. Numerical examples show the efficiency of the method; the results obtained indicate that it is an effective alternative.
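The abstract does not give the chain construction. A common textbook variant, the von Neumann-Ulam scheme, shown here as an assumed illustration rather than the authors' exact algorithm, estimates the solution of the linear system x = Hx + b that such a quadrature step produces:

```python
import random

def mc_linear_solve(H, b, n_walks=20000, seed=7):
    """von Neumann-Ulam random-walk estimate of x = H x + b.
    Assumes sum_j |H[i][j]| <= 1 for every row i, so each row's absolute
    entries act as transition probabilities and the leftover mass is the
    absorption probability (the Neumann series must converge)."""
    rng = random.Random(seed)
    n = len(b)
    x = []
    for i0 in range(n):
        total = 0.0
        for _ in range(n_walks):
            i, w = i0, 1.0
            score = b[i]
            while True:
                r, acc, j = rng.random(), 0.0, -1
                for k in range(n):          # move to k with prob |H[i][k]|
                    acc += abs(H[i][k])
                    if r < acc:
                        j = k
                        break
                if j < 0:                   # absorbed: walk terminates
                    break
                w *= 1.0 if H[i][j] > 0.0 else -1.0  # carry the sign of H_ij
                i = j
                score += w * b[i]           # accumulate the Neumann series
            total += score
        x.append(total / n_walks)
    return x
```

Each walk scores an unbiased sample of the Neumann series b + Hb + H²b + …, so averaging over walks approximates (I − H)⁻¹b.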
Perfetti, Christopher M; Martin, William R; Rearden, Bradley T; Williams, Mark L
2012-01-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.
NASA Astrophysics Data System (ADS)
Newell, Quentin Thomas
The Monte Carlo method provides powerful geometric modeling capabilities for large problem domains in 3-D; therefore, the Monte Carlo method is becoming popular for 3-D fuel depletion analyses to compute quantities of interest in spent nuclear fuel including isotopic compositions. The Monte Carlo approach has not been fully embraced due to unresolved issues concerning the effect of Monte Carlo uncertainties on the predicted results. Use of the Monte Carlo method to solve the neutron transport equation introduces stochastic uncertainty in the computed fluxes. These fluxes are used to collapse cross sections, estimate power distributions, and deplete the fuel within depletion calculations; therefore, the predicted number densities contain random uncertainties from the Monte Carlo solution. These uncertainties can be compounded in time because of the extrapolative nature of depletion and decay calculations. The objective of this research was to quantify the stochastic uncertainty propagation of the flux uncertainty, introduced by the Monte Carlo method, to the number densities for the different isotopes in spent nuclear fuel due to multiple depletion time steps. The research derived a formula that calculates the standard deviation in the nuclide number densities based on propagating the statistical uncertainty introduced when using coupled Monte Carlo depletion computer codes. The research was developed with the use of the TRITON/KENO sequence of the SCALE computer code. The linear uncertainty nuclide group approximation (LUNGA) method developed in this research approximated the variance of the ψN term, which is the variance in the flux shape due to uncertainty in the calculated nuclide number densities. Three different example problems were used in this research to calculate the standard deviation in the nuclide number densities using the LUNGA method. The example problems showed that the LUNGA method is capable of calculating the standard deviation of the nuclide
Quantum-trajectory Monte Carlo method for study of electron-crystal interaction in STEM.
Ruan, Z; Zeng, R G; Ming, Y; Zhang, M; Da, B; Mao, S F; Ding, Z J
2015-07-21
In this paper, a novel quantum-trajectory Monte Carlo simulation method is developed to study electron beam interaction with a crystalline solid for application to electron microscopy and spectroscopy. The method combines the Bohmian quantum trajectory method, which treats electron elastic scattering and diffraction in a crystal, with a Monte Carlo sampling of electron inelastic scattering events along quantum trajectory paths. We study in this work the electron scattering and secondary electron generation process in crystals for a focused incident electron beam, leading to understanding of the imaging mechanism behind the atomic resolution secondary electron image that has been recently achieved in experiment with a scanning transmission electron microscope. According to this method, the Bohmian quantum trajectories have been calculated at first through a wave function obtained via a numerical solution of the time-dependent Schrödinger equation with a multislice method. The impact parameter-dependent inner-shell excitation cross section then enables the Monte Carlo sampling of ionization events produced by incident electron trajectories travelling along atom columns for excitation of high energy knock-on secondary electrons. Following cascade production, transportation and emission processes of true secondary electrons of very low energies are traced by a conventional Monte Carlo simulation method to present image signals. Comparison of the simulated image for a Si(110) crystal with the experimental image indicates that the dominant mechanism of atomic resolution of secondary electron image is the inner-shell ionization events generated by a high-energy electron beam.
Direct Simulation Monte Carlo: Theory, Methods, and Open Challenges
2011-01-01
Supplementary notes: see also ADA579248, Models and Computational Methods for Rarefied Flows (Modèles et méthodes de calcul des écoulements de gaz raréfiés), RTO. Quantities of interest are either flow properties (e.g., the temperature field in the flow) or surface quantities (e.g., drag and lift for a vehicle), and these are measured by ... flows; on the first time step one should use ½ Δt (Strang splitting) to maintain temporal accuracy. If measuring non-conserved variables (e.g
Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources
NASA Astrophysics Data System (ADS)
Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi
2017-01-01
Probabilistic power flow (PPF) is a very useful tool for power system steady-state analysis. However, the correlation among different random power injections (such as wind power) makes PPF calculation difficult. Monte Carlo simulation (MCS) and analytical methods are the two methods commonly used to solve PPF. MCS has high accuracy but is very time consuming. Analytical methods such as the cumulants method (CM) have high computing efficiency, but calculating the cumulants is not convenient when the wind power output does not obey any typical distribution, especially when correlated wind sources are considered. In this paper, an improved Monte Carlo simulation method (IMCS) is proposed. The joint empirical distribution is applied to model the different wind power outputs. This method combines the advantages of both MCS and analytical methods: it not only has high computing efficiency but also provides solutions with sufficient accuracy, which makes it very suitable for on-line analysis.
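The core of the joint-empirical-distribution idea can be illustrated with a toy sketch (the record structure and function name are assumptions, not the paper's implementation): drawing whole historical rows, one row holding the simultaneous outputs of all wind sites, preserves the spatial correlation that independent marginal sampling would destroy.

```python
import random

def sample_joint_empirical(history, n, seed=11):
    """Resample complete historical records (one row = simultaneous
    outputs of all wind sites) so the correlation between sites is
    retained, instead of sampling each site's marginal independently."""
    rng = random.Random(seed)
    return [rng.choice(history) for _ in range(n)]
```

Each sampled row would then be fed into a deterministic power-flow solve, as in any MCS-based PPF loop.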
Power Analysis for Complex Mediational Designs Using Monte Carlo Methods
ERIC Educational Resources Information Center
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2010-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
Quantum Monte Carlo Methods for First Principles Simulation of Liquid Water
ERIC Educational Resources Information Center
Gergely, John Robert
2009-01-01
Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully "ab initio"…
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex, data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
The Monte Carlo Method and the Evaluation of Retrieval System Performance.
ERIC Educational Resources Information Center
Burgin, Robert
1999-01-01
Introduces the Monte Carlo method which is shown to represent an attractive alternative to the hypergeometric model for identifying the levels at which random retrieval performance is exceeded in retrieval test collections and for overcoming some of the limitations of the hypergeometric model. Practical matters to consider when employing the Monte…
ERIC Educational Resources Information Center
Kim, Jee-Seon; Bolt, Daniel M.
2007-01-01
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…
ERIC Educational Resources Information Center
Dumenci, Levent; Windle, Michael
2001-01-01
Used Monte Carlo methods to evaluate the adequacy of cluster analysis to recover group membership based on simulated latent growth curve (LCG) models. Cluster analysis failed to recover growth subtypes adequately when the difference between growth curves was shape only. Discusses circumstances under which it was more successful. (SLD)
Distributional Monte Carlo Methods for the Boltzmann Equation
2013-03-01
Examples of such violations arise in rarefied gas dynamics, hypersonic flows, and micro-scale flows. Additionally, there is an "equilibrium hypothesis" that does not hold; examples are rarefied flows and flows containing non-equilibrium phenomena. Applications of rarefied gas dynamics typically involve high-altitude flight and ... Contents include 1.1 Kinetic Theory and Rarefied Gas Dynamics and 1.2 Computational Methods for the Boltzmann Equation.
Monte-Carlo methods make Dempster-Shafer formalism feasible
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik YA.; Bernat, Andrew; Borrett, Walter; Mariscal, Yvonne; Villa, Elsa
1991-01-01
One of the main obstacles to applications of the Dempster-Shafer formalism is its computational complexity. If we combine m different pieces of knowledge, then in the general case we have to perform up to 2^m computational steps, which for large m is infeasible. For several important cases, algorithms with smaller running times have been proposed. We prove, however, that if we want to compute the belief bel(Q) in any given query Q, then exponential time is inevitable. It is still inevitable if we want to compute bel(Q) with a given precision epsilon. This restriction corresponds to the natural idea that since the initial masses are known only approximately, there is no sense in trying to compute bel(Q) precisely. A further idea is that there is always some doubt in the whole knowledge, so there is always a probability p0 that the expert's knowledge is wrong. In view of that, it is sufficient to have an algorithm that gives a correct answer with probability greater than 1 - p0. If we use the original Dempster's combination rule, this possibility diminishes the running time but still leaves the problem infeasible in the general case. We show that for the alternative combination rules proposed by Smets and Yager, feasible methods exist. We also show how these methods can be parallelized, and which parallelization model fits this problem best.
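A standard Monte Carlo route around the 2^m blow-up, sketched here as a generic illustration under Dempster's original rule rather than the paper's Smets/Yager algorithms, is to sample one focal element per mass function and reject conflicting (empty) intersections:

```python
import random

def mc_belief(masses, query, n_samples=20000, seed=3):
    """Monte Carlo estimate of bel(query) under Dempster's rule of
    combination: draw one focal element from each mass function,
    intersect them, reject empty intersections (so the conflict is
    renormalised away), and count intersections contained in `query`.
    Each mass function is a list of (frozenset, probability) pairs."""
    rng = random.Random(seed)

    def draw(m):
        r, acc = rng.random(), 0.0
        for focal, p in m:
            acc += p
            if r < acc:
                return focal
        return m[-1][0]          # guard against floating-point rounding

    hits = kept = 0
    for _ in range(n_samples):
        inter = draw(masses[0])
        for m in masses[1:]:
            inter = inter & draw(m)
        if not inter:            # total conflict: rejected sample
            continue
        kept += 1
        if inter <= query:       # intersection supports the query
            hits += 1
    return hits / kept
```

The cost per sample is linear in m, at the price of a statistical error bar on bel(Q), which matches the abstract's point that approximate answers with confidence 1 - p0 are all one can feasibly ask for.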
Fast sampling in the slow manifold: The momentum-enhanced hybrid Monte Carlo method
NASA Astrophysics Data System (ADS)
Andricioaei, Ioan
2005-03-01
We will present a novel dynamic algorithm, the momentum-enhanced hybrid Monte Carlo (MEHMC) method, which enhances sampling while yielding correctly Boltzmann-weighted statistical distributions. The gist of the MEHMC method is to use momentum averaging to identify the slow manifold and to bias, along this manifold, the Maxwell distribution of momenta usually employed in hybrid Monte Carlo. Several tests and applications exemplify the method.
A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition
Zheng Zheming; Stephens, Ryan M.; Braatz, Richard D.; Alkire, Richard C.; Petzold, Linda R.
2008-05-01
A hybrid multiscale kinetic Monte Carlo (HMKMC) method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model which considers site-blocking effects of additives. Chemical reactions are simulated by an accelerated (tau-leaping) method for discrete stochastic simulation which adaptively selects exact discrete stochastic simulation for the appropriate reaction whenever that is necessary. The HMKMC method is seen to be accurate and highly efficient.
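The accelerated tau-leaping component mentioned above can be sketched for a single first-order reaction A → B (a minimal textbook illustration, not the HMKMC electrodeposition model; function names and parameters are assumptions):

```python
import math
import random

def poisson(rng, lam):
    """Knuth's multiplicative method for a Poisson draw (fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tau_leap_decay(x_a, k_rate, tau, n_steps, seed=5):
    """tau-leaping for A -> B: instead of simulating every reaction event
    exactly, fire a Poisson(a * tau) batch per step, with the propensity
    a = k_rate * x_a held fixed over the leap."""
    rng = random.Random(seed)
    for _ in range(n_steps):
        a = k_rate * x_a
        x_a -= min(x_a, poisson(rng, a * tau))  # clamp to avoid negatives
    return x_a
```

The adaptive part of the HMKMC method, falling back to exact discrete stochastic simulation when a leap would be inaccurate, is the refinement layered on top of this basic step.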
An Iterative Monte Carlo Method for Nonconjugate Bayesian Analysis
1992-09-07
The method is illustrated with length (Y) versus age (X) data for 27 captured specimens of the sirenian species dugong (more commonly known as the sea cow); to implement the method, the data are first transformed. Table 1, "Length (Y) Versus Age (X) for the Sirenian Species Dugong," lists the pairs, of which the first nine are X = 1, 1.5, 1.5, 1.5, 2.5, 4.0, 5.0, 5.0, 7.0 against Y = 1.80, 1.85, 1.87, 1.77, 2.02, 2.27, 2.15, ... A second table of exposure data (1.6907: 6 of 59; 1.7242: 13 of 60; 1.7552: 18 of 62; 1.7842: 28 of 56; 1.8113: 52 of 63; 1.8369: 53 of 59; 1.8610: 61 of 62; 1.8839: 60 of 60) and Figure 1, "Estimated posteriors, dugong," complete the record.
NASA Astrophysics Data System (ADS)
Kim, Minho; Lee, Hyounggun; Kim, Hyosim; Park, Hongmin; Lee, Wonho; Park, Sungho
2014-03-01
This study evaluated the Monte Carlo method for dose calculation in fluoroscopy by using a realistic human phantom. The dose was calculated using Monte Carlo N-particle extended (MCNPX) in simulations and was measured using the Korean Typical Man-2 (KTMAN-2) phantom in the experiments. MCNPX is a widely used simulation tool based on the Monte Carlo method and uses random sampling. KTMAN-2 is a virtual phantom written in the MCNPX language and based on the typical Korean man. This study was divided into two parts: simulations and experiments. In the former, the spectrum generation program SRS-78 was used to obtain the output energy spectrum for fluoroscopy; then, the dose to each target organ was calculated using KTMAN-2 with MCNPX. In the latter part, the output of the fluoroscope was calibrated first, and TLDs (thermoluminescent dosimeters) were inserted in the ART (Alderson Radiation Therapy) phantom at the same places as in the simulation. The phantom was then exposed to radiation, and the simulated and experimental doses were compared. To convert the simulation output to dose units, we set a normalization factor (NF). Comparing the simulated with the experimental results, we found most of the values to be similar, which demonstrates the effectiveness of the Monte Carlo method in fluoroscopic dose evaluation. The equipment used in this study included TLDs, a TLD reader, an ART phantom, an ionization chamber and a fluoroscope.
Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...
2017-03-05
Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
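The deterministic baseline the hybrid schemes start from can be sketched as the plain stationary Richardson iteration (a generic illustration with an invented 2×2 system, not the paper's preconditioned solver or its Monte Carlo acceleration):

```python
def richardson(A, b, n_iter=200):
    """Stationary (unpreconditioned) Richardson iteration
    x <- x + (b - A x); converges when the spectral radius of
    (I - A) is below one, i.e. when the splitting is convergent."""
    n = len(b)
    x = [0.0] * n
    for _ in range(n_iter):
        # residual r = b - A x, computed with the current iterate
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + r[i] for i in range(n)]
    return x
```

The hybrid schemes of the paper replace or correct this update with Monte Carlo estimates, which is what makes them attractive for fault resiliency: a lost stochastic sample degrades the estimate gracefully rather than corrupting the iterate.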
NASA Astrophysics Data System (ADS)
Shariatinasab, Reza; Tadayon, Pooya; Ametani, Akihiro
2016-07-01
This paper proposes a hybrid method for calculating the lightning performance of overhead lines subjected to direct strokes, combining the Lattice diagram with the Monte Carlo method. To this end, the proper analytical relations for overvoltage calculation are first established based on the Lattice diagram. The Monte Carlo procedure is then applied to the obtained analytical relations. The aim of the presented method, called the ML method, is to estimate the lightning performance of overhead lines and to perform risk analysis of power apparatus simply, while retaining acceptable accuracy. To confirm its accuracy, the results of the presented ML method are compared with those calculated by EMTP/ATP simulation.
Kharrati, Hedi; Agrebi, Amel; Karaoui, Mohamed-Karim
2007-04-01
X-ray buildup factors of lead in broad-beam geometry for energies from 15 to 150 keV are determined using the general-purpose Monte Carlo N-particle radiation transport computer code (MCNP4C). The obtained buildup factor data are fitted to a modified three-parameter Archer et al. model for ease of calculating the broad-beam transmission by computer at any tube potential/filter combination in the diagnostic energy range. An example of their use to compute the broad-beam transmission at 70, 100, 120, and 140 kVp is given. The calculated broad-beam transmission is compared to data derived from the literature, showing good agreement. Therefore, the combination of the buildup factor data as determined and a mathematical model to generate x-ray spectra provides a computationally based solution for broad-beam transmission through lead barriers in shielding x-ray facilities.
In silico prediction of the β-cyclodextrin complexation based on Monte Carlo method.
Veselinović, Aleksandar M; Veselinović, Jovana B; Toropov, Andrey A; Toropova, Alla P; Nikolić, Goran M
2015-11-10
In this study, QSPR models were developed to predict the complexation of structurally diverse compounds with β-cyclodextrin, based on SMILES-notation optimal descriptors and the Monte Carlo method. The predictive potential of the applied approach was tested with three random splits into sub-training, calibration, test, and validation sets and with different statistical methods. The obtained results demonstrate that Monte Carlo-based modeling is a very promising computational method in QSPR studies for predicting the complexation of structurally diverse compounds with β-cyclodextrin. The SMILES attributes (structural features, both local and global), defined as molecular fragments, that promote an increase or decrease of the molecular binding constant were identified. These structural features were correlated to the complexation process, and their identification helped to improve the understanding of the complexation mechanisms of the host molecules.
Searching therapeutic agents for treatment of Alzheimer disease using the Monte Carlo method.
Toropova, Mariya A; Toropov, Andrey A; Raška, Ivan; Rašková, Mária
2015-09-01
Quantitative structure-activity relationships (QSARs) for the pIC50 (binding affinity) of gamma-secretase inhibitors can be constructed with the Monte Carlo method using the CORAL software (http://www.insilico.eu/coral). A considerable influence of the presence of rings of various types on the above endpoint has been detected. The mechanistic interpretation and the domain of applicability of the QSARs are discussed. Methods to select new potential gamma-secretase inhibitors are suggested.
Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Venugopalan, Vasan; Spanier, Jerome
2016-05-01
We present a polarization-sensitive, transport-rigorous perturbation Monte Carlo (pMC) method to model the impact of optical property changes on reflectance measurements within a discrete particle scattering model. The model consists of three log-normally distributed populations of Mie scatterers that approximate biologically relevant cervical tissue properties. Our method provides reflectance estimates for perturbations across wavelength and/or scattering model parameters. We test our pMC model performance by perturbing across number densities and mean particle radii, and compare pMC reflectance estimates with those obtained from conventional Monte Carlo simulations. These tests allow us to explore different factors that control pMC performance and to evaluate the gains in computational efficiency that our pMC method provides.
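The core idea of perturbation Monte Carlo can be sketched with the standard reweighting rule: a photon recorded from a baseline simulation is reused for new optical properties by multiplying its weight by a scattering ratio per collision and an exponential transmittance correction over its path. This is a minimal, unpolarized sketch of that rule, not the authors' full polarization-sensitive implementation; all parameter values are illustrative.

```python
import math

def pmc_reweight(w, n_scatter, path_len, mus, mua, mus_new, mua_new):
    """Reweight a recorded baseline photon for perturbed optical properties.
    Each of the n_scatter collisions contributes a factor (mus_new/mus), and
    the transmittance over the total path length contributes
    exp(-(mut_new - mut) * path_len), with mut = mua + mus."""
    mut, mut_new = mua + mus, mua_new + mus_new
    return w * (mus_new / mus) ** n_scatter * math.exp(-(mut_new - mut) * path_len)

# Unperturbed properties leave the weight unchanged:
w0 = pmc_reweight(1.0, 5, 2.0, 10.0, 0.1, 10.0, 0.1)
# Increased absorption reduces the weight:
w1 = pmc_reweight(1.0, 5, 2.0, 10.0, 0.1, 10.0, 0.5)
print(w0, w1)
```

In a full pMC estimator this factor is applied to every detected photon, so one baseline simulation yields reflectance estimates across a whole grid of perturbed properties.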
Estimation of magnetocaloric properties by using Monte Carlo method for AMRR cycle
NASA Astrophysics Data System (ADS)
Arai, R.; Tamura, R.; Fukuda, H.; Li, J.; Saito, A. T.; Kaji, S.; Nakagome, H.; Numazawa, T.
2015-12-01
In order to achieve a wide refrigerating temperature range in magnetic refrigeration, it is effective to layer multiple materials with different Curie temperatures. A detailed understanding of the physical properties of the materials is crucial to optimize the material selection and the layered structure. In the present study, we discuss methods for estimating changes in physical properties, particularly the Curie temperature, when some of the Gd atoms are replaced by non-magnetic elements for material design, based on Gd, a typical ferromagnetic magnetocaloric material. For this purpose, alongside calculations using the S=7/2 Ising model and the Monte Carlo method, we made specific heat and magnetization measurements of Gd-R alloys (R = Y, Zr) to compare experimental and calculated values. The results show that the magnetic entropy change, specific heat, and Curie temperature can be estimated with good accuracy using the Monte Carlo method.
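The Monte Carlo procedure behind such estimates can be sketched with the simplest case: single-spin-flip Metropolis sampling of a 2D S=1/2 Ising model (the paper uses an S=7/2 model; lattice size, temperatures, and sweep counts below are illustrative). The magnetization collapse between low and high temperature is what locates the Curie point.

```python
import random, math

def ising_magnetization(L, T, sweeps, seed=1):
    """Mean |magnetization| per spin of a 2D Ising model (J=1, k_B=1) via
    single-spin-flip Metropolis updates, starting from all spins up;
    the second half of the sweeps is used for measurement."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]
    m_acc, n_meas = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = s[(i+1)%L][j] + s[(i-1)%L][j] + s[i][(j+1)%L] + s[i][(j-1)%L]
            dE = 2 * s[i][j] * nb        # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
        if sweep >= sweeps // 2:
            m_acc += abs(sum(map(sum, s))) / (L * L)
            n_meas += 1
    return m_acc / n_meas

print(ising_magnetization(8, 1.0, 200))   # deep in the ordered phase: near 1
print(ising_magnetization(8, 4.0, 200))   # well above T_c ~ 2.27: small
```

Scanning T and recording the magnetization (or specific heat from energy fluctuations) on larger lattices gives the Curie-temperature estimate discussed in the abstract.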
Quantum Diffusion Monte Carlo Method for strong field time dependent problems
NASA Astrophysics Data System (ADS)
Kalinski, Matt
2006-05-01
We formulate the Quantum Diffusion Monte Carlo (QDMC) method for the solution of the time-dependent Schrödinger equation for atoms in strong laser fields. Unlike in normal diffusion Monte Carlo, the wave function is represented by walkers of two kinds, or colors, which solve two coupled, nonlinear diffusion equations. Those diffusion equations are coupled by potentials similar to those introduced by Shay, which must be added to the Schrödinger equation to obtain classical dynamics equivalent to quantum mechanics [1]. The potentials are calculated semi-analytically, similarly to the smoothing methods of smooth particle electrodynamics (SPD) with Gaussian smoothing kernels. We apply this method to strong-field two-electron ionization of helium. We calculate the two-electron double ionization rate quantum mechanically in the full six-dimensional configuration space. A comparison with classical mechanics and low-dimensional grid models is also provided. [1] D. Shay, Phys. Rev. A 13, 2261 (1976)
Comparison of Monte Carlo methods for criticality benchmarks: Pointwise compared to multigroup
Choi, J.S.; Alesso, P.H.; Pearson, J.S.
1989-01-01
Transport codes use multigroup cross sections, in which neutrons are divided into broad energy groups and the monoenergetic equation is solved for each group with a group-averaged cross section. Monte Carlo codes differ in that they allow the most basic pointwise cross-section data to be used directly in a calculation. Most of the first Monte Carlo codes could not exploit this feature, however, because of the memory limitations of early computers and the lack of pointwise cross-section data. Consequently, codes written in the 1970s, such as KENO-IV and MORSE-C, were adapted to use multigroup cross-section sets similar to those used in the S_n transport codes. With advances in computer memory capacities and the availability of pointwise cross-section sets, new Monte Carlo codes employing pointwise cross-section libraries, such as the Los Alamos National Laboratory code MCNP and the Lawrence Livermore National Laboratory (LLNL) code COG, were developed for criticality as well as radiation transport calculations. To compare pointwise and multigroup Monte Carlo methods for criticality benchmark calculations, this paper presents and evaluates results from the KENO-IV, MORSE-C, MCNP, and COG codes. The critical experiments selected for benchmarking include LLNL fast metal systems and low-enriched uranium moderated and reflected systems.
Monte Carlo method of radiative transfer applied to a turbulent flame modeling with LES
NASA Astrophysics Data System (ADS)
Zhang, Jin; Gicquel, Olivier; Veynante, Denis; Taine, Jean
2009-06-01
Radiative transfer plays an important role in the numerical simulation of turbulent combustion. However, because combustion and radiation are characterized by different time scales and different spatial and chemical treatments, the radiation effect is often neglected or only roughly modelled. The coupling of a large eddy simulation combustion solver and a radiation solver through a dedicated language, CORBA, is investigated. Two formulations of the Monte Carlo method (the Forward Method and the Emission Reciprocity Method) employed to solve the radiative transfer equation (RTE) have been compared in a one-dimensional flame test case, using three-dimensional calculation grids with absorbing and emitting media, in order to validate the Monte Carlo radiative solver and to choose the most efficient model for coupling. Then the results obtained using two different RTE solvers (the Reciprocity Monte Carlo method and the Discrete Ordinate Method), applied to a three-dimensional flame holder set-up with a correlated-k distribution model describing the spectral radiative properties of the real gas medium, are compared not only in terms of the physical behavior of the flame, but also in computational performance (storage requirement, CPU time and parallelization efficiency). To cite this article: J. Zhang et al., C. R. Mecanique 337 (2009).
Application of the subgroup method to multigroup Monte Carlo calculations (Application de la methode des sous-groupes au calcul Monte-Carlo multigroupe)
NASA Astrophysics Data System (ADS)
Martin, Nicolas
This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study is intended to test and validate this approach in lattice physics and criticality-safety applications. The probability table method seems promising since it introduces an alternative computational route between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data invoked in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computation; on the other hand, this model preserves the quality of the physical laws present in the ENDF format. Due to its low computational cost, multigroup Monte Carlo is usually the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes; this is generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables over the whole energy range makes it possible to take self-shielding effects into account directly, and can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) the consistent computation of probability tables with an energy grid comprising only 295 or 361 groups; the CALENDF moment approach led to probability tables suitable for a Monte Carlo code. (2) The combination of the probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm. (3) The derivation of a model for taking into account anisotropic
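The delta-tracking rejection technique mentioned in point (2) can be sketched in a few lines: the particle flies with a majorant cross section, and each tentative collision is accepted with probability Σ_t(x)/Σ_maj, otherwise it is a virtual collision and flight continues. The sketch below checks the homogeneous case, where the sampled mean free path must equal 1/Σ_t; the cross-section values are illustrative.

```python
import random, math

def delta_tracking_distance(sigma_t_of_x, sigma_maj, rng):
    """Sample the distance to the next *real* collision by delta tracking:
    advance with exponential steps drawn from the majorant cross section,
    accept with probability sigma_t(x)/sigma_maj, else continue (virtual)."""
    x = 0.0
    while True:
        x += -math.log(rng.random()) / sigma_maj   # flight with majorant
        if rng.random() < sigma_t_of_x(x) / sigma_maj:
            return x                               # real collision accepted

rng = random.Random(42)
# Homogeneous medium: sigma_t = 0.5 everywhere, majorant 2.0.
# The sampled mean free path should approach 1/0.5 = 2.
samples = [delta_tracking_distance(lambda x: 0.5, 2.0, rng) for _ in range(20000)]
print(sum(samples) / len(samples))
```

The appeal in the subgroup context is that no geometry-dependent distance-to-boundary computation is needed when the cross section varies in space: only point evaluations of Σ_t enter the rejection step.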
Monte Carlo method for photon heating using temperature-dependent optical properties.
Slade, Adam Broadbent; Aguilar, Guillermo
2015-02-01
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems in which the temperature varies greatly, such as laser thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system with temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation in a feedback loop that selects local properties based on current temperatures at each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. The simulation was validated by comparison with established Monte Carlo simulations using constant properties, and by comparison with the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties vary with temperature. The difference in results between the variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation returns more accurate results for optical irradiation absorption in a material that undergoes a large change in temperature. This increased accuracy leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes.
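The step-segmentation idea can be illustrated with the Beer-Lambert check the abstract describes: split a photon step into short segments and apply the local, temperature-dependent absorption coefficient on each. The temperature profile and μa(T) law below are invented placeholders chosen so the result has a closed-form reference; this is a sketch of the validation, not the authors' coupled heat-transfer code.

```python
import math

def transmitted_weight(depth, n_seg, mua_of_T, T_of_x):
    """Attenuate a photon over `depth` by splitting the step into n_seg
    segments and applying Beer-Lambert with the local, temperature-
    dependent absorption coefficient evaluated at each segment midpoint."""
    dz, w = depth / n_seg, 1.0
    for k in range(n_seg):
        x_mid = (k + 0.5) * dz
        w *= math.exp(-mua_of_T(T_of_x(x_mid)) * dz)
    return w

# Placeholder models: linear temperature profile, mua linear in T.
T_of_x = lambda x: 300.0 + 100.0 * x      # K, over a 1 cm path
mua_of_T = lambda T: 0.001 * T            # cm^-1, illustrative only

w = transmitted_weight(1.0, 1000, mua_of_T, T_of_x)
# Analytic reference: exp(-integral_0^1 0.001*(300 + 100x) dx) = exp(-0.35)
print(w, math.exp(-0.35))
```

Because the product of segment exponentials is the exponential of a midpoint-rule quadrature, the segmented step converges to the exact path integral as the segment length shrinks.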
Monte Carlo Method for Predicting a Physically Based Drop Size Distribution Evolution of a Spray
NASA Astrophysics Data System (ADS)
Tembely, Moussa; Lécot, Christian; Soucemarianadin, Arthur
2010-03-01
We report in this paper a method for predicting the evolution of a physically based drop size distribution of a spray, coupling the Maximum Entropy Formalism and a Monte Carlo scheme. Using the discrete or continuous population balance equation, a Mass Flow Algorithm is formulated that takes into account interactions between droplets via coalescence. After deriving a kernel for coalescence, we solve the time-dependent drop size distribution equation using a Monte Carlo method. We apply the method to the spray of a new print-head known as a Spray On Demand (SOD) device; the process exploits ultrasonic spray generation via a Faraday instability, where the fluid/structure interaction causing the instability is described by a modified Hamilton's principle. This has led to a physically based approach for predicting the initial drop size distribution within the framework of the Maximum Entropy Formalism (MEF): a three-parameter generalized Gamma distribution is chosen by using conservation of mass and energy. The calculation of the drop size distribution evolution by the Monte Carlo method shows the effect of spray droplet coalescence on both the number-based and volume-based drop size distributions.
GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method
NASA Astrophysics Data System (ADS)
Wei, J.; Kruis, F. E.
2013-09-01
Simulating particle coagulation using Monte Carlo methods is in general a challenging computational task due to its numerical complexity and computing cost. Currently, the lowest computing costs are obtained by applying a graphics processing unit (GPU), originally developed for speeding up graphics processing in the consumer market. In this article we present a GPU implementation of a Monte Carlo method based on the inverse scheme for simulating particle coagulation. The abundant data parallelism embedded within the Monte Carlo method is explained, as it allows an efficient parallelization of the MC code on the GPU. Furthermore, the accuracy of the MC on the GPU was validated against a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains from using the GPU, the computing time on the GPU was compared against its sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the N^2 dependence of coagulation.
Path-integral Monte Carlo method for the local Z2 Berry phase.
Motoyama, Yuichi; Todo, Synge
2013-02-01
We present a loop cluster algorithm Monte Carlo method for calculating the local Z_2 Berry phase of quantum spin models. The Berry connection, which is given as the inner product of two ground states with different local twist angles, is expressed as a Monte Carlo average over worldlines with fixed spin configurations at the imaginary-time boundaries. The "complex weight problem" caused by the local twist is solved by adopting the meron cluster algorithm. We present the results of simulations of the antiferromagnetic Heisenberg model on an out-of-phase bond-alternating ladder to demonstrate that our method successfully detects the change in the valence bond pattern at the quantum phase transition point. We also propose that the gauge-fixed local Berry connection can be an effective tool to precisely estimate the quantum critical point.
A step beyond the Monte Carlo method in economics: Application of multivariate normal distribution
NASA Astrophysics Data System (ADS)
Kabaivanov, S.; Malechkova, A.; Marchev, A.; Milev, M.; Markovska, V.; Nikolova, K.
2015-11-01
In this paper we discuss the numerical algorithm of Milev-Tagliani [25] used for pricing discrete double barrier options. The problem can be reduced to the accurate valuation of an n-dimensional path integral with the probability density function of a multivariate normal distribution. The efficient solution of this problem with the Milev-Tagliani algorithm is a step beyond the classical application of Monte Carlo for option pricing. We explore continuous and discrete monitoring of asset path pricing, compare the error of frequently applied quantitative methods such as the Monte Carlo method, and finally analyze the accuracy of the Milev-Tagliani algorithm against the results of Hong, Lee and Li [16].
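The "classical application of Monte Carlo" the abstract refers to can be sketched directly: simulate geometric Brownian motion at the discrete monitoring dates and zero out any path that leaves the barrier corridor. All market parameters below are illustrative, and this crude estimator is exactly what the Milev-Tagliani path-integral approach is meant to outperform.

```python
import math, random

def barrier_call_mc(s0, k, low, up, r, sigma, t, n_monitor, n_paths, seed=3):
    """Price a discretely monitored double knock-out call by plain Monte
    Carlo: simulate GBM at the monitoring dates; a path that leaves
    [low, up] at any monitoring date pays nothing."""
    rng = random.Random(seed)
    dt = t / n_monitor
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, alive = s0, True
        for _ in range(n_monitor):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if not (low < s < up):      # knocked out at this monitoring date
                alive = False
                break
        if alive:
            payoff_sum += max(s - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

price = barrier_call_mc(100, 100, 80, 120, 0.05, 0.2, 1.0, 12, 20000)
print(price)   # strictly below the vanilla Black-Scholes call (~10.45 here)
```

The slow O(1/sqrt(n_paths)) error decay of this estimator is the motivation for replacing it with an accurate n-dimensional quadrature of the multivariate normal path integral.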
Effect of porosity on electrical conduction of simulated nanostructures by Monte Carlo method
NASA Astrophysics Data System (ADS)
Dariani, R. S.; Abbas Hadi, N.
2016-09-01
The electrical conduction of nanostructures deposited by oblique angle deposition is studied. First, a medium of nanocolumns is simulated by the Monte Carlo method; then the effects of porosity on electron transport in 1D and 2D are investigated. The results show that more electrons are transferred in media with low porosity, but with increasing porosity the distance between nanocolumns expands and fewer electrons are transferred. Therefore, the transport current at the surface is reduced.
Multiplatform application for calculating a combined standard uncertainty using a Monte Carlo method
NASA Astrophysics Data System (ADS)
Niewinski, Marek; Gurnecki, Pawel
2016-12-01
The paper presents a new computer program for calculating a combined standard uncertainty. It implements the algorithm described in JCGM 101:2008, which is concerned with the use of a Monte Carlo method as an implementation of the propagation of distributions for uncertainty evaluation. The accuracy of the calculation is ensured by using high-quality random number generators. The paper describes the main principles of the program and compares the obtained results with the example problems presented in JCGM Supplement 1.
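The JCGM 101:2008 procedure reduces to a simple loop: draw each input quantity from its assigned distribution, evaluate the measurement model, and summarize the output sample by its mean, standard deviation (the combined standard uncertainty), and a coverage interval. A minimal sketch, assuming independent normal inputs; the model and numbers are illustrative, not from the paper.

```python
import random, statistics

def mc_uncertainty(f, inputs, n=200_000, seed=11):
    """Propagation of distributions per JCGM 101:2008: draw each input from
    its distribution, evaluate the model f, and summarize the output sample.
    `inputs` is a list of (mean, standard_uncertainty) for normal inputs."""
    rng = random.Random(seed)
    ys = []
    for _ in range(n):
        xs = [rng.gauss(mu, u) for mu, u in inputs]
        ys.append(f(xs))
    ys.sort()
    mean = statistics.fmean(ys)
    u_c = statistics.stdev(ys)                       # combined standard uncertainty
    lo, hi = ys[int(0.025 * n)], ys[int(0.975 * n)]  # 95% coverage interval
    return mean, u_c, (lo, hi)

# Additive model: u_c should approach sqrt(0.3**2 + 0.4**2) = 0.5
mean, u_c, ci = mc_uncertainty(lambda x: x[0] + x[1], [(1.0, 0.3), (2.0, 0.4)])
print(mean, u_c, ci)
```

For linear models this reproduces the familiar root-sum-of-squares law, but unlike the GUM linearization it remains valid for nonlinear models and non-normal outputs.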
2005-03-01
Approximation of Integrals via Monte Carlo Methods, with an Application to Calculating Radar Detection Probabilities (DSTO–TR–1692)
Graham V. Weinberg and Ross
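The core technique named in the title can be sketched generically: a detection probability is an integral over the noise distribution, which Monte Carlo approximates by the fraction of random draws exceeding the threshold. The Gaussian signal-plus-noise model below is an illustrative stand-in, not the report's specific detector, and it is chosen because the integral has a closed form to check against.

```python
import math, random

def detection_probability_mc(snr, threshold, n=200_000, seed=5):
    """Monte Carlo estimate of P(s + noise > threshold) for a constant
    signal s = snr in unit-variance Gaussian noise -- an integral with the
    closed form 0.5 * erfc((threshold - s) / sqrt(2))."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if snr + rng.gauss(0.0, 1.0) > threshold)
    return hits / n

pd = detection_probability_mc(snr=2.0, threshold=3.0)
exact = 0.5 * math.erfc((3.0 - 2.0) / math.sqrt(2.0))
print(pd, exact)
```

The estimator's standard error is sqrt(p(1-p)/n), so very small detection (or false-alarm) probabilities need importance sampling rather than this plain hit-counting form.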
Liu, Quan; Ramanujam, Nirmala
2007-04-01
A scaling Monte Carlo method has been developed to calculate diffuse reflectance from multilayered media with a wide range of optical properties in the ultraviolet-visible wavelength range. This multilayered scaling method employs the photon trajectory information generated from a single baseline Monte Carlo simulation of a homogeneous medium to scale the exit distance and exit weight of photons for a new set of optical properties in the multilayered medium. The scaling method is particularly suited to simulating diffuse reflectance spectra or creating a Monte Carlo database to extract optical properties of layered media, both of which are demonstrated in this paper. In particular, it was found that the root-mean-square error (RMSE) between scaled diffuse reflectance, for which the anisotropy factor and refractive index in the baseline simulation were, respectively, 0.9 and 1.338, and independently simulated diffuse reflectance was less than or equal to 5% for source-detector separations from 200 to 1500 μm when the anisotropy factor of the top layer in a two-layered epithelial tissue model was varied from 0.8 to 0.99; in contrast, the RMSE was always less than 5% for all separations (from 0 to 1500 μm) when the anisotropy factor of the bottom layer was varied from 0.7 to 0.99. When the refractive index of either layer in the two-layered tissue model was varied from 1.3 to 1.4, the RMSE was less than 10%. The scaling method can reduce computation time by more than 2 orders of magnitude compared with independent Monte Carlo simulations.
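The basic single-layer scaling relations behind such methods can be sketched directly: path lengths scale with the ratio of total interaction coefficients, and the photon weight picks up the ratio of albedos once per collision. This homogeneous-medium sketch is an assumption-laden simplification; the paper's method applies this kind of bookkeeping per layer along the stored trajectory.

```python
def scale_photon(exit_r, weight, n_collisions, mus0, mua0, mus1, mua1):
    """Rescale one baseline photon (simulated with mus0, mua0) to new
    properties (mus1, mua1), homogeneous-medium version: exit distance
    scales with mut0/mut1 and the weight picks up the albedo ratio
    (mus1/mut1) / (mus0/mut0) at each of the n_collisions collisions."""
    mut0, mut1 = mus0 + mua0, mus1 + mua1
    r_new = exit_r * mut0 / mut1
    w_new = weight * ((mus1 / mut1) * (mut0 / mus0)) ** n_collisions
    return r_new, w_new

# Unchanged properties leave the photon untouched (up to float rounding):
r_new, w_new = scale_photon(0.1, 1.0, 7, 100.0, 1.0, 100.0, 1.0)
print(r_new, w_new)
```

Applied to every photon in the stored baseline trajectory file, these two lines replace an entire new Monte Carlo simulation per optical-property set, which is where the two-orders-of-magnitude speedup comes from.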
Monte Carlo Methods in Materials Science Based on FLUKA and ROOT
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor
2003-01-01
A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application, and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlo codes can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade), used to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlo codes are particularly suited is the study of secondary radiation produced as albedos in the vicinity of the structural geometry involved. The broad goal of simulating space radiation transport through the relevant materials with the FLUKA code necessarily requires adding the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although a possible improvement of the DPMJET event generator for energies of 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy-ion interactions below 3 GeV/A. The ROOT interface is being developed in conjunction with the
NASA Astrophysics Data System (ADS)
Hu, Xingzhi; Chen, Xiaoqian; Parks, Geoffrey T.; Yao, Wen
2016-10-01
Ever-increasing demands of uncertainty-based design, analysis, and optimization in aerospace vehicles motivate the development of Monte Carlo methods with wide adaptability and high accuracy. This paper presents a comprehensive review of typical improved Monte Carlo methods and summarizes their characteristics to aid uncertainty-based multidisciplinary design optimization (UMDO). Among them, Bayesian inference aims to tackle problems where prior information such as measurement data is available. Importance sampling (IS) addresses inconvenient sampling and difficult propagation through the incorporation of an intermediate importance distribution or sequential distributions. Optimized Latin hypercube sampling (OLHS) is a stratified sampling approach that achieves better space-filling and non-collapsing characteristics. Meta-modeling approximation based on Monte Carlo saves computational cost by using cheap meta-models for the output response. All the reviewed methods are illustrated with corresponding aerospace applications, which are compared to show their techniques and usefulness in UMDO, thus providing a beneficial reference for future theoretical and applied research.
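Of the improved methods listed, importance sampling is the easiest to show in a few lines: to estimate a small failure probability, sample from a proposal centered on the failure region and correct each sample with the likelihood ratio. The standard-normal tail example below is a textbook illustration, not one of the paper's aerospace cases.

```python
import math, random

def tail_prob_is(a, n=100_000, seed=13):
    """Importance sampling estimate of P(X > a) for X ~ N(0, 1):
    sample from the shifted proposal N(a, 1) and reweight each hit by the
    density ratio phi(x) / phi(x - a) = exp(-a*x + a*a/2)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(a, 1.0)                      # proposal centered at the threshold
        if x > a:
            acc += math.exp(-a * x + 0.5 * a * a)  # likelihood ratio weight
    return acc / n

a = 4.0
est = tail_prob_is(a)
exact = 0.5 * math.erfc(a / math.sqrt(2.0))        # ~3.17e-5
print(est, exact)
```

Plain Monte Carlo would need on the order of 10^7 samples to see this event reliably; the shifted proposal makes roughly half the samples informative, which is the variance-reduction effect the review surveys.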
Methods for Monte Carlo simulation of the exospheres of the moon and Mercury
NASA Technical Reports Server (NTRS)
Hodges, R. R., Jr.
1980-01-01
A general form of the integral equation of exospheric transport on moon-like bodies is derived in a form that permits arbitrary specification of time-varying physical processes affecting atom creation and annihilation, atom-regolith collisions, adsorption and desorption, and nonplanetocentric acceleration. Because these processes usually defy analytic representation, the Monte Carlo method of solution of the transport equation, the only viable alternative, is described in detail, with separate discussions of the methods of specification of physical processes as probabilistic functions. Proof of the validity of the Monte Carlo exosphere simulation method is provided in the form of a comparison of analytic and Monte Carlo solutions to three classical, and analytically tractable, exosphere problems. One of the key phenomena in moon-like exosphere simulations, the distribution of velocities of the atoms leaving a regolith, depends mainly on the nature of collisions of free atoms with rocks. It is shown that on the Moon and Mercury, elastic collisions of helium atoms with a Maxwellian distribution of vibrating, bound atoms produce a nearly Maxwellian distribution of helium velocities, despite the absence of speeds in excess of escape speed in the impinging helium velocity distribution.
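Sampling a Maxwellian launch velocity, the building block of such exosphere simulations, is straightforward: draw three independent Gaussian velocity components with variance kT/m. The sketch below works in reduced units (kT/m = 1) and checks the sampled mean speed against the analytic value sqrt(8kT/πm); it is a generic illustration, not the paper's full atom-regolith collision model.

```python
import math, random

def maxwellian_speed(rng, kT_over_m=1.0):
    """Draw one speed from a Maxwellian distribution: three independent
    Gaussian velocity components, each with variance kT/m."""
    s = math.sqrt(kT_over_m)
    vx, vy, vz = (rng.gauss(0.0, s) for _ in range(3))
    return math.sqrt(vx * vx + vy * vy + vz * vz)

rng = random.Random(17)
speeds = [maxwellian_speed(rng) for _ in range(50_000)]
mean = sum(speeds) / len(speeds)
print(mean, math.sqrt(8.0 / math.pi))   # sampled vs analytic mean speed
```

In a full simulation each sampled speed is paired with an emission direction and integrated along a ballistic trajectory, with atoms whose speed exceeds the local escape speed removed from the population.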
Quantum Monte Carlo method applied to non-Markovian barrier transmission
NASA Astrophysics Data System (ADS)
Hupin, Guillaume; Lacroix, Denis
2010-01-01
In nuclear fusion and fission, fluctuation and dissipation arise because of the coupling of collective degrees of freedom with internal excitations. Close to the barrier, quantum, statistical, and non-Markovian effects are expected to be important. In this work, a new approach based on quantum Monte Carlo addressing this problem is presented. The exact dynamics of a system coupled to an environment is replaced by a set of stochastic evolutions of the system density. The quantum Monte Carlo method is applied to systems with quadratic potentials. In all ranges of temperature and coupling, the stochastic method matches the exact evolution, showing that non-Markovian effects can be simulated accurately. A comparison with other theories, such as Nakajima-Zwanzig or time-convolutionless, shows that only the latter can be competitive if the expansion in terms of coupling constant is made at least to fourth order. A systematic study of the inverted parabola case is made at different temperatures and coupling constants. The asymptotic passing probability is estimated by different approaches including the Markovian limit. Large differences with an exact result are seen in the latter case or when only second order in the coupling strength is considered, as is generally assumed in nuclear transport models. In contrast, if fourth order in the coupling or quantum Monte Carlo method is used, a perfect agreement is obtained.
Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy.
Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas
2012-04-01
Model based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored and the effect of the other parameters is added in the post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, evaluating blood tissue fraction and oxygenation. It was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model and all tissue parameters significantly improved spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed and the algorithm is fast enough for real-time processing.
Quasi-Monte Carlo methods for lattice systems: A first look
NASA Astrophysics Data System (ADS)
Jansen, K.; Leovey, H.; Ammon, A.; Griewank, A.; Müller-Preussker, M.
2014-03-01
We investigate the applicability of quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this behavior for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling. Catalogue identifier: AERJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence version 3 No. of lines in distributed program, including test data, etc.: 67759 No. of bytes in distributed program, including test data, etc.: 2165365 Distribution format: tar.gz Programming language: C and C++. Computer: PC. Operating system: Tested on GNU/Linux, should be portable to other operating systems with minimal efforts. Has the code been vectorized or parallelized?: No RAM: The memory usage directly scales with the number of samples and dimensions: Bytes used = “number of samples” × “number of dimensions” × 8 Bytes (double precision). Classification: 4.13, 11.5, 23. External routines: FFTW 3 library (http://www.fftw.org) Nature of problem: Certain physical models formulated as a quantum field theory through the Feynman path integral, such as quantum chromodynamics, require a non-perturbative treatment of the path integral. The only known approach that achieves this is the lattice regularization. In this formulation the path integral is discretized to a finite, but very high dimensional integral. So far only Monte
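The N^(-1/2) versus N^(-1) contrast can be demonstrated on a one-dimensional toy integral using the van der Corput low-discrepancy sequence in place of pseudorandom points. This is a generic illustration of the quasi-Monte Carlo idea, far simpler than the lattice path integrals the paper treats.

```python
import math, random

def van_der_corput(n, base=2):
    """n-th element of the base-b van der Corput low-discrepancy sequence
    (the digits of n written in base b, mirrored about the radix point)."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def integrate(points, f):
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x                 # exact integral over [0, 1] is 1/3
n = 4096
rng = random.Random(23)
err_mc = abs(integrate([rng.random() for _ in range(n)], f) - 1 / 3)
err_qmc = abs(integrate([van_der_corput(i + 1) for i in range(n)], f) - 1 / 3)
print(err_mc, err_qmc)   # QMC error here is ~1e-4; typical MC error is ~5e-3
```

The catch for lattice systems, which the paper addresses, is that plain low-discrepancy points cannot simply be dropped into a Markov chain; the chain must be reformulated so that the integral is evaluated over a fixed high-dimensional unit cube.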
A method to reduce the rejection rate in Monte Carlo Markov chains
NASA Astrophysics Data System (ADS)
Baldassi, Carlo
2017-03-01
We present a method for Monte Carlo sampling on systems with discrete variables (focusing on the Ising case), introducing a prior on the candidate moves in a Metropolis-Hastings scheme which can significantly reduce the rejection rate, called the reduced-rejection-rate (RRR) method. The method employs the same probability distribution for the choice of moves as rejection-free schemes such as the method proposed by Bortz, Kalos and Lebowitz (BKL) (1975 J. Comput. Phys. 17 10–8); however, it uses it as a prior in an otherwise standard Metropolis scheme: it is thus not fully rejection-free, but in a wide range of scenarios it is nearly so. This makes it possible to extend the method to cases for which rejection-free schemes become inefficient, in particular when the graph connectivity is not sparse, but the energy can nevertheless be expressed as a sum of two components, one of which is computed on a sparse graph and dominates the measure. As examples of such instances, we demonstrate that the method yields excellent results when performing Monte Carlo simulations of quantum spin models in the presence of a transverse field in the Suzuki-Trotter formalism, and when exploring the so-called robust ensemble, which was recently introduced in Baldassi et al (2016 Proc. Natl Acad. Sci. 113 E7655–62). Our code for the Ising case is publicly available (RRR Monte Carlo code https://github.com/carlobaldassi/RRRMC.jl) and extensible to user-defined models: it provides efficient implementations of standard Metropolis, the RRR method, the BKL method (extended to the case of continuous energy spectra), and the waiting time method of Dall and Sibani (2001 Comput. Phys. Commun. 141 260–7).
Modeling of near-continuum flows using the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Lohn, P. D.; Haflinger, D. E.; McGregor, R. D.; Behrens, H. W.
1990-06-01
The direct simulation Monte Carlo (DSMC) method is used to model the flow of a hypersonic stream about a wedge. The Knudsen number of 0.00075 places the flow in the near-continuum category and hence presents a challenge for the DSMC method. The modeled flowfield is shown to agree extremely well with the experimental measurements in the wedge wake taken by Batt (1967). This experimental confirmation serves as a rigorous validation of the DSMC method and provides guidelines for computations of near-continuum flows.
Takahashi, F; Endo, A
2007-01-01
A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is essential for a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined through a dialogue method on a commonly used personal computer. The tools prepare human-body and source models described in the input-file format of the employed Monte Carlo codes. The tools were validated for dose assessment by comparison with a past criticality accident and a hypothesized exposure.
Generalized Moment Method for Gap Estimation and Quantum Monte Carlo Level Spectroscopy.
Suwa, Hidemaro; Todo, Synge
2015-08-21
We formulate a convergent sequence for the energy gap estimation in the worldline quantum Monte Carlo method. The ambiguity left in the conventional gap calculation for quantum systems is eliminated. Our estimate is unbiased in the low-temperature limit, and the error bar is reliably estimated. The level spectroscopy from quantum Monte Carlo data is developed as an application of the unbiased gap estimation. From the spectral analysis, we precisely determine the Kosterlitz-Thouless quantum phase-transition point of the spin-Peierls model. It is established that the quantum phonon with a finite frequency is essential to the critical theory governed by the antiadiabatic limit, i.e., the k=1 SU(2) Wess-Zumino-Witten model.
1985-12-01
Verification of a Monte Carlo Simulation Method to Find Lower Confidence Limits for the Availability and Reliability of Maintained Systems. Thesis presented to the Faculty of the...
A patient-specific Monte Carlo dose-calculation method for photon beams.
Wang, L; Chui, C S; Lovelock, M
1998-06-01
A patient-specific, CT-based, Monte Carlo dose-calculation method for photon beams has been developed to correctly account for inhomogeneity in the patient. The method employs the EGS4 system to sample the interaction of radiation in the medium. CT images are used to describe the patient geometry and to determine the density and atomic number in each voxel. The user code (MCPAT) provides the data describing the incident beams, and performs geometry checking and energy scoring in patient CT images. Several variance reduction techniques have been implemented to improve the computation efficiency. The method was verified with measured data and other calculations, both in homogeneous and inhomogeneous media. The method was also applied to a lung treatment, where significant differences in dose distributions, especially in the low-density region, were observed when compared with the results using an equivalent pathlength method. Comparison of the DVHs showed that the Monte Carlo calculated plan predicted an underdose of nearly 20% to the target, while the maximum doses to the cord and the heart were increased by 25% and 33%, respectively. These results suggested that the Monte Carlo method may have an impact on treatment designs, and also that it can be used as a benchmark to assess the accuracy of other dose calculation algorithms. The computation time for the lung case employing five 15-MV wedged beams, with an approximate field size of 13 × 13 cm and a dose grid size of 0.375 cm, was less than 14 h on a 175-MHz computer with a standard deviation of 1.5% in the high-dose region.
Solution of the radiative transfer theory problems by the Monte Carlo method
NASA Technical Reports Server (NTRS)
Marchuk, G. I.; Mikhailov, G. A.
1974-01-01
The Monte Carlo method is used for two types of problems. First, there are problems of interpreting optical observations from meteorological satellites in the short-wave part of the spectrum. The sphericity of the atmosphere, the propagation function, and light polarization are considered. Second, there are problems dealing with the theory of the spreading of narrow light beams. Direct simulation of light scattering and the mathematical form of the medium radiation model representation are discussed, and general integral transfer equations are calculated. The dependent-tests method, derivative estimates, and the solution to the inverse problem are also considered.
Extrapolation method in the Monte Carlo Shell Model and its applications
Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio
2011-05-06
We demonstrate how the energy-variance extrapolation method works using the sequence of approximated wave functions obtained by the Monte Carlo Shell Model (MCSM), taking ⁵⁶Ni in the pf-shell as an example. The extrapolation method is shown to work well even in cases where the MCSM shows slow convergence, such as ⁷²Ge in the f5pg9-shell. The structure of ⁷²Se is also studied, including a discussion of the shape-coexistence phenomenon.
Simulation of polarization-sensitive optical coherence tomography images by a Monte Carlo method.
Meglinski, Igor; Kirillin, Mikhail; Kuzmin, Vladimir; Myllylä, Risto
2008-07-15
We introduce a new Monte Carlo (MC) method for simulating optical coherence tomography (OCT) images of complex multilayered turbid scattering media. We demonstrate, for the first time to our knowledge, the use of an MC technique to imitate two-dimensional polarization-sensitive OCT images with nonplanar layer boundaries in a medium such as human skin. The simulation of polarized low-coherence optical radiation is based on the vector approach generalized from the iterative procedure for the solution of the Bethe-Salpeter equation. The performance of the developed method is demonstrated for both conventional and polarization-sensitive OCT modalities.
NASA Astrophysics Data System (ADS)
Sharma, Anupam; Long, Lyle N.
2004-10-01
A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to modeling the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shock-tube problem and against experiments on the interaction of a planar shock with a square cavity. Blast impact simulations are performed for two model shapes, a box and an I-shaped beam, assuming that the solid body does not deform. The solver uses a domain decomposition technique to run in parallel. The parallel performance of the solver on two Beowulf clusters is also presented.
NASA Astrophysics Data System (ADS)
Feng, X.; Lorton, C.
2017-03-01
This paper develops and analyzes an efficient Monte Carlo interior penalty discontinuous Galerkin (MCIP-DG) method for elastic wave scattering in random media. The method is constructed based on a multi-modes expansion of the solution of the governing random partial differential equations. It is proved that the mode functions satisfy a three-term recurrence system of partial differential equations (PDEs) which are nearly deterministic in the sense that the randomness only appears in the right-hand side source terms, not in the coefficients of the PDEs. Moreover, the same differential operator applies to all mode functions. A proven unconditionally stable and optimally convergent IP-DG method is used to discretize the deterministic PDE operator, and an efficient numerical algorithm is proposed based on combining the Monte Carlo method and the IP-DG method with the $LU$ direct linear solver. It is shown that the algorithm converges optimally with respect to both the mesh size $h$ and the sampling number $M$, and practically its total computational complexity amounts only to solving very few deterministic elastic Helmholtz equations using the $LU$ direct linear solver. Numerical experiments are also presented to demonstrate the performance and key features of the proposed MCIP-DG method.
NASA Astrophysics Data System (ADS)
Davidenko, V. D.; Zinchenko, A. S.; Harchenko, I. K.
2016-12-01
Integral equations for the shape functions in the adiabatic, quasi-static, and improved quasi-static approximations are presented. The approach to solving these equations by the Monte Carlo method is described.
Xie, Jun; Kim, Nak-Kyeong
2005-09-01
Statistical methods have been developed for finding local patterns, also called motifs, in multiple protein sequences. The aligned segments may imply functional or structural core regions. However, the existing methods often have difficulties in aligning multiple proteins when sequence residue identities are low (e.g., less than 25%). In this article, we develop a Bayesian model and Markov chain Monte Carlo (MCMC) methods for identifying subtle motifs in protein sequences. Specifically, a motif is defined not only in terms of specific sites characterized by amino acid frequency vectors, but also as a combination of secondary characteristics such as hydrophobicity, polarity, etc. Markov chain Monte Carlo methods are proposed to search for a motif pattern with high posterior probability under the new model. A special MCMC algorithm is developed, involving transitions between state spaces of different dimensions. The proposed methods were supported by a simulation study. They were then tested on two real datasets, including a group of helix-turn-helix proteins and one set from the CATH Protein Structure Classification Database. Statistical comparisons showed that the new approach worked better than a typical Gibbs sampling approach which is based only on an amino acid model.
A method based on Monte Carlo simulation for the determination of the G(E) function.
Chen, Wei; Feng, Tiancheng; Liu, Jun; Su, Chuanying; Tian, Yanjie
2015-02-01
The G(E) function method is a spectrometric method for exposure dose estimation; this paper describes a Monte Carlo-based method to determine the G(E) function of a 4″ × 4″ × 16″ NaI(Tl) detector. Simulated spectra of various monoenergetic gamma rays in the region of 40-3200 keV, and the corresponding energy deposited in an air ball in the energy region of the full-energy peak, were obtained using the Monte Carlo N-Particle Transport Code. The absorbed dose rate in air was obtained from the deposited energy and divided by the counts of the corresponding full-energy peak to get the G(E) function value at energy E in the spectra. The curve-fitting software 1stOpt was used to determine the coefficients of the G(E) function. Experimental results show that dose rates calculated using the G(E) function determined by the authors' method accord well with values obtained by an ionisation chamber, with a maximum deviation of 6.31%.
NASA Astrophysics Data System (ADS)
Bianco, F. B.; Modjaz, M.; Oh, S. M.; Fierroz, D.; Liu, Y. Q.; Kewley, L.; Graur, O.
2016-07-01
We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity calibrators, based on the original IDL code of Kewley and Dopita (2002) with updates from Kewley and Ellison (2008), and expanded to include more recently developed calibrators. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios (referred to as indicators) in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo sampling, better characterizes the statistical oxygen abundance confidence region, including the effect due to the propagation of observational uncertainties. These uncertainties are likely to dominate the error budget in the case of distant galaxies, hosts of cosmic explosions. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 15 metallicity calibrators simultaneously, as well as for E(B-V), and estimates their median values and their 68% confidence regions. We provide the option of outputting the full Monte Carlo distributions and their kernel density estimates. We test our code on emission line measurements from a sample of nearby supernova host galaxies (z < 0.15) and compare our metallicity results with those from previous methods. We show that our metallicity estimates are consistent with previous methods but yield smaller statistical uncertainties. It should be noted that systematic uncertainties are not taken into account. We also offer visualization tools to assess the spread of the oxygen abundance in the different calibrators, as well as the shape of the estimated oxygen abundance distribution in each calibrator, and develop robust metrics for determining the appropriate Monte Carlo sample size. The code
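The core idea of propagating line-flux uncertainties by Monte Carlo sampling can be sketched generically (this is not pyMCZ's API; the function name, the Gaussian error model, and the fluxes below are illustrative assumptions):

```python
import random

def mc_line_ratio(f1, df1, f2, df2, n=20000, seed=3):
    """Median and 68% interval of a line ratio f1/f2 under Gaussian flux errors."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        a = rng.gauss(f1, df1)
        b = rng.gauss(f2, df2)
        if b > 0:                     # discard unphysical non-positive denominators
            samples.append(a / b)
    samples.sort()
    # median, 16th and 84th percentiles bracket the 68% confidence region
    return samples[n // 2], samples[int(0.16 * n)], samples[int(0.84 * n)]
```

The same sampling loop, fed into a metallicity calibrator instead of a bare ratio, yields the synthetic abundance distributions described above.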
Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Lan, Shiwei; Bui-Thanh, Tan; Christie, Mike; Girolami, Mark
2016-03-01
The Bayesian approach to Inverse Problems relies predominantly on Markov Chain Monte Carlo methods for posterior inference. The typical nonlinear concentration of posterior measure observed in many such Inverse Problems presents severe challenges to existing simulation-based inference methods. Motivated by these challenges, the exploitation of local geometric information in the form of covariant gradients, metric tensors, Levi-Civita connections, and local geodesic flows has been introduced to more effectively locally explore the configuration space of the posterior measure. However, obtaining such geometric quantities usually requires extensive computational effort, and despite their effectiveness this affects the applicability of these geometrically-based Monte Carlo methods. In this paper we explore one way to address this issue through the construction of an emulator of the model from which all geometric objects can be obtained in a much more computationally feasible manner. The main concept is to approximate the geometric quantities using a Gaussian Process emulator which is conditioned on a carefully chosen design set of configuration points, which also determines the quality of the emulator. To this end we propose the use of statistical experiment design methods to refine a potentially arbitrarily initialized design online without destroying the convergence of the resulting Markov chain to the desired invariant measure. The practical examples considered in this paper demonstrate the significant improvement possible in terms of computational loading, suggesting this is a promising avenue of further development.
NASA Astrophysics Data System (ADS)
Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan
2016-04-01
We investigated tumor-targeted nanoparticles that influence heat generation. We suppose that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (cosine of the scattering angle), the heat generated in the tumor area reaches a critical state that can burn the targeted tumor. The amount of heat generated by inserting smart agents, due to surface plasmon resonance, is remarkably high. The light-matter interactions and the trajectories of incident photons upon targeted tissues are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical one by which we can accurately probe photon trajectories within the simulation area.
Parsons, Tom
2008-01-01
Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
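The parameter-ranking idea can be sketched with a lognormal recurrence model (a hedged illustration, not the paper's code: the distribution choice, search ranges, trial count, and interval data are assumptions):

```python
import math
import random

def mc_recurrence_fit(intervals, trials=2000, seed=2):
    """Rank randomly drawn lognormal (mu, sigma) pairs by log-likelihood."""
    rng = random.Random(seed)

    def loglik(mu, sigma):
        # lognormal log-density summed over the observed recurrence intervals
        return sum(
            -math.log(t * sigma * math.sqrt(2 * math.pi))
            - (math.log(t) - mu) ** 2 / (2 * sigma ** 2)
            for t in intervals
        )

    ranked = []
    for _ in range(trials):
        mu = rng.uniform(3.0, 7.0)      # log-mean recurrence (assumed search range)
        sigma = rng.uniform(0.1, 1.5)   # log-standard deviation (assumed search range)
        ranked.append((loglik(mu, sigma), mu, sigma))
    ranked.sort(reverse=True)
    return ranked[:10]                  # the most likely parameter sets
```

The ranked tail of this list, not just the single best draw, is what supports the uncertainty assessment described above.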
Mcclarren, Ryan G; Urbatsch, Todd J
2008-01-01
In this note we develop a robust implicit Monte Carlo (IMC) algorithm based on more accurately updating the linearized equilibrium radiation energy density. The method does not introduce oscillations in the solution and has the same limit as Δt → ∞ as the standard Fleck and Cummings IMC method. Moreover, the approach we introduce can be trivially added to current implementations of IMC by changing the definition of the Fleck factor. Using this new method we develop an adaptive scheme that uses either standard IMC or the modified method, basing the adaptation on a zero-dimensional problem solved in each cell. Numerical results demonstrate that the new method alleviates the nonphysical overheating that occurs in standard IMC when the time step is large and significantly diminishes the statistical noise in the solution.
NASA Astrophysics Data System (ADS)
Grolemund, Daniel Lee
The dissertation concerns the extraction, via signal processing, of structural information from the scattering of low megahertz, low power ultrasonic waves in two specific media of great practical interest--fiber reinforced composites and soft biological tissue. In fiber reinforced composites, this work represents the first measurement of second-order statistics in porous laminates, and the first application of Monte Carlo methods to acoustical scattering in composites. A numerical model of porous composites backscatter was derived which is suitable for direct numerical implementation. The model treats the total backscattered field as the result of a two-mode scattering process. In the first mode, the void-free composite is treated as a continuously varying medium in which the density and compressibility are functions of position. The second mode is the distribution of gas voids that failed to escape the material before gel, and are dealt with as discrete Rayleigh scatterers. Convolution techniques were developed that allowed the numerical model to reproduce the long range order seen in the void-free composite. The results of the Monte Carlo derivation were coded, and simulations run with data sets that duplicate the properties of the composite samples used in the study. In the area of tissue characterization, two leading methods have been proposed to extract structural data from the raw backscattered waveforms. Both techniques were developed from an understanding of the periodicities created by semi-regularly spaced, coherent scatterers. In the second half of the dissertation, a complete analytical and numerical treatment of these two techniques was done from a first principles approach. Computer simulations were performed to determine the general behavior of the algorithms. The main focus is on the envelope correlation spectrum, or ECS. Monte Carlo methods were employed to examine the signal-to-noise ratio of the ECS in terms of the variances of the backscattered
Shuttle vertical fin flowfield by the direct simulation Monte Carlo method
NASA Technical Reports Server (NTRS)
Hueser, J. E.; Brock, F. J.; Melfi, L. T.
1985-01-01
The flow properties in a model flowfield simulating the shuttle vertical fin were determined using the Direct Simulation Monte Carlo method. The case analyzed corresponds to an orbit height of 225 km with the freestream velocity vector orthogonal to the fin surface. Contour plots of the flowfield distributions of density, temperature, velocity, and flow angle are presented. The results also include the mean molecular collision frequency (which reaches 1/60 sec near the surface), the collision frequency density (which approaches 7 × 10¹⁸ m⁻³ s⁻¹ at the surface), and the mean free path (19 m at the surface).
A numerical study of rays in random media. [Monte Carlo method simulation
NASA Technical Reports Server (NTRS)
Youakim, M. Y.; Liu, C. H.; Yeh, K. C.
1973-01-01
Statistics of electromagnetic rays in a random medium are studied numerically by the Monte Carlo method. Two dimensional random surfaces with prescribed correlation functions are used to simulate the random media. Rays are then traced in these sample media. Statistics of the ray properties such as the ray positions and directions are computed. Histograms showing the distributions of the ray positions and directions at different points along the ray path as well as at given points in space are given. The numerical experiment is repeated for different cases corresponding to weakly and strongly random media with isotropic and anisotropic irregularities. Results are compared with those derived from theoretical investigations whenever possible.
Application of the direct simulation Monte Carlo method to the full shuttle geometry
NASA Technical Reports Server (NTRS)
Bird, G. A.
1990-01-01
A new set of programs has been developed for the application of the direct simulation Monte Carlo (or DSMC) method to rarefied gas flows with complex three-dimensional boundaries. The programs are efficient in terms of the computational load and also in terms of the effort required to set up particular cases. This efficiency is illustrated through computations of the flow about the Shuttle Orbiter. The general flow features are illustrated for altitudes from 170 to 100 km. Also, the computed lift-drag ratio during re-entry is compared with flight measurements.
Three-dimensional hypersonic rarefied flow calculations using direct simulation Monte Carlo method
NASA Technical Reports Server (NTRS)
Celenligil, M. Cevdet; Moss, James N.
1993-01-01
A summary of three-dimensional simulations of hypersonic rarefied flows, performed in an effort to understand the highly nonequilibrium flows about space vehicles entering the Earth's atmosphere for a realistic estimation of the aerothermal loads, is presented. Calculations are performed using the direct simulation Monte Carlo method with a five-species reacting gas model, which accounts for rotational and vibrational internal energies. Results are obtained for the external flows about various bodies in the transitional flow regime. For the cases considered, convective heating, flowfield structure, and overall aerodynamic coefficients are presented, and comparisons are made with the available experimental data. The agreement between the calculated and measured results is very good.
A novel shielding scheme studied by the Monte Carlo method for electron beam radiotherapy.
Yue, Kun; Yao, Yuan; Dong, Xiaoqing; Luo, Wenyun
2013-03-01
Lead that has been employed widely for shielding in electron beam radiotherapy can produce bremsstrahlung photons during the shielding process. A novel shielding scheme with a two-layer structure has been studied using a Monte Carlo method in order to reduce this bremsstrahlung effect. Compared with the conventional lead, the novel shielding scheme, comprised of a Styrene-Ethylene-Butylene-Styrene Block Co-polymer (SEBS) above and lead below, can efficiently reduce the generation of bremsstrahlung while providing better shielding for incident electrons. Therefore, this novel shielding scheme may play an important role in future applications.
Green, P. L.; Worden, K.
2015-01-01
In this paper, the authors outline the general principles behind an approach to Bayesian system identification and highlight the benefits of adopting a Bayesian framework when attempting to identify models of nonlinear dynamical systems in the presence of uncertainty. It is then described how, through a summary of some key algorithms, many of the potential difficulties associated with a Bayesian approach can be overcome through the use of Markov chain Monte Carlo (MCMC) methods. The paper concludes with a case study, where an MCMC algorithm is used to facilitate the Bayesian system identification of a nonlinear dynamical system from experimentally observed acceleration time histories. PMID:26303916
Gu, M G; Kong, F H
1998-06-23
We propose a general procedure for solving incomplete-data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
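The stochastic-approximation principle underlying the procedure can be illustrated with the classic Robbins-Monro recursion (a minimal sketch, not the authors' estimator; the target, noise model, and 1/k step-size schedule are illustrative):

```python
import random

def robbins_monro(grad_sample, theta0=0.0, steps=5000, seed=4):
    """Robbins-Monro recursion: theta_{k+1} = theta_k + (1/k) * noisy residual."""
    rng = random.Random(seed)
    theta = theta0
    for k in range(1, steps + 1):
        theta += grad_sample(theta, rng) / k
    return theta

# The root of E[g(theta, X)] = 0 with g = X - theta and X ~ N(2, 1) is theta = 2;
# the iterates converge there despite every individual residual being noisy.
est = robbins_monro(lambda th, rng: rng.gauss(2.0, 1.0) - th)
```

In the paper's setting the noisy residual comes from an MCMC sample rather than an independent Gaussian draw, but the averaging mechanism is the same.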
Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
NASA Astrophysics Data System (ADS)
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation time. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision and ultimately to reduce the accurate-radiotherapy dose-calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport is presented with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed is increased with a slight reduction in calculation accuracy; second, a variety of MC acceleration methods are applied, for example, making use of information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the MC convergence rate. The fast MC method was tested on many simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate-radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.
A new Monte Carlo method for dynamical evolution of non-spherical stellar systems
NASA Astrophysics Data System (ADS)
Vasiliev, Eugene
2015-01-01
We have developed a novel Monte Carlo method for simulating the dynamical evolution of stellar systems in arbitrary geometry. The orbits of stars are followed in a smooth potential represented by a basis-set expansion and perturbed after each timestep using local velocity diffusion coefficients from the standard two-body relaxation theory. The potential and diffusion coefficients are updated after an interval of time that is a small fraction of the relaxation time, but may be longer than the dynamical time. Thus, our approach is a bridge between Spitzer's formulation of the Monte Carlo method and the temporally smoothed self-consistent field method. The primary advantages are the ability to follow the secular evolution of the shape of the stellar system, and the possibility of scaling the amount of two-body relaxation to the necessary value, unrelated to the actual number of particles in the simulation. Possible future applications of this approach in galaxy dynamics include the problem of consumption of stars by a massive black hole in a non-spherical galactic nucleus, the evolution of binary supermassive black holes, and the influence of chaos on the shape of galaxies, while for globular clusters it may be used for studying the influence of rotation.
Implementation of the direct S(α,β) method in the KENO Monte Carlo code
Hart, Shane W. D.; Maldonado, G. Ivan
2016-11-25
The Monte Carlo code KENO contains thermal scattering data for a wide variety of thermal moderators. These data are processed from Evaluated Nuclear Data Files (ENDF) by AMPX and stored as double differential probability distribution functions. The method examined in this study uses S(α,β) probability distribution functions derived directly from the ENDF data files instead of converting them to double differential cross sections. This reduces the size of the cross-section data on disk substantially. KENO has also been updated to allow interpolation in temperature on these data, so that problems can be run at any temperature. Results are shown for several simplified problems with a variety of moderators. In addition, benchmark models based on the KRITZ reactor in Sweden were run, and the results are compared with those from previous versions of KENO without the direct S(α,β) method. Results from the direct S(α,β) method compare favorably with the original results obtained using the double differential cross sections. Finally, sampling the data increases the run time of the Monte Carlo calculation, but memory usage is decreased substantially.
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
NASA Astrophysics Data System (ADS)
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.
2013-02-01
Diffusion Monte Carlo is a highly accurate Quantum Monte Carlo method for electronic structure calculations of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency on parallel machines. This step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm: a simple renumbering of the MPI ranks based on proximity and a space filling curve significantly improves the MPI Allgather performance. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near identical computational tasks that require load balancing.
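The load-balancing scheme above takes its name from Walker's alias method for discrete sampling, in which every slot either keeps its own entry or defers to exactly one "alias" — the analogue of each process receiving at most one message. A minimal sketch of the classic table construction and O(1) draw (the function names are ours, not from the paper):

```python
import random

def build_alias(weights):
    """Build Walker alias tables: O(n) setup, then O(1) per draw."""
    n = len(weights)
    total = sum(weights)
    prob = [w * n / total for w in weights]   # rescaled so the mean is 1
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                  # the 'donor' column that tops up slot s
        prob[l] -= 1.0 - prob[s]      # donor hands over its excess
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def draw(prob, alias):
    """Pick a slot uniformly, then keep it or take its alias."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]
```

In the load-balancing setting, the same pairing of under- and over-loaded entries dictates which over-loaded process sends work to which under-loaded one, so no receiver ever needs more than one message.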
Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method
Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin
2015-12-31
The predictor-corrector quasi-static (PCQS) method is applied to Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used, and a linear approximation of the fission source distribution during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem to verify the computer code. Results on a continuous-energy problem are then presented.
Hunter, J. L.; Sutton, T. M.
2013-07-01
In Monte Carlo iterated-fission-source calculations, relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while leaving the solution unbiased. The method is integrated into the source iteration process and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the method reduces the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested, and both have been shown to be effective.
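A toy illustration of the biasing idea just described: draw the same number of source sites from every region, and divide each site's statistical weight by the biasing factor so the fission source itself stays unbiased. This is only a schematic of the principle; the function names and region bookkeeping are invented here, not the paper's implementation:

```python
import random

def resample_fission_sites(sites, region_of, n_regions, n_total):
    """Draw an equal number of source sites per region and correct each
    site's statistical weight so the fission source stays unbiased."""
    by_region = [[] for _ in range(n_regions)]
    for s in sites:
        by_region[region_of(s)].append(s)
    per_region = n_total // n_regions
    out = []
    for bucket in by_region:
        if not bucket:
            continue  # empty regions are simply skipped in this sketch
        # the region carried len(bucket)/len(sites) of the true source but
        # now emits per_region/n_total of the sampled sites, hence the ratio
        w = (len(bucket) / len(sites)) / (per_region / n_total)
        for _ in range(per_region):
            out.append((random.choice(bucket), w))
    return out
```

Low-power regions thus get more sites than their share of the source, each carrying a weight below one, which is how the largest tally uncertainties are pushed down.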
Investigation of a New Monte Carlo Method for the Transitional Gas Flow
NASA Astrophysics Data System (ADS)
Luo, X.; Day, Chr.
2011-05-01
The Direct Simulation Monte Carlo method (DSMC) is well developed for rarefied gas flow in transition flow regime when 0.01
Adapting phase-switch Monte Carlo method for flexible organic molecules
NASA Astrophysics Data System (ADS)
Bridgwater, Sally; Quigley, David
2014-03-01
The role of cholesterol in lipid bilayers has been widely studied via molecular simulation; however, there has been relatively little work on crystalline cholesterol in biological environments. Recent work has linked the crystallisation of cholesterol in the body with heart attacks and strokes. Any attempt to model this process will require new models and advanced sampling methods to capture and quantify the subtle polymorphism of solid cholesterol, in which two crystalline phases are separated by a phase transition close to body temperature. To this end, we have adapted phase-switch Monte Carlo for use with flexible molecules to calculate the free energy difference between crystal polymorphs to a high degree of accuracy. The method samples an order parameter which divides the displacement space of the N molecules into regions energetically favourable for each polymorph, and which is traversed using biased Monte Carlo. Results for a simple model of butane will be presented, demonstrating that conformational flexibility can be correctly incorporated within a phase-switching scheme. Extension to a coarse-grained model of cholesterol and the resulting free energies will be discussed.
Stochastic modeling of polarized light scattering using a Monte Carlo based stencil method.
Sormaz, Milos; Stamm, Tobias; Jenny, Patrick
2010-05-01
This paper deals with an efficient and accurate simulation algorithm for solving the vector Boltzmann equation for polarized light transport in scattering media. The approach is based on a stencil method, which was previously developed for unpolarized light scattering and proved to be much more efficient than classical Monte Carlo (speedup factors of up to 10 were reported) while being equally accurate. To validate the new stencil method, a substrate composed of spherical non-absorbing particles embedded in a non-absorbing medium was considered. The corresponding single-scattering Mueller matrix, which is required to model scattering of polarized light, was determined from Lorenz-Mie theory. From simulations of a reflected polarized laser beam, the Mueller matrix of the substrate was computed and compared with an established reference. The agreement is excellent, and a significant speedup over classical Monte Carlo was demonstrated thanks to the stencil approach.
NASA Astrophysics Data System (ADS)
Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Kim, S.
2011-04-01
Data assimilation techniques have been widely used to improve hydrologic prediction. Among them, sequential Monte Carlo (SMC) methods, known as particle filters, can handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach that accounts for the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach that aggregates the model response until the uncertainty of each hydrologic process has propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, WEP, is used for sequential data assimilation through the updating of state variables. The particle filtering is parallelized in a multi-core computing environment via the message passing interface (MPI). We compare the performance of the particle filters in terms of model efficiency, predictive QQ plots, and particle diversity. The lagged regularized particle filter improves model efficiency while preserving particle diversity.
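For reference, one plain sequential-importance-resampling step — propagate, reweight, resample — which is the baseline that lagged and regularized variants like the one above refine. A minimal sketch with invented helper names:

```python
import math, random

def sir_step(particles, weights, transition, likelihood, obs):
    """One sequential-importance-resampling (particle filter) step:
    propagate each particle, reweight by the observation likelihood,
    then systematic-resample to fight weight degeneracy."""
    particles = [transition(x) for x in particles]
    weights = [w * likelihood(obs, x) for w, x in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # systematic resampling: one uniform offset, n evenly spaced pointers
    n = len(particles)
    u = random.random()
    positions = [(i + u) / n for i in range(n)]
    cum, c = [], 0.0
    for w in weights:
        c += w
        cum.append(c)
    out, j = [], 0
    for p in positions:
        while j < n - 1 and cum[j] < p:
            j += 1
        out.append(particles[j])
    return out, [1.0 / n] * n
```

The regularization (MCMC move) step in the abstract would follow the resampling here, jittering the duplicated particles to restore diversity.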
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM in combination with Monte Carlo simulation (MCS) reduces the computational effort and makes rapid random sampling possible. The inverse uncertainty propagation is formulated as the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function with a hybrid particle-swarm and Nelder-Mead simplex optimization method, achieving better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
Use of Bayesian Markov Chain Monte Carlo methods to model cost-of-illness data.
Cooper, Nicola J; Sutton, Alex J; Mugford, Miranda; Abrams, Keith R
2003-01-01
It is well known that the modeling of cost data is often problematic due to the distribution of such data. Commonly observed problems include 1) a strongly right-skewed data distribution and 2) a significant percentage of zero-cost observations. This article demonstrates how a hurdle model can be implemented from a Bayesian perspective by means of Markov Chain Monte Carlo simulation methods using the freely available software WinBUGS. Assessment of model fit is addressed through the implementation of two cross-validation methods. The relative merits of this Bayesian approach compared to the classical equivalent are discussed in detail. To illustrate the methods described, patient-specific non-health-care resource-use data from a prospective longitudinal study and the Norfolk Arthritis Register (NOAR) are utilized for 218 individuals with early inflammatory polyarthritis (IP). The NOAR database also includes information on various patient-level covariates.
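A hurdle model handles the two problems listed above by splitting the likelihood: a Bernoulli gate for the zero-cost observations and a right-skewed density (here lognormal) for the positive costs. The following sketch is a minimal stand-in for the WinBUGS fit described in the article; the one-parameter Metropolis sampler and the fixed lognormal parameters are simplifications of ours:

```python
import math, random

def hurdle_loglik(costs, p_zero, mu, sigma):
    """Log-likelihood of a two-part ('hurdle') cost model: a Bernoulli gate
    for zero costs and a lognormal for the right-skewed positive costs."""
    ll = 0.0
    for c in costs:
        if c == 0:
            ll += math.log(p_zero)
        else:
            z = (math.log(c) - mu) / sigma
            ll += (math.log(1.0 - p_zero)
                   - math.log(c * sigma * math.sqrt(2.0 * math.pi))
                   - 0.5 * z * z)
    return ll

def metropolis_p_zero(costs, n_iter=3000, step=0.05):
    """Random-walk Metropolis on the hurdle probability alone (flat prior,
    lognormal part held fixed) -- a toy version of a full MCMC fit."""
    p, chain = 0.5, []
    for _ in range(n_iter):
        q = p + random.uniform(-step, step)
        if 0.0 < q < 1.0:
            log_ratio = (hurdle_loglik(costs, q, 0.0, 1.0)
                         - hurdle_loglik(costs, p, 0.0, 1.0))
            if log_ratio > math.log(random.random()):
                p = q
        chain.append(p)
    return chain
```

In the full Bayesian treatment all parameters (hurdle probability, location, scale, and covariate effects) are sampled jointly; this sketch only shows the mechanics for the hurdle term.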
Torsional diffusion Monte Carlo: A method for quantum simulations of proteins
NASA Astrophysics Data System (ADS)
Clary, David C.
2001-06-01
The quantum diffusion Monte Carlo (DMC) method is extended to the treatment of coupled torsional motions in proteins. A general algorithm and computer program have been developed by interfacing this torsional-DMC method with all-atom force fields for proteins. The method gives the zero-point energy and atomic coordinates averaged over the coupled torsional motions in the quantum ground state of the protein. The new algorithm is applied to the proteins gelsolin (356 atoms and 142 torsions) and gp41-HIV (1101 atoms and 452 torsions). The results indicate that quantum-dynamical effects are important for the energies and geometries of typical proteins such as these.
Auxiliary-field quantum Monte Carlo method for strongly paired fermions
Carlson, J.; Gandolfi, Stefano; Schmidt, Kevin E.; Zhang, Shiwei
2011-12-15
We solve the zero-temperature unitary Fermi gas problem by incorporating a BCS importance function into the auxiliary-field quantum Monte Carlo method. We demonstrate that this method does not suffer from a sign problem and that it increases the efficiency of standard techniques by many orders of magnitude for strongly paired fermions. We calculate the ground-state energies exactly for unpolarized systems with up to 66 particles on lattices of up to 27³ sites, obtaining an accurate result for the universal parameter ξ. We also obtain results for interactions with different effective ranges and find that the energy is consistent with a universal linear dependence on the product of the Fermi momentum and the effective range. This method will have many applications in superfluid cold atom systems and in both electronic and nuclear structures where pairing is important.
Uncertainty analysis using Monte Carlo method in the measurement of phase by ESPI
Anguiano Morales, Marcelino; Martinez, Amalia; Rayas, J. A.; Cordero, Raul R.
2008-04-15
A method for simultaneously measuring whole-field in-plane displacements using optical fibers, based on the dual-beam illumination principle of electronic speckle pattern interferometry (ESPI), is presented in this paper. A set of single-mode optical fibers and a beamsplitter are employed to split the laser beam into four beams of equal intensity. One pair of fibers illuminates the sample in the horizontal plane, so it is sensitive only to horizontal in-plane displacement. Another pair of optical fibers is set to be sensitive only to vertical in-plane displacement. The fibers in each pair differ in length to avoid unwanted interference. By means of a Fourier-transform method of fringe-pattern analysis (the Takeda method), we obtain quantitative whole-field displacement data. The uncertainty associated with the phases was found by means of a Monte Carlo-based technique.
Self-learning kinetic Monte Carlo method: Application to Cu(111)
NASA Astrophysics Data System (ADS)
Trushin, Oleg; Karim, Altaf; Kara, Abdelkader; Rahman, Talat S.
2005-09-01
We present a method of performing kinetic Monte Carlo simulations that does not require an a priori list of diffusion processes and their associated energetics and reaction rates. Rather, at any time during the simulation, energetics for all possible (single- or multiatom) processes, within a specific interaction range, are either computed accurately using a saddle-point search procedure, or retrieved from a database in which previously encountered processes are stored. This self-learning procedure enhances the speed of the simulations along with a substantial gain in reliability because of the inclusion of many-particle processes. Accompanying results from the application of the method to the case of two-dimensional Cu adatom-cluster diffusion and coalescence on Cu(111) with detailed statistics of involved atomistic processes and contributing diffusion coefficients attest to the suitability of the method for the purpose.
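The self-learning bookkeeping described above can be reduced to a cache keyed by the local atomic environment: a barrier is computed by a saddle-point search the first time an environment is encountered and retrieved from the database thereafter, with standard BKL-style rate selection. In this sketch the class, method names, and the stubbed saddle search are illustrative, not the paper's code:

```python
import math, random

KB = 8.617e-5  # Boltzmann constant in eV/K

class SelfLearningKMC:
    """Sketch of self-learning KMC bookkeeping: barriers for each local
    environment are computed once and cached for reuse."""
    def __init__(self, temperature, prefactor=1e12):
        self.T = temperature
        self.nu0 = prefactor
        self.catalog = {}   # environment signature -> barrier (eV)

    def barrier(self, env):
        if env not in self.catalog:
            self.catalog[env] = self.saddle_search(env)  # expensive, done once
        return self.catalog[env]

    def saddle_search(self, env):
        # placeholder for a real saddle-point search; returns a made-up barrier
        return 0.1 + 0.05 * len(env)

    def rate(self, env):
        """Harmonic transition-state rate: nu0 * exp(-Ea / kB T)."""
        return self.nu0 * math.exp(-self.barrier(env) / (KB * self.T))

    def choose(self, envs):
        """BKL selection: pick a process with probability proportional to its rate."""
        rates = [self.rate(e) for e in envs]
        r = random.random() * sum(rates)
        acc = 0.0
        for e, k in zip(envs, rates):
            acc += k
            if r <= acc:
                return e
        return envs[-1]
```

The speedup comes from the cache hit rate: after the early stages of a simulation, almost every encountered environment is already in the catalog.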
Lu, Dan; Zhang, Guannan; Webster, Clayton G.; Barbier, Charlotte N.
2016-12-30
In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, that require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
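The key step described above — replacing the discontinuous indicator with a smoothing function inside a telescoping MLMC sum — can be sketched as follows. The tanh surrogate and the coupling convention are our simplifications; the paper calibrates its smoothing function a posteriori:

```python
import math, random

def smooth_indicator(t):
    """Smooth surrogate for the step function 1{t >= 0}; a calibrated
    version of such a function makes the level variance decay faster."""
    return 0.5 * (1.0 + math.tanh(t))

def mlmc_cdf(sampler, y, levels, n_per_level, delta=0.1):
    """Telescoping MLMC estimate of P(Q <= y) with a smoothed indicator.
    sampler(l) must return (Q_fine, Q_coarse) computed from the SAME random
    input (coupled); the coarse term is dropped on level 0."""
    est = 0.0
    for l in range(levels):
        acc = 0.0
        for _ in range(n_per_level[l]):
            qf, qc = sampler(l)
            fine = smooth_indicator((y - qf) / delta)
            coarse = 0.0 if l == 0 else smooth_indicator((y - qc) / delta)
            acc += fine - coarse   # level-l correction term
        est += acc / n_per_level[l]
    return est
```

With a sharp indicator, the correction `fine - coarse` is nonzero only when the sample straddles the threshold, so its variance decays slowly; the smoothing width `delta` trades that variance against a controllable bias.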
Perfetti, Christopher M.; Rearden, Bradley T.
2016-03-01
The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.
2012-07-01
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment, a modified layout of the original thermal VENUS critical facility is coupled to an accelerator, built by the French CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified layout of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation, by deterministic (the French code ERANOS) and Monte Carlo (the US code MCNPX) calculations, of three reactivity measurement techniques, Slope (α-fitting), Area-ratio, and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration.
Gong, Xingchu; Li, Yao; Chen, Huali; Qu, Haibin
2015-01-01
A design space approach was applied to optimize the extraction process of Danhong injection. Dry matter yield and the yields of five active ingredients were selected as process critical quality attributes (CQAs). Extraction number, extraction time, and the mass ratio of water and material (W/M ratio) were selected as critical process parameters (CPPs). Quadratic models between CPPs and CQAs were developed with determination coefficients higher than 0.94. Active ingredient yields and dry matter yield increased as the extraction number increased. Monte-Carlo simulation with models established using a stepwise regression method was applied to calculate the probability-based design space. Step length showed little effect on the calculation results. Higher simulation number led to results with lower dispersion. Data generated in a Monte Carlo simulation following a normal distribution led to a design space with a smaller size. An optimized calculation condition was obtained with 10000 simulation times, 0.01 calculation step length, a significance level value of 0.35 for adding or removing terms in a stepwise regression, and a normal distribution for data generation. The design space with a probability higher than 0.95 to attain the CQA criteria was calculated and verified successfully. Normal operating ranges of 8.2-10 g/g of W/M ratio, 1.25-1.63 h of extraction time, and two extractions were recommended. The optimized calculation conditions can conveniently be used in design space development for other pharmaceutical processes. PMID:26020778
Applications of the Monte Carlo method in nuclear physics using the GEANT4 toolkit
Moralles, Mauricio; Guimaraes, Carla C.; Menezes, Mario O.; Bonifacio, Daniel A. B.; Okuno, Emico; Guimaraes, Valdir; Murata, Helio M.; Bottaro, Marcio
2009-06-03
The capabilities of personal computers allow the application of Monte Carlo methods to simulate very complex problems involving the transport of particles through matter. Among the several codes commonly employed in nuclear physics problems, GEANT4 has received great attention in recent years, mainly due to its flexibility and the possibility for users to improve it. Unlike other Monte Carlo codes, GEANT4 is a toolkit written in an object-oriented language (C++) that includes the mathematical engines of several physical processes, suitable for the transport of practically all types of particles and heavy ions. GEANT4 also has several tools to define materials, geometry, sources of radiation, beams of particles, electromagnetic fields, and graphical visualization of the experimental setup. After a brief description of the GEANT4 toolkit, this presentation reports investigations carried out by our group involving simulations in the areas of dosimetry, nuclear instrumentation, and medical physics. The physical processes available for photons, electrons, positrons, and heavy ions were used in these simulations.
NASA Astrophysics Data System (ADS)
Lima, Ivan T., Jr.; Kalra, Anshul; Hernández-Figueroa, Hugo E.; Sherif, Sherif S.
2012-03-01
Computer simulations of light transport in multi-layered turbid media are an effective way to theoretically investigate light transport in tissue, which can be applied to the analysis, design and optimization of optical coherence tomography (OCT) systems. We present a computationally efficient method to calculate the diffuse reflectance due to ballistic and quasi-ballistic components of photons scattered in turbid media, which represents the signal in optical coherence tomography systems. Our importance sampling based Monte Carlo method enables the calculation of the OCT signal with less than one hundredth of the computational time required by the conventional Monte Carlo method. It also does not produce a systematic bias in the statistical result that is typically observed in existing methods to speed up Monte Carlo simulations of light transport in tissue. This method can be used to assess and optimize the performance of existing OCT systems, and it can also be used to design novel OCT systems.
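The generic mechanism behind such importance sampling is to draw scattering directions from a biased density that favors the (quasi-)ballistic, detector-bound paths, and to multiply each photon's contribution by the likelihood ratio of the true to the biased density so the estimate stays unbiased. A self-contained toy with an exponential angular bias (the bias family and parameter `a` are our invention, not the paper's scheme):

```python
import math, random

def sample_biased_cos(a):
    """Sample mu = cos(theta) from the biased density p(mu) ∝ exp(a*mu)
    on [-1, 1], which pushes photons toward forward (mu ≈ 1) directions."""
    u = random.random()
    # inverse CDF of exp(a*mu)/Z on [-1, 1]
    return math.log(math.exp(-a) + u * (math.exp(a) - math.exp(-a))) / a

def is_weight(mu, a):
    """Likelihood ratio: true isotropic density (1/2) over the biased one."""
    z = (math.exp(a) - math.exp(-a)) / a
    biased = math.exp(a * mu) / z
    return 0.5 / biased
```

Averaging `is_weight(mu, a)` over biased draws that land in a rare forward cone reproduces the isotropic probability of that cone with far fewer samples, which is exactly how the quasi-ballistic OCT signal is made cheap to estimate.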
Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method
NASA Technical Reports Server (NTRS)
Boyd, Iain D.
1991-01-01
A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies Landau-Teller theory for a harmonic oscillator, and the transition rate is related to an experimental correlation for the vibrational relaxation time. The model is assessed with three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge-flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.
Estimating the Probability of Asteroid Collision with the Earth by the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Chernitsov, A. M.; Tamarov, V. A.; Barannikov, E. A.
2016-09-01
The commonly accepted method of estimating the probability of asteroid collision with the Earth is investigated using two fictitious asteroids, one of which must obviously collide with the Earth while the second must pass at a dangerous distance from it. The simplest Kepler model of motion is used. Confidence regions of asteroid motion are estimated by the Monte Carlo method. Two variants of constructing the confidence region are considered: points distributed over the entire volume, and points mapped onto its boundary surface. A special feature of the multidimensional point distribution in the first variant, which can lead to a zero collision probability for bodies that do in fact collide with the Earth, is demonstrated. Probability estimates obtained with the boundary-surface representation of the confidence region are free from this disadvantage, even for a considerably smaller number of points.
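The basic Monte Carlo estimator behind both variants can be sketched as follows. This is a toy 2-D encounter-plane model with a Gaussian confidence region and hypothetical units, not the paper's Kepler-model machinery.

```python
import math
import random

def collision_probability(mean_miss, sigma, earth_radius, n_samples, seed=0):
    """Monte Carlo collision-probability estimate: sample the asteroid's
    encounter-plane position from a Gaussian confidence region centered on
    the nominal miss vector, and count the fraction of samples falling
    inside the Earth's capture radius."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x = rng.gauss(mean_miss[0], sigma)
        y = rng.gauss(mean_miss[1], sigma)
        if math.hypot(x, y) < earth_radius:
            hits += 1
    return hits / n_samples

# An orbit aimed at the Earth's center: nearly every sample collides.
p_hit = collision_probability((0.0, 0.0), sigma=0.1, earth_radius=1.0, n_samples=10000)
# A pass well outside the capture radius: essentially no samples collide.
p_miss = collision_probability((10.0, 0.0), sigma=0.1, earth_radius=1.0, n_samples=10000)
```

The paper's point is that in high-dimensional orbital-element spaces the volume-filling variant can miss the thin colliding subset entirely; the boundary-surface variant concentrates the points where they matter.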
Monte Carlo methods for localization of cones given multielectrode retinal ganglion cell recordings.
Sadeghi, K; Gauthier, J L; Field, G D; Greschner, M; Agne, M; Chichilnisky, E J; Paninski, L
2013-01-01
It has recently become possible to identify cone photoreceptors in primate retina from multi-electrode recordings of ganglion cell spiking driven by visual stimuli of sufficiently high spatial resolution. In this paper we present a statistical approach to the problem of identifying the number, locations, and color types of the cones observed in this type of experiment. We develop an adaptive Markov Chain Monte Carlo (MCMC) method that explores the space of cone configurations, using a Linear-Nonlinear-Poisson (LNP) encoding model of ganglion cell spiking output, while analytically integrating out the functional weights between cones and ganglion cells. This method provides information about our posterior certainty about the inferred cone properties, and additionally leads to improvements in both the speed and quality of the inferred cone maps, compared to earlier "greedy" computational approaches.
Monte Carlo method for prediction of cardiac toxicity: hERG blocker compounds.
Gobbi, Marco; Beeg, Marten; Toropova, Mariya A; Toropov, Andrey A; Salmona, Mario
2016-05-27
The estimation of the cardiotoxicity of compounds is an important task for drug discovery as well as for risk assessment in an ecological context. Experimental estimation of this endpoint is complex and expensive; theoretical computational methods are therefore a very attractive alternative to direct experiment. A model for the cardiac toxicity (pIC50) of 400 hERG blocker compounds was built using the Monte Carlo method. Three different splits into a visible training set (in fact, the training set plus the calibration set) and an invisible validation set were examined. The predictive potential is very good for all examined splits. The statistical characteristics for the external validation set are (i) a coefficient of determination r^2 = 0.90-0.93 and (ii) a root-mean-squared error s = 0.30-0.40.
DSMC calculations for the double ellipse. [direct simulation Monte Carlo method
NASA Technical Reports Server (NTRS)
Moss, James N.; Price, Joseph M.; Celenligil, M. Cevdet
1990-01-01
The direct simulation Monte Carlo (DSMC) method involves the simultaneous computation of the trajectories of thousands of simulated molecules in simulated physical space. Rarefied flow about the double ellipse for test case 6.4.1 has been calculated with the DSMC method of Bird. The gas is assumed to be nonreacting nitrogen flowing at a 30 degree incidence with respect to the body axis, and for the surface boundary conditions, the wall is assumed to be diffuse with full thermal accommodation and at a constant wall temperature of 620 K. A parametric study is presented that considers the effect of variations of computational domain, gas model, cell size, and freestream density on surface quantities.
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
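A toy version of this random-search estimation can be sketched as follows: a single-exponential diastolic-decay Windkessel fit with hypothetical parameter ranges, not the authors' full three-element implementation.

```python
import math
import random

def windkessel_pressure(t, p0, Rp, C):
    """Diastolic pressure decay of a Windkessel model: p(t) = p0*exp(-t/(Rp*C))."""
    return p0 * math.exp(-t / (Rp * C))

def mc_fit(times, measured, p0, n_trials=20000, seed=1):
    """Monte Carlo parameter search: randomly draw (Rp, C) pairs and keep the
    pair that minimizes the squared error against the measured pressures."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_trials):
        Rp = rng.uniform(0.1, 3.0)   # peripheral resistance (arbitrary units)
        C = rng.uniform(0.1, 3.0)    # arterial compliance (arbitrary units)
        err = sum((windkessel_pressure(t, p0, Rp, C) - m) ** 2
                  for t, m in zip(times, measured))
        if err < best_err:
            best, best_err = (Rp, C), err
    return best, best_err

# Synthetic "measured" data generated with Rp*C = 1.0.
times = [0.1 * i for i in range(10)]
measured = [windkessel_pressure(t, 100.0, 1.0, 1.0) for t in times]
(Rp_est, C_est), err = mc_fit(times, measured, p0=100.0)
```

Note that the pressure decay constrains only the product Rp*C, so Rp and C are not individually identifiable from this fit; that degeneracy mirrors the abstract's point that the resistance must be fixed from the measured flow to obtain a unique parameter set.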
Constrained Monte Carlo method and calculation of the temperature dependence of magnetic anisotropy
NASA Astrophysics Data System (ADS)
Asselin, P.; Evans, R. F. L.; Barker, J.; Chantrell, R. W.; Yanes, R.; Chubykalo-Fesenko, O.; Hinzke, D.; Nowak, U.
2010-08-01
We introduce a constrained Monte Carlo method which allows us to traverse the phase space of a classical spin system while fixing the magnetization direction. Subsequently we show the method's capability to model the temperature dependence of magnetic anisotropy, and for bulk uniaxial and cubic anisotropies we recover the low-temperature Callen-Callen power laws in M. We also calculate the temperature scaling of the two-ion anisotropy in L10 FePt, and recover the experimentally observed M^2.1 scaling. The method is newly applied to evaluate the temperature-dependent effective anisotropy in the presence of the Néel surface anisotropy in thin films with different easy-axis configurations. In systems having different surface and bulk easy axes, we show the capability to model the temperature-induced reorientation transition. The intrinsic surface anisotropy is found to follow a linear temperature behavior in a large range of temperatures.
A Variable Coefficient Method for Accurate Monte Carlo Simulation of Dynamic Asset Price
NASA Astrophysics Data System (ADS)
Li, Yiming; Hung, Chih-Young; Yu, Shao-Ming; Chiang, Su-Yun; Chiang, Yi-Hui; Cheng, Hui-Wen
2007-07-01
In this work, we propose an adaptive Monte Carlo (MC) simulation technique to compute the sample paths for the dynamic asset price. In contrast to conventional MC simulation with constant drift and volatility (μ,σ), our MC simulation is performed with variable-coefficient methods for (μ,σ) in the solution scheme, where the explored dynamic asset pricing model starts from the formulation of geometric Brownian motion. With the method of simultaneously updated (μ,σ), more than 5,000 runs of MC simulation are performed to fulfill the basic accuracy requirements of the large-scale computation and to suppress statistical variance. Daily changes of the stock market index in Taiwan and Japan are investigated and analyzed.
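The path generation can be sketched as follows. The functions mu_fn and sigma_fn are the hook where the variable coefficients would enter; the demo below uses constant coefficients purely so the sample mean can be checked against the analytic E[S_T] = S0*exp(μT). All numbers are illustrative, not the paper's calibrated values.

```python
import math
import random

def gbm_path(s0, mu_fn, sigma_fn, dt, steps, rng):
    """Simulate one sample path of geometric Brownian motion with
    time-varying drift mu(t) and volatility sigma(t), using the exact
    log-space update per step:
    S_{t+dt} = S_t * exp((mu - 0.5*sigma^2)*dt + sigma*sqrt(dt)*Z)."""
    s, t = s0, 0.0
    for _ in range(steps):
        mu, sigma = mu_fn(t), sigma_fn(t)
        z = rng.gauss(0.0, 1.0)
        s *= math.exp((mu - 0.5 * sigma * sigma) * dt + sigma * math.sqrt(dt) * z)
        t += dt
    return s

rng = random.Random(42)
# 5,000 one-year paths with constant coefficients for reference:
# the sample mean should approach S0*exp(mu*T) = 100*exp(0.05) ~ 105.1.
paths = [gbm_path(100.0, lambda t: 0.05, lambda t: 0.2, 1 / 252, 252, rng)
         for _ in range(5000)]
mean_price = sum(paths) / len(paths)
```

Replacing the lambdas with functions that re-estimate (μ,σ) along the path is the "simultaneously updated" variant the abstract describes.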
An improved random walk algorithm for the implicit Monte Carlo method
NASA Astrophysics Data System (ADS)
Keady, Kendra P.; Cleveland, Mathew A.
2017-01-01
In this work, we introduce a modified Implicit Monte Carlo (IMC) Random Walk (RW) algorithm, which increases simulation efficiency for multigroup radiative transfer problems with strongly frequency-dependent opacities. To date, the RW method has only been implemented in "fully-gray" form; that is, the multigroup IMC opacities are group-collapsed over the full frequency domain of the problem to obtain a gray diffusion problem for RW. This formulation works well for problems with large spatial cells and/or opacities that are weakly dependent on frequency; however, the efficiency of the RW method degrades when the spatial cells are thin or the opacities are a strong function of frequency. To address this inefficiency, we introduce a RW frequency group cutoff in each spatial cell, which divides the frequency domain into optically thick and optically thin components. In the modified algorithm, opacities for the RW diffusion problem are obtained by group-collapsing IMC opacities below the frequency group cutoff. Particles with frequencies above the cutoff are transported via standard IMC, while particles below the cutoff are eligible for RW. This greatly increases the total number of RW steps taken per IMC time-step, which in turn improves the efficiency of the simulation. We refer to this new method as Partially-Gray Random Walk (PGRW). We present numerical results for several multigroup radiative transfer problems, which show that the PGRW method is significantly more efficient than standard RW for several problems of interest. In general, PGRW decreases runtimes by a factor of ∼2-4 compared to standard RW, and a factor of ∼3-6 compared to standard IMC. While PGRW is slower than frequency-dependent Discrete Diffusion Monte Carlo (DDMC), it is also easier to adapt to unstructured meshes and can be used in spatial cells where DDMC is not applicable. This suggests that it may be optimal to employ both DDMC and PGRW in a single simulation.
NASA Astrophysics Data System (ADS)
Carmeli, Benny; Metiu, Horia
1987-02-01
We calculate the equilibrium properties of a system consisting of two strongly interacting quantum and classical subsystems, by using a fast Fourier transform method to evaluate the quantum contribution and a Monte Carlo method to evaluate the contribution of the classical part. The method is applied to a model relevant to tunneling problems.
Coherent-wave Monte Carlo method for simulating light propagation in tissue
NASA Astrophysics Data System (ADS)
Kraszewski, Maciej; Pluciński, Jerzy
2016-03-01
Simulating the propagation and scattering of coherent light in turbid media, such as biological tissues, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g. finite-difference or finite-element methods) require a large amount of computer memory and long computation times. This makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, can simulate only the propagation of light averaged over the ensemble of turbid-medium realizations. This makes them unsuitable for simulating phenomena connected to the coherence properties of light. We propose a new method for simulating the propagation of coherent light (e.g. a laser beam) in biological tissue, which we call the Coherent-Wave Monte Carlo method. This method is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the memory and computation time required for simulation. We present the theoretical basis of the proposed method and its comparison with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.
Seismic wavefield imaging based on the replica exchange Monte Carlo method
NASA Astrophysics Data System (ADS)
Kano, Masayuki; Nagao, Hiromichi; Ishikawa, Daichi; Ito, Shin-ichi; Sakai, Shin'ichi; Nakagawa, Shigeki; Hori, Muneo; Hirata, Naoshi
2016-11-01
Earthquakes sometimes cause serious disasters not only directly by ground motion itself but also secondarily by infrastructure damage, particularly in densely populated urban areas that have capital functions. To reduce the number and severity of secondary disasters, it is important to evaluate seismic hazards rapidly by analyzing the seismic responses of individual structures to input ground motions. We propose a method that integrates physics-based and data-driven approaches in order to obtain a seismic wavefield for use as input to a seismic response analysis. The new contribution of this study is the use of the replica exchange Monte Carlo (REMC) method, which is one of the Markov chain Monte Carlo (MCMC) methods, for estimation of a seismic wavefield, together with a one-dimensional (1-D) local subsurface structure and source information. Numerical tests were conducted to verify the proposed method, using synthetic observation data obtained from analytical solutions for two horizontally layered subsurface structure models. The geometries of the observation sites were determined from the dense seismic observation array called the Metropolitan Seismic Observation network (MeSO-net), which has been in operation in the Tokyo metropolitan area in Japan since 2007. The results of the numerical tests show that the proposed method is able to search the parameters related to the source and the local subsurface structure in a broader parameter space than the Metropolis method, which is an ordinary MCMC method. The proposed method successfully reproduces a seismic wavefield consistent with a true wavefield. In contrast, ordinary kriging, which is a classical data-driven interpolation method for spatial data, is hardly able to reproduce a true wavefield, even in the low frequency bands. This suggests that it is essential to employ both physics-based and data-driven approaches in seismic wavefield imaging, utilizing seismograms from a dense seismic array.
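The replica exchange idea that lets the search cover a broader parameter space than plain Metropolis can be sketched generically. This is a toy 1-D multimodal objective with an assumed geometric temperature ladder, not the seismic wavefield estimation itself.

```python
import math
import random

def remc_minimize(energy, n_replicas=6, steps=3000, seed=0):
    """Generic replica exchange Monte Carlo: several Metropolis chains run at
    different temperatures; neighboring replicas periodically attempt to swap
    states with probability min(1, exp((beta_i - beta_j)*(E_i - E_j))), so
    hot chains feed globally explored states down to cold chains."""
    rng = random.Random(seed)
    temps = [0.05 * (3.0 ** i) for i in range(n_replicas)]   # geometric ladder
    xs = [rng.uniform(-10.0, 10.0) for _ in range(n_replicas)]
    best_x, best_e = xs[0], energy(xs[0])
    for step in range(steps):
        for i, T in enumerate(temps):
            # Metropolis move, with a step size scaled to the temperature.
            x_new = xs[i] + rng.gauss(0.0, math.sqrt(T) + 0.1)
            dE = energy(x_new) - energy(xs[i])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                xs[i] = x_new
            e = energy(xs[i])
            if e < best_e:
                best_x, best_e = xs[i], e
        if step % 10 == 0:                    # attempt a neighbor swap
            i = rng.randrange(n_replicas - 1)
            dE = energy(xs[i]) - energy(xs[i + 1])
            dB = 1.0 / temps[i] - 1.0 / temps[i + 1]
            if dB * dE >= 0 or rng.random() < math.exp(dB * dE):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return best_x, best_e

# Multimodal objective: the global minimum near x = 3.14 is surrounded
# by local minima that trap a single low-temperature Metropolis chain.
f = lambda x: (x - 3.0) ** 2 + 2.0 * math.cos(5.0 * x) + 2.0
x_star, e_star = remc_minimize(f)
```

In the paper the state is the source and 1-D subsurface parameters and the energy is the misfit to the MeSO-net seismograms, but the exchange mechanics are the same.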
Barrier heights of hydrogen-transfer reactions with the diffusion quantum Monte Carlo method.
Zhou, Xiaojun; Wang, Fan
2017-04-30
Hydrogen-transfer reactions are an important class of reactions in many chemical and biological processes. Barrier heights of H-transfer reactions are significantly underestimated by popular exchange-correlation functionals in density functional theory (DFT), while the coupled-cluster (CC) method is quite expensive and can be applied only to rather small systems. The quantum Monte Carlo method can usually provide reliable results for large systems. The performance of the fixed-node diffusion quantum Monte Carlo method (FN-DMC) on the barrier heights of the 19 H-transfer reactions in the HTBH38/08 database is investigated in this study, with trial wavefunctions of the single-Slater-Jastrow form and orbitals from DFT using the local density approximation. Our results show that the barrier heights of these reactions can be calculated rather accurately using FN-DMC; the mean absolute error is 1.0 kcal/mol in all-electron calculations. Introduction of pseudopotentials (PPs) in FN-DMC calculations markedly improves efficiency. According to our results, the error of the employed PPs is smaller than that of the present CCSD(T) and FN-DMC calculations. FN-DMC using PPs can thus be applied to investigate H-transfer reactions involving larger molecules reliably. In addition, the bond dissociation energies of the involved molecules obtained with FN-DMC are in excellent agreement with reference values, and are even better than the results of the employed CCSD(T) calculations using the aug-cc-pVQZ basis set. © 2017 Wiley Periodicals, Inc.
A modular method to handle multiple time-dependent quantities in Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Shin, J.; Perl, J.; Schümann, J.; Paganetti, H.; Faddegon, B. A.
2012-06-01
A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method.
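The modular structure the abstract describes, a Sequence that samples time values and independent Time Features that each map a time to a quantity's value, can be sketched as below. The names TimeFeature and run are illustrative only, not the actual TOPAS API, and the wheel/current functions are hypothetical.

```python
import random

class TimeFeature:
    """A time-dependent quantity: returns its value at time t. Each simulation
    quantity (a position, an angle, a beam current, ...) reads its value from
    one of these, so many quantities can vary together in one simulation."""
    def __init__(self, fn):
        self.fn = fn

    def value(self, t):
        return self.fn(t)

def run(features, t_end, n_samples, sequential=True, seed=0):
    """The Sequence samples time values, either sequentially at equal
    increments or randomly from a uniform distribution; every Time Feature
    is then evaluated at each sampled time."""
    rng = random.Random(seed)
    times = ([i * t_end / n_samples for i in range(n_samples)] if sequential
             else [rng.uniform(0.0, t_end) for _ in range(n_samples)])
    return [{name: f.value(t) for name, f in features.items()} for t in times]

# Hypothetical example: a 10 rev/s modulator wheel plus a ramping beam current,
# both resolved in a single pass at any chosen time resolution.
features = {"wheel_angle": TimeFeature(lambda t: (3600.0 * t) % 360.0),
            "beam_current": TimeFeature(lambda t: 1.0 + 0.5 * t)}
states = run(features, t_end=1.0, n_samples=4)
```

Because each quantity owns its own Time Feature, adding a further time-dependent quantity does not require restructuring the time loop, which is the modularity the method trades on.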
Numerical study of reflectance imaging using a parallel Monte Carlo method.
Chen, Cheng; Lu, Jun Q; Li, Kai; Zhao, Suisheng; Brock, R Scott; Hu, Xin-Hua
2007-07-01
Reflectance imaging of biological tissues with visible and near-infrared light has significant potential to provide a noninvasive and safe imaging modality for diagnosis of dysplastic and malignant lesions in the superficial tissue layers. The difficulty in the extraction of optical and structural parameters lies in the lack of efficient methods for accurate modeling of light scattering in biological tissues of turbid nature. We present a parallel Monte Carlo method for accurate and efficient modeling of reflectance images from turbid tissue phantoms. A parallel Monte Carlo code has been developed with the message passing interface and evaluated on a computing cluster with 16 processing elements. The code was validated against the solutions of the radiative transfer equation for the bidirectional reflection and transmission functions. With this code we investigated numerically the dependence of the reflectance image on the imaging system and phantom parameters. The contrasts of reflectance images were found to be nearly independent of the numerical aperture (NA) of the imaging camera, despite the fact that reflectance depends on the NA. This enables efficient simulations of the reflectance images using an NA of 1.00. Using heterogeneous tissue phantoms with an embedded region simulating a lesion, we investigated the correlation between the reflectance image profile or contrast and the phantom parameters. It has been shown that the image contrast approaches 0 when the single-scattering albedos of the two regions in the heterogeneous phantoms become matched. Furthermore, a zone of detection has been demonstrated for determination of the thickness of the embedded region and optical parameters from the reflectance image profile and contrast. Therefore, the utility of the reflectance imaging method with visible and near-infrared light has been firmly established. We conclude from these results that the optical parameters of the embedded region can be determined inversely from the reflectance image profile and contrast.
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
NASA Astrophysics Data System (ADS)
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
Determination of phase equilibria in confined systems by open pore cell Monte Carlo method.
Miyahara, Minoru T; Tanaka, Hideki
2013-02-28
We present a modification of the molecular dynamics simulation method with a unit pore cell with imaginary gas phase [M. Miyahara, T. Yoshioka, and M. Okazaki, J. Chem. Phys. 106, 8124 (1997)] designed for determination of phase equilibria in nanopores. This new method is based on a Monte Carlo technique and it combines the pore cell, opened to the imaginary gas phase (open pore cell), with a gas cell to measure the equilibrium chemical potential of the confined system. The most striking feature of our new method is that the confined system is steadily led to a thermodynamically stable state by forming concave menisci in the open pore cell. This feature of the open pore cell makes it possible to obtain the equilibrium chemical potential with only a single simulation run, unlike existing simulation methods, which need a number of additional runs. We apply the method to evaluate the equilibrium chemical potentials of confined nitrogen in carbon slit pores and silica cylindrical pores at 77 K, and show that the results are in good agreement with those obtained by two conventional thermodynamic integration methods. Moreover, we also show that the proposed method can be particularly useful for determining vapor-liquid and vapor-solid coexistence curves and the triple point of the confined system.
NASA Astrophysics Data System (ADS)
Ma, X. B.; Qiu, R. M.; Chen, Y. X.
2017-02-01
Uncertainties regarding fission fractions are essential in understanding antineutrino flux predictions in reactor antineutrino experiments. A new Monte Carlo-based method to evaluate the covariance coefficients between isotopes is proposed. The covariance coefficients are found to vary with reactor burnup and may change from positive to negative because of balance effects in fissioning. For example, between 235U and 239Pu, the covariance coefficient changes from 0.15 to -0.13. Using the equation relating fission fraction and atomic density, consistent uncertainties in the fission fraction and covariance matrix were obtained. The antineutrino flux uncertainty is 0.55%, which does not vary with reactor burnup. The new value is about 8.3% smaller.
A Monte Carlo Method for Projecting Uncertainty in 2D Lagrangian Trajectories
NASA Astrophysics Data System (ADS)
Robel, A.; Lozier, S.; Gary, S. F.
2009-12-01
In this study, a novel method is proposed for modeling the propagation of uncertainty due to subgrid-scale processes through a Lagrangian trajectory advected by ocean surface velocities. The primary motivation and application is differentiating between active and passive trajectories for sea turtles as observed through satellite telemetry. A spatiotemporal launch box is centered on the time and place of actual launch and populated with launch points. Synthetic drifters are launched at each of these locations, adding, at each time step along the trajectory, Monte Carlo perturbations in velocity scaled to the natural variability of the velocity field. The resulting trajectory cloud provides a dynamically evolving density field of synthetic drifter locations that represent the projection of subgrid-scale uncertainty out in time. Subsequently, by relaunching synthetic drifters at points along the trajectory, plots are generated in a daisy chain configuration of the “most likely passive pathways” for the drifter.
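The trajectory-cloud construction can be sketched in a few lines: many synthetic drifters launched from one point, each advected by the mean velocity plus a per-step Monte Carlo perturbation scaled to the field's variability. The flat 2-D grid, units, and parameter values below are illustrative, not the study's ocean velocity fields.

```python
import random

def trajectory_cloud(x0, y0, u, v, sigma, steps, n_drifters, seed=0):
    """Launch many synthetic drifters from the same point and advect each
    with the mean velocity (u, v) plus Gaussian velocity perturbations of
    scale sigma at every time step. The spread of the returned final
    positions represents the projected subgrid-scale uncertainty."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_drifters):
        x, y = x0, y0
        for _ in range(steps):
            x += u + rng.gauss(0.0, sigma)
            y += v + rng.gauss(0.0, sigma)
        finals.append((x, y))
    return finals

# 500 drifters advected east for 100 steps: the cloud centers on the mean
# displacement while its width grows with the accumulated perturbations.
finals = trajectory_cloud(0.0, 0.0, u=1.0, v=0.0, sigma=0.5,
                          steps=100, n_drifters=500)
mean_x = sum(x for x, _ in finals) / len(finals)
```

Relaunching such clouds from points along an observed turtle track gives the "most likely passive pathways" against which active swimming can be judged.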
Monte Carlo Method Applied to the ABV Model of an Interconnect Alloy
NASA Astrophysics Data System (ADS)
Dahoo, P. R.; Linares, J.; Chiruta, D.; Chong, C.; Pougnet, P.; Meis, C.; El Hami, A.
2016-08-01
A Monte Carlo (MC) simulation of a 2D microscopic ABV (metal A, metal B and void V) Ising model of an interconnect alloy is performed by taking into account results of Finite Element Method (FEM) calculations on correlated void-thermal effects. The evolution of a homogeneous structure of a binary alloy containing a small percentage of voids is studied under temperature cycling. The diffusion of voids and the segregation of A-type or B-type metal are functions of the relative interaction energies of the different pairs AA, BB, AB, AV and BV, the initial concentrations of A, B and V, and the local heating effect due to the presence of clusters of voids. Voids segregate in a matrix of A type, B type or AB type and form large localized clusters or smaller delocalized ones of different shapes.
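A minimal sketch of such a lattice-alloy MC step is given below: Kawasaki-style Metropolis swaps of neighboring sites, which conserve the A/B/V concentrations while letting species diffuse and segregate. The pair energies are hypothetical values, not the paper's FEM-informed parameters, and local heating is omitted.

```python
import math
import random
from collections import Counter

def pair_key(a, b):
    """Unordered species pair, e.g. ('A', 'B')."""
    return tuple(sorted((a, b)))

def metropolis_abv(lattice, J, T, sweeps, rng):
    """Kawasaki-style Metropolis dynamics for a 2-D ABV lattice alloy with
    periodic boundaries: propose swapping two neighboring sites (conserving
    the A/B/V concentrations) and accept with probability min(1, exp(-dE/T)).
    J maps unordered species pairs to nearest-neighbor bond energies."""
    n = len(lattice)

    def site_energy(i, j):
        s = lattice[i][j]
        return sum(J[pair_key(s, lattice[(i + di) % n][(j + dj) % n])]
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(sweeps * n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        di, dj = rng.choice(((1, 0), (0, 1)))
        k, l = (i + di) % n, (j + dj) % n
        if lattice[i][j] == lattice[k][l]:
            continue                      # swapping identical species is a no-op
        e_old = site_energy(i, j) + site_energy(k, l)
        lattice[i][j], lattice[k][l] = lattice[k][l], lattice[i][j]
        dE = site_energy(i, j) + site_energy(k, l) - e_old
        if dE > 0 and rng.random() >= math.exp(-dE / T):
            lattice[i][j], lattice[k][l] = lattice[k][l], lattice[i][j]  # reject

# 8x8 lattice with 28 A, 28 B and 8 V sites; attractive AA/BB, repulsive AB.
rng = random.Random(3)
sites = ['A'] * 28 + ['B'] * 28 + ['V'] * 8
rng.shuffle(sites)
lattice = [sites[r * 8:(r + 1) * 8] for r in range(8)]
J = {('A', 'A'): -1.0, ('B', 'B'): -1.0, ('A', 'B'): 1.0,
     ('A', 'V'): 0.0, ('B', 'V'): 0.0, ('V', 'V'): 0.0}
metropolis_abv(lattice, J, T=0.5, sweeps=50, rng=rng)
counts = Counter(s for row in lattice for s in row)
```

Temperature cycling would simply vary T between calls; because moves are swaps, the composition stays fixed throughout.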
NASA Technical Reports Server (NTRS)
Haviland, J. K.
1974-01-01
The results of two unrelated studies are reported. The first was an investigation of the formulation of the equations for non-uniform unsteady flows, by perturbation of an irrotational flow to obtain the linear Green's equation. The resulting integral equation was found to contain a kernel which could be expressed as the solution of the adjoint flow equation, a linear equation for small perturbations, but with non-constant coefficients determined by the steady flow conditions. It is believed that the non-uniform flow effects may prove important in transonic flutter, and that in such cases the use of doublet-type solutions of the wave equation would then prove to be erroneous. The second task covered an initial investigation into the use of the Monte Carlo method for the solution of acoustical field problems. Computed results are given for a rectangular room problem, and for a problem involving a circular duct with a source located at the closed end.
Numerical solution of DGLAP equations using Laguerre polynomials expansion and Monte Carlo method.
Ghasempour Nesheli, A; Mirjalili, A; Yazdanpanah, M M
2016-01-01
We investigate the numerical solutions of the DGLAP evolution equations at the LO and NLO approximations, using the Laguerre polynomials expansion. The theoretical framework is based on Furmanski et al.'s articles. What makes the content of this paper different from other works is that all calculations at every stage of extracting the evolved parton distributions are done numerically. The employed Monte Carlo-based numerical techniques have the feature that all results are obtained within a reasonable wall-clock time. The algorithms are implemented in FORTRAN, and the employed coding ideas can be used in other numerical computations as well. Our results for the evolved parton densities are in good agreement with some phenomenological models, and they show better behavior with respect to the results of similar numerical calculations.
Monte Carlo method for critical systems in infinite volume: The planar Ising model.
Herdeiro, Victor; Doyon, Benjamin
2016-10-01
In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated with generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane predictions. We accurately reproduce planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks
NASA Technical Reports Server (NTRS)
Kim, Stacy
2011-01-01
Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk, such as the optically thick disk interior, are under-sampled, while others, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet, are of particular interest. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations, at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.
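The weighting trade-off can be shown with a deliberately tiny importance-sampling sketch: a two-direction emission (up/down, physically 50/50) that is biased toward the region of interest, with each packet's weight set to the ratio of physical to sampling probability so that weighted tallies stay unbiased. This is generic importance sampling, not the specific schemes of the report.

```python
import random

def weighted_upward_flux(n_packets, q_up, rng):
    """Emit packets upward with probability q_up instead of the physical 0.5,
    weighting each packet by (physical prob)/(sampling prob). Returns the
    weighted upward-flux estimate (unbiased, ~0.5) and the mean packet
    weight (~1.0, i.e. average energy is conserved)."""
    up_tally = weight_sum = 0.0
    for _ in range(n_packets):
        if rng.random() < q_up:
            w = 0.5 / q_up          # packet sent up: down-weighted
            up_tally += w
        else:
            w = 0.5 / (1.0 - q_up)  # packet sent down: up-weighted
        weight_sum += w
    return up_tally / n_packets, weight_sum / n_packets

# Nine out of ten packets go toward the region of interest, yet the
# weighted estimates remain unbiased.
flux_up, mean_w = weighted_upward_flux(100000, q_up=0.9, rng=random.Random(7))
```

The catch the abstract points to is visible here too: the rare down-going packets carry large weights, so variance in the under-sampled direction grows even as sampling of the preferred region improves.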
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
Slattery, S. R.; Wilson, P. P. H.; Evans, T. M.
2013-07-01
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of stochastic histories from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem to test the models for symmetric operators. In general, the derived approximations show good agreement with measured computational results. (authors)
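The Neumann-Ulam setting analyzed here can be illustrated with a toy forward-walk collision estimator for x = Hx + b (a sketch, not the authors' domain-decomposed adjoint solver); the average walk length it reports is exactly the serial-performance measure the spectral analysis ties to the eigenvalues of the operator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system x = H x + b with spectral radius(H) < 1, so the Neumann
# series x = b + H b + H^2 b + ... converges.
H = np.array([[0.1, 0.3],
              [0.2, 0.1]])
b = np.array([1.0, 2.0])
x_exact = np.linalg.solve(np.eye(2) - H, b)     # [2.0, 8/3]

def mc_component(i, n_walks=200_000):
    """Collision estimator for x[i]: transition i -> j with probability
    H[i, j] (H >= 0 here), absorb with the remaining probability."""
    n_states = len(b)
    total = 0.0
    steps = 0
    for _ in range(n_walks):
        state = i
        while True:
            total += b[state]          # tally the source at every collision
            steps += 1
            r = rng.random()
            cum = 0.0
            nxt = -1
            for j in range(n_states):
                cum += H[state, j]
                if r < cum:
                    nxt = j
                    break
            if nxt < 0:                # absorbed: walk terminates
                break
            state = nxt
    return total / n_walks, steps / n_walks   # estimate, mean walk length
```

For this H the expected walk length from state 0 is the row sum of (I - H)^-1, about 1.6; shrinking the spectral radius shortens the walks, which is the convergence-speed relationship the abstract describes.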
QSAR Differential Model for Prediction of SIRT1 Modulation using Monte Carlo Method.
Kumar, Ashwani; Chauhan, Shilpi
2017-03-01
Silent information regulator 2 homologue one (SIRT1) modulators have therapeutic potential for a number of diseases like cardiovascular, metabolic, inflammatory and age related disorders. Here, we have studied both activators and inhibitors of SIRT1 and constructed differential quantitative structure activity relationship (QSAR) models using CORAL software with the Monte Carlo optimization method and SMILES notation. Three splits, each divided into three subsets (sub-training, calibration and test sets), were examined and validated with a prediction set. All the described models were statistically significant. The values of sensitivity, specificity, accuracy and Matthews' correlation coefficient for the validation set of the best model were 1.0000, 0.8889, 0.9524 and 0.9058, respectively. In the mechanistic interpretation, structural features important for SIRT1 activation and inhibition have been defined.
Armas-Pérez, Julio C; Hernández-Ortiz, Juan P; de Pablo, Juan J
2015-12-28
A theoretically informed Monte Carlo method is proposed for the simulation of liquid crystals on the basis of theoretical representations in terms of coarse-grained free energy functionals. The free energy functional is described in the framework of the Landau-de Gennes formalism. A piecewise finite element discretization is used to approximate the alignment field, thereby providing an excellent geometrical representation of curved interfaces and accurate integration of the free energy. The method is suitable for situations where the free energy functional includes highly non-linear terms, including chirality or high-order deformation modes. The validity of the method is established by comparing the results of Monte Carlo simulations to traditional Ginzburg-Landau minimizations of the free energy using a finite difference scheme, and its usefulness is demonstrated in the context of simulations of chiral liquid crystal droplets with and without nanoparticle inclusions.
Multi-Physics Markov Chain Monte Carlo Methods for Subsurface Flows
NASA Astrophysics Data System (ADS)
Rigelo, J.; Ginting, V.; Rahunanthan, A.; Pereira, F.
2014-12-01
For CO2 sequestration in deep saline aquifers, contaminant transport in subsurface, and oil or gas recovery, we often need to forecast flow patterns. Subsurface characterization is a critical and challenging step in flow forecasting. To characterize subsurface properties we establish a statistical description of the subsurface properties that are conditioned to existing dynamic and static data. A Markov Chain Monte Carlo (MCMC) algorithm is used in a Bayesian statistical description to reconstruct the spatial distribution of rock permeability and porosity. The MCMC algorithm requires repeatedly solving a set of nonlinear partial differential equations describing displacement of fluids in porous media for different values of permeability and porosity. The time needed for the generation of a reliable MCMC chain using the algorithm can be too long to be practical for flow forecasting. In this work we develop fast and effective computational methods for generating MCMC chains in the Bayesian framework for the subsurface characterization. Our strategy consists of constructing a family of computationally inexpensive preconditioners based on simpler physics as well as on surrogate models such that the number of fine-grid simulations is drastically reduced in the generated MCMC chains. In particular, we introduce a huff-puff technique as screening step in a three-stage multi-physics MCMC algorithm to reduce the number of expensive final stage simulations. The huff-puff technique in the algorithm enables a better characterization of subsurface near wells. We assess the quality of the proposed multi-physics MCMC methods by considering Monte Carlo simulations for forecasting oil production in an oil reservoir.
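The multi-stage screening strategy is in the spirit of delayed-acceptance (two-stage) Metropolis sampling, in which a cheap surrogate filters proposals before the expensive fine model is run, while a correction factor keeps the fine posterior as the exact target. A minimal one-dimensional sketch with invented stand-in posteriors (the paper's three-stage multi-physics preconditioners are far richer):

```python
import numpy as np

rng = np.random.default_rng(6)

def log_post_fine(x):
    """Stand-in for the expensive fine-scale posterior, here N(0, 1)."""
    return -0.5 * x * x

def log_post_coarse(x):
    """Cheap surrogate: slightly wrong variance, used only for screening."""
    return -0.55 * x * x

def two_stage_mcmc(n_steps, sigma=1.0):
    x = 0.0
    chain = np.empty(n_steps)
    fine_evals = 0
    for k in range(n_steps):
        y = x + sigma * rng.normal()
        # Stage 1: cheap screen; most bad proposals die here.
        if rng.random() < min(1.0, np.exp(log_post_coarse(y)
                                          - log_post_coarse(x))):
            # Stage 2: correction factor keeps the *fine* posterior as the
            # exact stationary distribution (delayed-acceptance MCMC).
            fine_evals += 1
            a2 = min(1.0, np.exp(log_post_fine(y) - log_post_fine(x)
                                 + log_post_coarse(x) - log_post_coarse(y)))
            if rng.random() < a2:
                x = y
        chain[k] = x
    return chain, fine_evals

chain, n_fine = two_stage_mcmc(50_000)
```

The chain still samples the fine posterior, but the number of fine-model evaluations (`n_fine`) is strictly smaller than the chain length, which is the cost saving the multi-stage algorithm is after.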
A Monte Carlo Method for Summing Modeled and Background Pollutant Concentrations.
Dhammapala, Ranil; Bowman, Clint; Schulte, Jill
2017-02-23
Air quality analyses for permitting new pollution sources often involve modeling dispersion of pollutants using models like AERMOD. Representative background pollutant concentrations must be added to modeled concentrations to determine compliance with air quality standards. Summing 98th (or 99th) percentiles of two independent distributions that are unpaired in time overestimates air quality impacts and could needlessly burden sources with restrictive permit conditions. This problem is exacerbated when emissions and background concentrations peak during different seasons. Existing methods addressing this matter either require much input data, disregard source and background seasonality, or disregard the variability of the background by utilizing a single concentration for each season, month, hour-of-day, day-of-week or wind direction. The availability of representative background concentrations is another limitation. Here we report on work to improve permitting analyses, with the development of (1) daily gridded background concentrations interpolated from 12-km CMAQ forecasts and monitored data; a two-step interpolation reproduced measured background concentrations to within 6.2%; and (2) a Monte Carlo (MC) method to combine AERMOD output and background concentrations while respecting their seasonality. The MC method randomly combines, with replacement, data from the same months, and calculates 1000 estimates of the 98th or 99th percentile. The design concentration of background + new source is the median of these 1000 estimates. We found that the AERMOD design value (DV) + background DV lay at the upper end of the distribution of these thousand 99th percentiles, while measured DVs were at the lower end. Our MC method sits between these two metrics and is sufficiently protective of public health in that it somewhat overestimates design concentrations. We also calculated probabilities of exceeding specified thresholds at each receptor, better informing
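The month-respecting resampling step can be sketched as follows; the function name and the synthetic anti-correlated data are illustrative assumptions, not the agency's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_design_value(modeled, background, months, n_draws=1000, pct=99):
    """Median of n_draws Monte Carlo estimates of the pct-th percentile of
    (modeled + background), resampling background with replacement within
    the same month so seasonality is respected."""
    estimates = np.empty(n_draws)
    for k in range(n_draws):
        combined = np.empty_like(modeled)
        for m in np.unique(months):
            idx = np.flatnonzero(months == m)
            combined[idx] = modeled[idx] + rng.choice(background[idx],
                                                      size=idx.size)
        estimates[k] = np.percentile(combined, pct)
    return np.median(estimates)

# Synthetic year: source impact peaks in summer, background peaks in winter.
days = np.arange(365)
months = days // 30 % 12 + 1
modeled = 10 + 8 * np.sin(2 * np.pi * days / 365)                # ug/m3
background = 20 - 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, 365)

dv = mc_design_value(modeled, background, months)
naive = np.percentile(modeled, 99) + np.percentile(background, 99)
```

Because the two seasonal peaks never coincide, `dv` falls well below the naive sum of independent percentiles, which is the overestimate the abstract criticizes.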
Physics and computer architecture informed improvements to the Implicit Monte Carlo method
NASA Astrophysics Data System (ADS)
Long, Alex Roberts
The Implicit Monte Carlo (IMC) method has been a standard method for thermal radiative transfer for the past 40 years. In this time, the hydrodynamics methods that are coupled to IMC have evolved and improved, as have the supercomputers used to run large simulations with IMC. Several modern hydrodynamics methods use unstructured non-orthogonal meshes and high-order spatial discretizations. The IMC method has been used primarily with simple Cartesian meshes and always with a first-order spatial discretization. Supercomputers are now made up of compute nodes that have a large number of cores. Current IMC parallel methods have significant problems with load imbalance. To utilize many-core systems, algorithms must move beyond simple spatial-decomposition parallel algorithms. To make IMC better suited for large-scale multiphysics simulations in high energy density physics, new spatial discretizations and parallel strategies are needed. Several modifications are made to the IMC method to facilitate running on node-centered, unstructured tetrahedral meshes. These modifications produce results that converge to the expected solution under mesh refinement. A new finite element IMC method is also explored on these meshes, which offers a runtime benefit but does not perform correctly in the diffusion limit. A parallel algorithm that utilizes on-node parallelism and respects memory hierarchies is studied. This method scales almost linearly when using physical cores on a node and benefits from multiple threads per core. A multi-compute-node algorithm for domain-decomposed IMC that passes mesh data instead of particles is explored as a means to solve load balance issues. This method scales better than the particle-passing method on highly scattering problems with short time steps.
NASA Astrophysics Data System (ADS)
Cheng, Sara; Qiu, Liming; Cheng, K.; Vaughn, Mark
2011-10-01
The distribution statistics of the surface area, volume and voids of lipid molecules are important parameters for characterizing the structures of self-assembling lipid membranes. Traditional methods are mostly based on various assumptions about the thickness of the lipid membrane and the volumes of certain types of lipid molecules. However, those methods usually lead to an over- or underestimation of the average surface area of lipid molecules when compared to the experimental results for pure lipid systems. We developed a new Monte Carlo method that estimates the distributions and averages of the surface area, volume and void space of the lipid molecules, in the absence and presence of proteins, from the MD simulation results of lipid membranes at the atomistic scale. We successfully validated our new method on an ordered hard-sphere system and on a phospholipid/cholesterol binary lipid system, both with known structural parameters. Using this new method, the structural perturbation of the conformal annular lipids in close proximity to the embedded protein in a lipid/protein system will also be presented.
A deterministic alternative to the full configuration interaction quantum Monte Carlo method.
Tubman, Norm M; Lee, Joonho; Takeshita, Tyler Y; Head-Gordon, Martin; Whaley, K Birgitta
2016-07-28
Development of exponentially scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, is a useful algorithm that allows exact diagonalization through stochastically sampling determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, along with a stochastic projected wave function, to find the important parts of Hilbert space. However, the stochastic representation of the wave function is not required to search Hilbert space efficiently, and here we describe a highly efficient deterministic method that can achieve chemical accuracy for a wide range of systems, including the difficult Cr2 molecule. We demonstrate for systems like Cr2 that such calculations can be performed in just a few CPU hours, which makes this one of the most efficient and accurate methods that can attain chemical accuracy for strongly correlated systems. In addition, our method also allows efficient calculation of excited state energies, which we illustrate with benchmark results for the excited states of C2.
NASA Astrophysics Data System (ADS)
Yoo, Hongki; Kang, Dong-Kyun; Lee, SeungWoo; Lee, Junhee; Gweon, Dae-Gab
2004-07-01
Alignment errors can cause serious performance loss in a precision machine system. In this paper, we propose a method for allocating the alignment tolerances of the components and apply it to Confocal Scanning Microscopy (CSM) to obtain the optimal tolerances. CSM uses a confocal aperture, which blocks out-of-focus information. Thus, it provides images with superior resolution and has the unique property of optical sectioning. Recently, due to these properties, it has been widely used for measurement in biology, medical science, material science and the semiconductor industry. In general, tight tolerances are required to maintain the performance of a system, but a high cost of manufacturing and assembly is required to preserve tight tolerances. The purpose of allocating optimal tolerances is to minimize the cost while keeping the performance of the system. In the optimization problem, we set the performance requirements as constraints and maximized the tolerances. The Monte Carlo method, a statistical simulation method, is used in the tolerance analysis. Alignment tolerances of the optical components of the confocal scanning microscope are optimized to minimize the cost while maintaining the observation performance of the microscope. This method can also be applied to other precision machine systems.
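Monte Carlo tolerance analysis of the kind described reduces to sampling each alignment error within its tolerance band and counting how often the system still meets the performance spec. A minimal sketch with a hypothetical two-parameter performance model (the sensitivities 0.1 deg and 5 um are invented for illustration, not CSM values):

```python
import numpy as np

rng = np.random.default_rng(5)

def yield_fraction(tol_tilt_deg, tol_decenter_um, n=50_000):
    """Fraction of sampled assemblies meeting the performance spec when
    each alignment error is uniform within its tolerance band.
    Hypothetical model: normalized degradations add in quadrature; the
    spec is combined degradation < 1."""
    tilt = rng.uniform(-tol_tilt_deg, tol_tilt_deg, n)
    dec = rng.uniform(-tol_decenter_um, tol_decenter_um, n)
    degradation = np.hypot(tilt / 0.1, dec / 5.0)
    return np.mean(degradation < 1.0)

# Tight tolerances: every assembly passes. Loose tolerances: yield drops,
# which is the trade-off the optimization balances against cost.
tight = yield_fraction(0.05, 2.5)     # -> 1.0
loose = yield_fraction(0.2, 10.0)     # -> about pi/16, roughly 0.2
```

The optimizer in the paper effectively searches for the loosest (cheapest) tolerance bands whose yield still satisfies the performance constraints.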
Solution of deterministic-stochastic epidemic models by dynamical Monte Carlo method
NASA Astrophysics Data System (ADS)
Aièllo, O. E.; Haas, V. J.; daSilva, M. A. A.; Caliri, A.
2000-07-01
This work is concerned with the dynamical Monte Carlo (MC) method and its application to models originally formulated in a continuous-deterministic approach. Specifically, a susceptible-infected-removed-susceptible (SIRS) model is used in order to analyze aspects of the dynamical MC algorithm and demonstrate its application in epidemic contexts. We first examine two known approaches to the dynamical interpretation of the MC method and then apply one of them to the SIRS model. The chosen method is based on the Poisson process, with a hierarchy of events, properly calculated waiting times between events, and independence of the simulated events as the basic requirements. To verify the consistency of the method, some preliminary MC results are compared against exact steady-state solutions and other general numerical results (provided by the Runge-Kutta method): good agreement is found. Finally, a space-dependent extension of the SIRS model is introduced and treated by MC. The results are interpreted in light of the herd-immunity concept.
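The Poisson-process scheme described (event hierarchy, exponential waiting times, independent events) is essentially a Gillespie-type stochastic simulation; a minimal SIRS sketch, with illustrative rate constants:

```python
import numpy as np

rng = np.random.default_rng(2)

def sirs_gillespie(S, I, R, beta, gamma, xi, t_max):
    """Event-driven (Poisson-process) simulation of the SIRS model:
    S -> I at rate beta*S*I/N, I -> R at rate gamma*I, R -> S at rate xi*R."""
    N = S + I + R
    t = 0.0
    while t < t_max and I > 0:
        rates = np.array([beta * S * I / N,   # infection        S -> I
                          gamma * I,          # recovery         I -> R
                          xi * R])            # loss of immunity R -> S
        total = rates.sum()
        t += rng.exponential(1.0 / total)     # waiting time to next event
        event = rng.choice(3, p=rates / total)
        if event == 0:
            S -= 1; I += 1
        elif event == 1:
            I -= 1; R += 1
        else:
            R -= 1; S += 1
    return S, I, R

S, I, R = sirs_gillespie(990, 10, 0, beta=0.5, gamma=0.1, xi=0.05, t_max=50.0)
```

Averaging many such realizations is what gets compared against the deterministic (Runge-Kutta) steady-state solution in the paper.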
A deterministic alternative to the full configuration interaction quantum Monte Carlo method
NASA Astrophysics Data System (ADS)
Tubman, Norm M.; Lee, Joonho; Takeshita, Tyler Y.; Head-Gordon, Martin; Whaley, K. Birgitta
2016-07-01
Development of exponentially scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, is a useful algorithm that allows exact diagonalization through stochastically sampling determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, along with a stochastic projected wave function, to find the important parts of Hilbert space. However, the stochastic representation of the wave function is not required to search Hilbert space efficiently, and here we describe a highly efficient deterministic method that can achieve chemical accuracy for a wide range of systems, including the difficult Cr2 molecule. We demonstrate for systems like Cr2 that such calculations can be performed in just a few cpu hours which makes it one of the most efficient and accurate methods that can attain chemical accuracy for strongly correlated systems. In addition our method also allows efficient calculation of excited state energies, which we illustrate with benchmark results for the excited states of C2.
Uniform-acceptance force-bias Monte Carlo method with time scale to study solid-state diffusion
NASA Astrophysics Data System (ADS)
Mees, Maarten J.; Pourtois, Geoffrey; Neyts, Erik C.; Thijsse, Barend J.; Stesmans, André
2012-04-01
Monte Carlo (MC) methods have a long-standing history as partners of molecular dynamics (MD) to simulate the evolution of materials at the atomic scale. Among these techniques, the uniform-acceptance force-bias Monte Carlo (UFMC) method [G. Dereli, Mol. Simul. 8, 351 (1992)] has recently attracted attention [M. Timonova et al., Phys. Rev. B 81, 144107 (2010)] thanks to its apparent capacity to simulate physical processes in a reduced number of iterations compared to classical MD methods. The origin of this efficiency remains, however, unclear. In this work we derive a UFMC method starting from basic thermodynamic principles, which leads to an intuitive and unambiguous formalism. The approach includes a statistically relevant time step per Monte Carlo iteration, showing a significant speed-up compared to MD simulations. This time-stamped force-bias Monte Carlo (tfMC) formalism is tested on both simple one-dimensional and three-dimensional systems. Both test cases give excellent results in agreement with analytical solutions and literature reports. The inclusion of a time scale, the simplicity of the method, and the enhancement of the time step compared to classical MD methods make this method very appealing for studying the dynamics of many-particle systems.
The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations
Sellier, J.M.; Dimov, I.
2014-09-15
The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation, which represents a formidable mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We then proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.
Implementation of the probability table method in a continuous-energy Monte Carlo code system
Sutton, T.M.; Brown, F.B.
1998-10-01
RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.
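The probability-table idea at a single energy point can be sketched as band sampling; the band probabilities and cross-section values below are invented for illustration, not ENDF/B data:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical probability table at one incident energy in the URR:
# each band has a probability and a representative cross section (barns).
band_prob = np.array([0.2, 0.5, 0.3])
band_xs = np.array([10.0, 30.0, 80.0])

def sample_xs(n):
    """Sample the total cross section band-by-band, preserving its full
    distribution (and hence self-shielding effects) rather than collapsing
    it to a single smooth value."""
    bands = rng.choice(len(band_prob), size=n, p=band_prob)
    return band_xs[bands]

xs = sample_xs(100_000)
dilute_average = float(band_prob @ band_xs)    # 41.0 barns
```

A dilute-average treatment would use the single value 41 barns everywhere in this energy range; the table sampling reproduces the same mean while retaining the spread that drives resonance self-shielding.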
Zou Yu; Kavousanakis, Michail E.; Kevrekidis, Ioannis G.; Fox, Rodney O.
2010-07-20
The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.
Predicting low-temperature free energy landscapes with flat-histogram Monte Carlo methods
NASA Astrophysics Data System (ADS)
Mahynski, Nathan A.; Blanco, Marco A.; Errington, Jeffrey R.; Shen, Vincent K.
2017-02-01
We present a method for predicting the free energy landscape of fluids at low temperatures from flat-histogram grand canonical Monte Carlo simulations performed at higher ones. We illustrate our approach for both pure and multicomponent systems using two different sampling methods as a demonstration. This allows us to predict the thermodynamic behavior of systems which undergo both first order and continuous phase transitions upon cooling using simulations performed only at higher temperatures. After surveying a variety of different systems, we identify a range of temperature differences over which the extrapolation of high temperature simulations tends to quantitatively predict the thermodynamic properties of fluids at lower ones. Beyond this range, extrapolation still provides a reasonably well-informed estimate of the free energy landscape; this prediction then requires less computational effort to refine with an additional simulation at the desired temperature than reconstruction of the surface without any initial estimate. In either case, this method significantly increases the computational efficiency of these flat-histogram methods when investigating thermodynamic properties of fluids over a wide range of temperatures. For example, we demonstrate how a binary fluid phase diagram may be quantitatively predicted for many temperatures using only information obtained from a single supercritical state.
Low-Density Nozzle Flow by the Direct Simulation Monte Carlo and Continuum Methods
NASA Technical Reports Server (NTRS)
Chung, Chang-Hong; Kim, Sku C.; Stubbs, Robert M.; Dewitt, Kenneth J.
1994-01-01
Two different approaches, the direct simulation Monte Carlo (DSMC) method, based on molecular gasdynamics, and a finite-volume approximation of the Navier-Stokes equations, based on continuum gasdynamics, are employed in the analysis of a low-density gas flow in a small converging-diverging nozzle. The fluid experiences various flow regimes including continuum, slip, transition, and free-molecular. Results from the two numerical methods are compared with Rothe's experimental data, in which density and rotational temperature variations along the centerline and at various locations inside a low-density nozzle were measured by the electron-beam fluorescence technique. The continuum approach showed good agreement with the experimental data as far as density is concerned. The results from the DSMC method showed good agreement with the experimental data in both the density and the rotational temperature. It is also shown that the simulation parameters, such as the gas/surface interaction model, the energy exchange model between rotational and translational modes, and the viscosity-temperature exponent, have substantial effects on the results of the DSMC method.
Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations
NASA Astrophysics Data System (ADS)
Kozak, P.
2014-12-01
The Monte Carlo method (method of statistical trials) as applied to the processing of meteor observations was developed in the author's Ph.D. thesis in 2005 and first used in his work in 2008. The idea is that if we generate random values of the input data – the equatorial coordinates of the meteor head in a sequence of TV frames – according to their statistical distributions, we can plot probability density distributions for all of the meteor's kinematical parameters and obtain their mean values and dispersions. This opens the theoretical possibility of refining the most important parameter – the geocentric velocity of the meteor – which has the strongest influence on the precision of the calculated heliocentric orbit elements. In the classical approach the velocity vector is calculated in two stages: first, its direction is found as the cross product of the pole vectors of the meteor-trajectory great circles determined from the two observing stations; then the absolute value of the velocity is calculated independently from each station, with one of the values selected, for whatever reason, as the final parameter. In the method given here we propose instead to obtain the statistical distribution of the velocity magnitude as the intersection of the two distributions corresponding to the velocity values obtained from the different stations. We expect this approach to substantially increase the precision of the calculated meteor velocity and to remove subjective inaccuracies.
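The proposed combination of the two station-wise velocity distributions can be sketched as multiplying their estimated densities on a common grid; the station means and scatters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo velocity samples from two stations (hypothetical values):
v1 = rng.normal(35.2, 0.8, 100_000)   # station 1, km/s
v2 = rng.normal(35.6, 0.5, 100_000)   # station 2, km/s

# Histogram both on a common grid and multiply: the "intersection"
# distribution favors velocities consistent with BOTH stations.
edges = np.linspace(30.0, 40.0, 401)
dx = edges[1] - edges[0]
p1, _ = np.histogram(v1, bins=edges, density=True)
p2, _ = np.histogram(v2, bins=edges, density=True)
joint = p1 * p2
joint /= joint.sum() * dx                     # renormalize to a pdf
centers = 0.5 * (edges[1:] + edges[:-1])

v_mean = np.sum(centers * joint) * dx
v_std = np.sqrt(np.sum((centers - v_mean) ** 2 * joint) * dx)
```

The intersection distribution is narrower than either station's individual one (here its dispersion drops below both input scatters), which is the precision gain the abstract anticipates.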
Variational method for estimating the rate of convergence of Markov-chain Monte Carlo algorithms.
Casey, Fergal P; Waterfall, Joshua J; Gutenkunst, Ryan N; Myers, Christopher R; Sethna, James P
2008-10-01
We demonstrate the use of a variational method to determine a quantitative lower bound on the rate of convergence of Markov chain Monte Carlo (MCMC) algorithms as a function of the target density and proposal density. The bound relies on approximating the second largest eigenvalue in the spectrum of the MCMC operator using a variational principle, and the approach is applicable to problems with continuous state spaces. We apply the method to one-dimensional examples with Gaussian and quartic target densities, and we contrast the performance of the random walk Metropolis-Hastings algorithm with a "smart" variant that incorporates gradient information into the trial moves, a generalization of the Metropolis-adjusted Langevin algorithm. We find that the variational method agrees quite closely with numerical simulations. We also see that the smart MCMC algorithm often fails to converge geometrically in the tails of the target density, except in the simplest case we examine, and even then care must be taken to choose the appropriate scaling of the deterministic and random parts of the proposed moves. This calls into question the utility of smart MCMC in more complex problems. Finally, we apply the same method to approximate the rate of convergence in multidimensional Gaussian problems with and without importance sampling. There we demonstrate the necessity of importance sampling for target densities which depend on variables with a wide range of scales.
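The quantity being bounded, the second-largest eigenvalue of the MCMC operator, can be computed exactly for a small discrete chain, which makes the connection concrete (a toy sketch with an invented target density; the paper works with continuous state spaces, where the variational bound replaces this direct computation):

```python
import numpy as np

# Random-walk Metropolis on a small discrete state space. The chain's
# geometric convergence rate is governed by the second-largest eigenvalue
# modulus of its transition matrix; 1 minus that is the spectral gap.
target = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # assumed target density
n = len(target)
P = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):                    # propose a neighbor, prob 1/2
        if 0 <= j < n:
            P[i, j] = 0.5 * min(1.0, target[j] / target[i])
    P[i, i] = 1.0 - P[i].sum()                  # rejected mass stays put

evals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
spectral_gap = 1.0 - evals[1]   # larger gap => faster geometric convergence
```

The leading eigenvalue is 1 with the target as its stationary distribution; the distance of the next eigenvalue from 1 is exactly what the variational principle in the paper approximates from below.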
Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories, namely probabilistic and deterministic. While probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solution of the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, which is applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from Monte Carlo based codes, namely MCNPX and FLUKA.
Density-of-states based Monte Carlo methods for simulation of biological systems
NASA Astrophysics Data System (ADS)
Rathore, Nitin; Knotts, Thomas A.; de Pablo, Juan J.
2004-03-01
We have developed density-of-states [1] based Monte Carlo techniques for simulation of biological molecules. Two such methods are discussed. The first, Configurational Temperature Density of States (CTDOS) [2], relies on computing the density of states of a peptide system from knowledge of its configurational temperature. The reciprocal of this intrinsic temperature, computed from instantaneous configurational information of the system, is integrated to arrive at the density of states. The method shows improved efficiency and accuracy over techniques that are based on histograms of random visits to distinct energy states. The second approach, Expanded Ensemble Density of States (EXEDOS), incorporates elements from both the random walk method and the expanded ensemble formalism. It is used in this work to study mechanical deformation of model peptides. Results are presented in the form of force-extension curves and the corresponding potentials of mean force. The application of this proposed technique is further generalized to other biological systems; results will be presented for ion transport through protein channels, base stacking in nucleic acids and hybridization of DNA strands. [1]. F. Wang and D. P. Landau, Phys. Rev. Lett., 86, 2050 (2001). [2]. N. Rathore, T. A. Knotts IV and J. J. de Pablo, Biophys. J., Dec. (2003).
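The random-walk-in-energy idea referenced from Wang and Landau [1], which the density-of-states techniques above build on, can be sketched on a toy system whose density of states is known exactly (binomial coefficients):

```python
import numpy as np

rng = np.random.default_rng(4)

# Wang-Landau random walk on a toy system: N binary spins with "energy"
# E = number of up spins, whose exact density of states is C(N, E).
N = 10
lng = np.zeros(N + 1)                 # running estimate of ln g(E)
state = rng.integers(0, 2, N)
f = 1.0                               # ln-modification factor
while f > 1e-6:
    for _ in range(20_000):
        i = rng.integers(N)
        E_old = int(state.sum())
        E_new = E_old + (1 - 2 * int(state[i]))
        # accept the flip with probability min(1, g(E_old) / g(E_new))
        if np.log(rng.random()) < lng[E_old] - lng[E_new]:
            state[i] ^= 1
        E = int(state.sum())
        lng[E] += f                   # penalize the visited energy level
    f /= 2.0                          # (flat-histogram check omitted here)
lng -= lng[0]                         # normalize so g(0) = 1
```

After convergence, exp(lng[E]) approximates C(10, E) (e.g. ln g(5) near ln 252); the biased walk visits all energy levels roughly uniformly, which is what makes these methods efficient for rarely visited states.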
NASA Technical Reports Server (NTRS)
Hueser, J. E.; Brock, F. J.; Melfi, L. T., Jr.; Bird, G. A.
1984-01-01
A new solution procedure has been developed to analyze the flowfield properties in the vicinity of the Inertial Upper Stage/Spacecraft during the first-stage (SRM1) burn. Continuum methods are used to compute the nozzle flow and the exhaust plume flowfield as far as the boundary where the breakdown of translational equilibrium leaves these methods invalid. The Direct Simulation Monte Carlo (DSMC) method is applied everywhere beyond this breakdown boundary. The flowfield distributions of density, velocity, temperature, relative abundance, surface flux density, and pressure are discussed for each species for two sets of boundary conditions: vacuum and freestream. The interaction of the exhaust plume and the freestream with the spacecraft, and the two-stream direct interaction, are discussed. The results show that the low-density, high-velocity, counterflowing freestream substantially modifies the flowfield properties and the flux density incident on the spacecraft. A freestream bow shock is observed in the data, located forward of the high-density region of the exhaust plume, into which the freestream gas does not penetrate. The total flux density incident on the spacecraft, integrated over the SRM1 burn interval, is estimated to be of the order of 10^22 per square meter (about 1000 atomic layers).
Quantum Monte Carlo Method for Heavy Atomic and Molecular Systems with Spin-Orbit Interactions
NASA Astrophysics Data System (ADS)
Melton, Cody; Mitas, Lubos
We present a new quantum Monte Carlo (QMC) method that can treat spin-orbit and other types of spin-dependent interactions explicitly. It is based on a generalization of the fixed-phase method and projection of the nonlocal operators with spinor trial wave functions. To test the method we calculate several atomic and molecular systems such as Bi, W, Pb, PbH and PbO, some of them with both large- and small-core pseudopotentials. We validate the quality of the results against other correlated methods such as configuration interaction in the two-component formalism. We find excellent agreement with extrapolated values for the total energies, and we are able to reliably reproduce experimental values of excitation energies, electron affinities and molecular binding. We show that in order to obtain agreement with experimental values the explicit inclusion of the spin-orbit interactions is crucial. Supported by U.S. D.O.E. grant DE-SC0012314 and NERSC Contract No. DE-AC02-05CH11231.
Recent developments in quantum Monte Carlo methods for electronic structure of atomic clusters
NASA Astrophysics Data System (ADS)
Mitas, Lubos
2004-03-01
Recent developments of quantum Monte Carlo (QMC) for electronic structure calculations of clusters, other nanomaterials and quantum systems will be reviewed. QMC methodology is based on a combination of analytical insights about properties of exact wavefunctions, explicit treatment of electron-electron correlation and robustness of computational stochastic techniques. In the course of QMC development for calculations of real materials, small and medium size clusters proved to be invaluable systems both for testing and for revealing unique insights into electron correlation effects in nanostructured materials. The method shows remarkable accuracy which will be demonstrated on calculations of magnetic states of transition metal atoms encapsulated in silicon cluster cages, optical excitations in quantum nanodots and molecules and on studies of reactions in biomolecular metallic centers. Indeed, in some cases QMC turned out to be the only feasible method to provide the necessary accuracy. I will also discuss current QMC developments in using correlated sampling techniques for efficient evaluation of energy differences, efforts to reach beyond the fixed-node approximation and on incorporating QMC methods into multi-scale simulation approaches. In collaboration with P. Sen, L.K. Wagner, Z.M. Helms, M. Bajdich, G. Drobny, and J.C. Grossman. Supported by NSF, ONR and DARPA.
NASA Astrophysics Data System (ADS)
Takoudis, G.; Xanthos, S.; Clouvas, A.; Potiriadis, C.
2010-02-01
Portal monitoring radiation detectors are commonly used by steel industries to probe for and detect radioactive contamination in scrap metal. These portal monitors typically consist of polystyrene or polyvinyltoluene (PVT) plastic scintillating detectors, one or more photomultiplier tubes (PMTs), an electronic circuit, a controller that handles data output and manipulation, linking the system to a display or a computer with appropriate software, and usually a light guide. Such a portal used by the steel industry was opened and all principal materials were simulated using a Monte Carlo simulation tool (MCNP4C2). Various source-detector configurations were simulated and validated by comparison with corresponding measurements. Subsequently, an experiment with a uniform cargo, along with two sets of experiments with different scrap loads and radioactive sources (137Cs, 152Eu), were performed and simulated. Simulated and measured results suggested that the nature of the scrap is crucial when simulating scrap load-detector experiments. Using the same simulation configuration, a series of runs was performed in order to estimate minimum alarm activities for 137Cs, 60Co and 192Ir sources for various simulated scrap densities. The minimum alarm activities, as well as the positions in which they were recorded, are presented and discussed.
NASA Astrophysics Data System (ADS)
Farah, J.; Martinetti, F.; Sayah, R.; Lacoste, V.; Donadille, L.; Trompier, F.; Nauraye, C.; De Marzi, L.; Vabre, I.; Delacroix, S.; Hérault, J.; Clairand, I.
2014-06-01
Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains, however, a major challenge to the community. Therefore this work focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, and (c) secondary organ-specific neutron doses inside the Rando-Alderson anthropomorphic phantom. Results have proven that the Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Appreciable differences between experimental measurements and simulations were nonetheless observed, especially at the highest beam energy. The study demonstrated the need for improved measurement tools, especially at the high neutron energy range, and more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications.
NASA Astrophysics Data System (ADS)
Zhang, Jun; Guo, Fan
2015-11-01
Tooth modification is widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on the modification effects. In order to investigate how uncertainties in tooth modification amounts affect the dynamic behavior of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. By using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behavior does not obey the normal distribution even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
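The combination of a fitted response surface with Monte Carlo sampling described above can be sketched generically. The quadratic toy model below is purely illustrative (it is not the paper's gear model); it shows how a response surface is fitted by least squares and how normally distributed design variables can still yield a skewed, non-normal output distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in for the (expensive) deterministic model: DTE fluctuation vs. two
# modification amounts -- purely illustrative, not the paper's gear dynamics
def dte_fluct(x1, x2):
    return 1.0 + 0.3 * x1 - 0.5 * x2 + 0.8 * x1**2 + 0.2 * x1 * x2

# 1) fit a quadratic response surface from a small design of experiments
X1, X2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x1, x2 = X1.ravel(), X2.ravel()
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(A, dte_fluct(x1, x2), rcond=None)

# 2) Monte Carlo: push normally distributed modification errors through the
#    cheap fitted surface instead of the full model
n = 100_000
s1 = rng.normal(0.0, 0.3, n)
s2 = rng.normal(0.0, 0.3, n)
S = np.column_stack([np.ones(n), s1, s2, s1**2, s2**2, s1 * s2])
y = S @ beta

skew = np.mean((y - y.mean())**3) / y.std()**3   # nonzero -> non-Gaussian output
```

Because the toy model is exactly quadratic, the least-squares fit recovers its coefficients exactly; the quadratic terms then skew the output distribution, mirroring the paper's finding that normally distributed inputs need not produce normally distributed dynamic responses.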
Uncertainty quantification through the Monte Carlo method in a cloud computing setting
NASA Astrophysics Data System (ADS)
Cunha, Americo; Nasser, Rafael; Sampaio, Rubens; Lopes, Hélio; Breitman, Karin
2014-05-01
The Monte Carlo (MC) method is the most common technique used for uncertainty quantification, due to its simplicity and good statistical results. However, its computational cost is extremely high, and, in many cases, prohibitive. Fortunately, the MC algorithm is easily parallelizable, which allows its use in simulations where the computation of a single realization is very costly. This work presents a methodology for the parallelization of the MC method, in the context of cloud computing. This strategy is based on the MapReduce paradigm, and allows an efficient distribution of tasks in the cloud. This methodology is illustrated on a problem of structural dynamics that is subject to uncertainties. The results show that the technique is capable of producing good results concerning statistical moments of low order. It is shown that even a simple problem may require many realizations for convergence of histograms, which makes the cloud computing strategy very attractive (due to its high scalability and low cost). Additionally, the results regarding processing time and storage space usage allow one to qualify this new methodology as a solution for simulations that require a number of MC realizations beyond the standard.
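The map/reduce decomposition of MC can be sketched without any cloud infrastructure: each "map" task runs a chunk of independent realizations and emits partial sums, and a single "reduce" combines them into low-order moments. The oscillator-like toy realization below is an assumption (not the paper's structural-dynamics problem); in the actual MapReduce setting the `map_task` calls would be dispatched to separate cloud nodes:

```python
import numpy as np

# one MC realization of a toy structural quantity: static deflection of a
# system with uncertain (lognormal) stiffness -- an illustrative stand-in
def realization(seed):
    rng = np.random.default_rng(seed)
    k = rng.lognormal(mean=0.0, sigma=0.1)       # uncertain stiffness
    return 1.0 / np.sqrt(k)                      # response under unit load

# "map": each task simulates a chunk and emits partial sums; chunks are
# independent, which is exactly what makes MC embarrassingly parallel
def map_task(seeds):
    vals = np.array([realization(s) for s in seeds])
    return len(vals), vals.sum(), (vals ** 2).sum()

# "reduce": combine partial sums into global low-order moments
def reduce_tasks(partials):
    n = sum(p[0] for p in partials)
    s = sum(p[1] for p in partials)
    s2 = sum(p[2] for p in partials)
    mean = s / n
    var = s2 / n - mean**2
    return n, mean, var

chunks = [range(i * 1000, (i + 1) * 1000) for i in range(8)]
n, mean, var = reduce_tasks(list(map(map_task, chunks)))
```

Seeding each realization independently keeps results reproducible no matter how the chunks are distributed, a common requirement when the same study must run on one machine or on many.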
Building proteins from C alpha coordinates using the dihedral probability grid Monte Carlo method.
Mathiowetz, A. M.; Goddard, W. A.
1995-01-01
Dihedral probability grid Monte Carlo (DPG-MC) is a general-purpose method of conformational sampling that can be applied to many problems in peptide and protein modeling. Here we present the DPG-MC method and apply it to predicting complete protein structures from C alpha coordinates. This is useful in such endeavors as homology modeling, protein structure prediction from lattice simulations, or fitting protein structures to X-ray crystallographic data. It also serves as an example of how DPG-MC can be applied to systems with geometric constraints. The conformational propensities for individual residues are used to guide conformational searches as the protein is built from the amino-terminus to the carboxyl-terminus. Results for a number of proteins show that both the backbone and side chain can be accurately modeled using DPG-MC. Backbone atoms are generally predicted with RMS errors of about 0.5 A (compared to X-ray crystal structure coordinates) and all atoms are predicted to an RMS error of 1.7 A or better. PMID:7549885
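The core move of dihedral-probability-grid sampling, proposing angles from a discretized propensity grid and dividing that bias back out of the Metropolis acceptance, can be sketched for a single dihedral. The 36-bin grid, the propensities, and the target potential below are all made-up illustrations, not the published residue grids:

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1.0

# toy 36-bin dihedral probability grid with made-up propensities favoring
# the region near -60 degrees (illustrative, not real residue statistics)
angles = np.arange(-180.0, 180.0, 10.0)          # left edge of each 10-degree bin
prop = np.exp(-0.5 * ((angles + 60.0) / 40.0) ** 2) + 0.05
prop /= prop.sum()

def energy(phi):                                 # stand-in target potential
    return 2.0 * (1.0 - np.cos(np.radians(phi + 60.0)))

phi, E = 175.0, energy(175.0)
samples = []
for _ in range(20000):
    j = rng.choice(len(angles), p=prop)          # biased proposal from the grid
    phi_new = angles[j] + rng.uniform(0.0, 10.0) # uniform within the chosen bin
    i = int((phi + 180.0) // 10.0) % 36          # grid bin of the current angle
    # Metropolis acceptance with the proposal bias divided back out, so the
    # chain still targets exp(-beta * E) exactly
    acc = np.exp(-beta * (energy(phi_new) - E)) * prop[i] / prop[j]
    if rng.random() < min(1.0, acc):
        phi, E = phi_new, energy(phi_new)
    samples.append(phi)
samples = np.array(samples[5000:])               # discard burn-in
```

Because the proposals are global, the chain equilibrates quickly, and the samples should concentrate around the minimum near -60 degrees; in the full method this move is applied residue by residue while the chain is built from the amino- to the carboxyl-terminus.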
Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods
Godfrey, Andrew T; Gehin, Jess C; Bekar, Kursat B; Celik, Cihangir
2014-01-01
The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominantly as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.
A new Monte Carlo method for getting the density of states of atomic cluster systems.
Soudan, J-M; Basire, M; Mestdagh, J-M; Angelié, C
2011-10-14
A novel Monte Carlo flat histogram algorithm is proposed to get the classical density of states in terms of the potential energy, g(E_p), for systems with continuous variables such as atomic clusters. It aims at avoiding the long iterative process of the Wang-Landau method and controlling carefully the convergence, but keeping the ability to overcome energy barriers. Our algorithm is based on a preliminary mapping in a series of points (called a σ-mapping), obtained by a two-parameter local probing of g(E_p), and it converges in only two subsequent reweighting iterations on large intervals. The method is illustrated on the model system of a 432 atom cluster bound by a Rydberg type potential. Convergence properties are first examined in detail, particularly in the phase transition zone. We get g(E_p) varying by a factor of 10^3700 over the energy range [0.01 eV < E_p < 6000 eV], covered by only eight overlapping intervals. Canonical quantities are derived, such as the internal energy U(T) and the heat capacity C_V(T). This reveals the solid to liquid phase transition, lying in our conditions at the triple point. This phase transition is further studied by computing a Lindemann-Berry index, the atomic cluster density n(r), and the pressure, demonstrating the progressive surface melting at this triple point. Some limited results are also given for 1224 and 4044 atom clusters.
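Once g(E_p) is tabulated, the canonical quantities quoted above follow by direct summation over the energy grid. A minimal sketch with an invented ln g(E_p) (log-domain arithmetic is essential, since the real g spans thousands of decades):

```python
import numpy as np

# made-up discrete density of states over a potential-energy grid
# (illustrative; the paper's g(Ep) varies by ~10^3700, hence log arithmetic)
E = np.linspace(0.0, 10.0, 200)                 # energy grid (arbitrary units)
ln_g = 50.0 * np.sqrt(E)                        # toy ln g(Ep), increasing with E

def canonical(T):
    b = 1.0 / T                                 # inverse temperature (k_B = 1)
    w = ln_g - b * E
    w -= w.max()                                # log-sum-exp shift for stability
    p = np.exp(w)
    p /= p.sum()                                # canonical probabilities
    U = np.sum(p * E)                           # internal energy <E>
    Cv = b**2 * (np.sum(p * E**2) - U**2)       # heat capacity from fluctuations
    return U, Cv

Ts = np.linspace(0.05, 1.0, 40)
res = np.array([canonical(T) for T in Ts])
U, Cv = res[:, 0], res[:, 1]
```

A peak in C_V(T) computed this way is the standard signature of the solid-to-liquid transition the abstract describes; here the toy ln g is smooth, so U simply rises monotonically with T.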
Cu-Au Alloys Using Monte Carlo Simulations and the BFS Method for Alloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Good, Brian; Ferrante, John
1996-01-01
Semi-empirical methods have shown considerable promise in aiding the calculation of many properties of materials. Materials used in engineering applications have defects that occur for various reasons, including processing. In this work we present the first application of the BFS method for alloys to describe some aspects of microstructure due to processing for the Cu-Au system (CuAu, CuAu3, and Cu3Au). We use finite temperature Monte Carlo calculations in order to show the influence of 'heat treatment' on the low-temperature phase of the alloy. Although relatively simple, this system has enough features to serve as a first test of the reliability of the technique. The main questions to be answered in this work relate to the existence of low temperature ordered structures for specific concentrations, for example, the ability to distinguish between rather similar phases for equiatomic alloys (CuAu I and CuAu II, the latter characterized by an antiphase boundary separating two identical phases).
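The finite-temperature "heat treatment" machinery, composition-conserving swap moves accepted by a Metropolis rule under a cooling schedule, can be sketched with a toy nearest-neighbor pair Hamiltonian standing in for the BFS energetics (which this sketch does not attempt to reproduce). With unlike neighbors favored, the equiatomic system orders toward a CuAu-like checkerboard:

```python
import numpy as np

rng = np.random.default_rng(3)
L, J = 10, 1.0    # J > 0 favors unlike neighbors -> checkerboard (CuAu-like) order

# equiatomic toy alloy on a square lattice: +1 = Au, -1 = Cu;
# swap moves conserve the composition exactly
s = np.array([1] * (L * L // 2) + [-1] * (L * L // 2))
rng.shuffle(s)
s = s.reshape(L, L)

def total_energy(s):
    # periodic nearest-neighbor pair energy; each bond counted once via rolls
    return J * float(np.sum(s * np.roll(s, 1, axis=0)) +
                     np.sum(s * np.roll(s, 1, axis=1)))

E = total_energy(s)
for T in np.linspace(3.0, 0.3, 10):              # crude "heat treatment" schedule
    for _ in range(4000):
        i1, j1, i2, j2 = rng.integers(L, size=4)
        if s[i1, j1] == s[i2, j2]:
            continue                             # swapping like atoms does nothing
        s[i1, j1], s[i2, j2] = s[i2, j2], s[i1, j1]       # trial swap
        dE = total_energy(s) - E
        if rng.random() < np.exp(min(0.0, -dE / T)):
            E += dE                              # accept
        else:
            s[i1, j1], s[i2, j2] = s[i2, j2], s[i1, j1]   # revert

E_final = total_energy(s)
```

The fully ordered checkerboard has energy -2*J*L*L, so a final energy well below zero signals that the anneal has produced substantial low-temperature order from the random starting configuration.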
Systematic hierarchical coarse-graining with the inverse Monte Carlo method
Lyubartsev, Alexander P.; Naômé, Aymeric; Vercauteren, Daniel P.; Laaksonen, Aatto
2015-12-28
We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at less accurate level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730–3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package MagiC is developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pairs DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair-potentials are used directly as look-up tables but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as similar position fluctuation profile.
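The inversion loop can be illustrated with the simpler cousin of full IMC, iterative Boltzmann inversion, which updates the pair potential by kT ln(g_current/g_target) per distance bin (full IMC additionally uses the cross-correlations between bins). The 1D "simulation" and the target RDF below are toy stand-ins, not MagiC or a real MD engine:

```python
import numpy as np

rng = np.random.default_rng(4)
kT = 1.0
r = np.linspace(0.1, 3.0, 60)                    # distance grid

# target "RDF" of the fine-grained system (made up for illustration)
g_target = np.exp(-((r - 1.0) / 0.25) ** 2) + 0.3

def sample_g(V, n=200_000):
    # 1D toy "simulation": distances Boltzmann-distributed in potential V(r)
    p = np.exp(-V / kT)
    p /= p.sum()
    counts = np.bincount(rng.choice(len(r), size=n, p=p), minlength=len(r))
    return counts / counts.max()                 # crude normalization

V = np.zeros_like(r)                             # start from a zero CG potential
for _ in range(5):
    g = sample_g(V)
    # iterative Boltzmann inversion update (a simpler relative of full IMC)
    V = V + kT * np.log((g + 1e-6) / (g_target + 1e-6))

g_final = sample_g(V)
err = np.max(np.abs(g_final / g_final.max() - g_target / g_target.max()))
```

In this toy setting the sampled distribution is exactly Boltzmann in V, so the loop converges almost immediately; in a real CG simulation each iteration requires a full MD or MC run, which is why the number of refinement iterations matters.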
Quiet Monte Carlo Method for the Simulation of Semi-Collisional Plasmas
NASA Astrophysics Data System (ADS)
Albright, Brian J.
2001-10-01
The modeling of collisions among particles in a plasma poses a challenge for computer simulation. Traditional simulation methods are able to model well the extremes of highly collisional plasmas (MHD and Hall-MHD simulations) and collisionless plasmas (particle-in-cell simulations). However, the intermediate, semi-collisional regime is more problematic. In semi-collisional plasmas, the collision times are comparable to the dynamical time scales of interest in the system and the collisionality often varies as a function of time or position. Some examples include interpenetrating laser-produced plasmas, tokamak plasmas near edges and divertors, plasmas in the Earth's ionosphere, cometary exospheres, and the interstellar medium. Some PIC plasma simulations have been developed that can, in a limited way, model collisions. These include the early work of Shanny et al. [Phys. Fluids 10, 1281 (1967)], the binary collision model of Takizuka and Abe [J. Comput. Phys. 25, 205 (1977)], and the collision field method of Jones et al. [J. Comput. Phys. 117, 194 (1996)]. In this talk, a new approach to particle simulation, called "quiet direct simulation Monte Carlo" (QDSMC), will be described that can, in principle, treat plasmas with arbitrary and arbitrarily varying collisionality. The essence of the QDSMC approach is the use of carefully chosen weights for the particles (e.g., Gauss-Hermite, for Maxwellian distributions), which are destroyed each time step after the particle information is deposited onto the grid and then reconstructed at the beginning of the next time step. The method overcomes the usual limitations of particle methods: limited dynamical range and excessive statistical noise. The QDSMC method will be discussed, as will its application as "proof of principle" to diffusion, hydrodynamics, and radiation transport. A QDSMC formulation of collisional, kinetic plasma simulation will be outlined, and preliminary results will be presented.
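The "quiet" idea, deterministic Gauss-Hermite particles in place of random Maxwellian samples, is easy to demonstrate in one velocity dimension: the quadrature weights reproduce the low-order moments to machine precision, whereas a random sample of the same distribution carries statistical noise of order 1/sqrt(N). The parameters below are arbitrary:

```python
import numpy as np

# QDSMC idea: represent a Maxwellian with a handful of deterministic
# Gauss-Hermite "particles" instead of many random samples
sigma = 1.7                                   # thermal speed (arbitrary units)
x, w = np.polynomial.hermite.hermgauss(8)     # 8 nodes/weights, exact to degree 15
v = np.sqrt(2.0) * sigma * x                  # particle velocities
W = w / np.sqrt(np.pi)                        # particle weights, summing to 1

density = W.sum()                             # 0th moment: number density
mean_v = np.sum(W * v)                        # 1st moment: drift velocity
energy = np.sum(W * v**2)                     # 2nd moment: equals sigma**2

# compare with a random sample of the same Maxwellian: noise ~ 1/sqrt(N)
rng = np.random.default_rng(5)
mc_energy = np.mean(rng.normal(0.0, sigma, 1000) ** 2)
```

In the full method these weighted particles are destroyed after their moments are deposited on the grid each time step and reconstructed from the updated fields at the next step, which is what suppresses the statistical noise of ordinary particle methods.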
A MONTE-CARLO Method for Estimating Stellar Photometric Metallicity Distributions
NASA Astrophysics Data System (ADS)
Gu, Jiayin; Du, Cuihua; Jing, Yingjie; Zuo, Wenbo
2016-07-01
Based on the Sloan Digital Sky Survey, we develop a new Monte-Carlo-based method to estimate the photometric metallicity distribution function (MDF) for stars in the Milky Way. Compared with other photometric calibration methods, this method enables a more reliable determination of the MDF, particularly at the metal-poor and metal-rich ends. We present a comparison of our new method with a previous polynomial-based approach and demonstrate its superiority. As an example, we apply this method to main-sequence stars with 0.2 < g - r < 0.6, 6 kpc < R < 9 kpc, and in different intervals in height above the plane, |Z|. The MDFs for the selected stars within two relatively local intervals (0.8 kpc < |Z| < 1.2 kpc, 1.5 kpc < |Z| < 2.5 kpc) can be well fit by two Gaussians with peaks at [Fe/H] ≈ -0.6 and -1.2, respectively: one associated with the disk system and the other with the halo. The MDFs for the selected stars within two more distant intervals (3 kpc < |Z| < 5 kpc, 6 kpc < |Z| < 9 kpc) can be decomposed into three Gaussians with peaks at [Fe/H] ≈ -0.6, -1.4, and -1.9, respectively, where the two lower peaks may provide evidence for a two-component model of the halo: the inner halo and the outer halo. The number ratio between the disk component and the halo component(s) decreases with vertical distance from the Galactic plane, which is consistent with the previous literature.
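The Monte-Carlo step, resampling each star's photometry within its errors and accumulating every realization into the MDF, can be sketched with a synthetic catalog. The color-metallicity polynomial and the error model below are hypothetical placeholders, not the paper's SDSS calibration:

```python
import numpy as np

rng = np.random.default_rng(6)

# hypothetical color -> [Fe/H] calibration (an illustrative polynomial,
# NOT the published SDSS calibration); x is a color index such as u - g
def calib(x):
    return -5.0 + 4.0 * x - 0.8 * x**2

# synthetic "catalog": true colors plus per-star photometric errors
n_star = 2000
x_true = rng.uniform(0.8, 1.6, n_star)
sig = np.full(n_star, 0.05)                      # photometric error in the color
x_obs = x_true + rng.normal(0.0, sig)

# polynomial approach: a single [Fe/H] per star from the observed color
feh_point = calib(x_obs)

# Monte Carlo approach: resample each star's color within its error many
# times and accumulate all realizations into the MDF histogram
n_mc = 200
x_mc = x_obs[:, None] + rng.normal(0.0, 1.0, (n_star, n_mc)) * sig[:, None]
feh_mc = calib(x_mc).ravel()

bins = np.linspace(-3.0, 0.5, 36)
mdf_point, _ = np.histogram(feh_point, bins=bins, density=True)
mdf_mc, _ = np.histogram(feh_mc, bins=bins, density=True)
```

The resampled histogram is smoother and propagates the photometric uncertainty into the tails, which is the mechanism behind the claimed improvement at the metal-poor and metal-rich ends.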
On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems
Walsh, Jon
2015-08-31
The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.
Favorite, J.A.
1999-09-01
In previous work, exponential convergence of Monte Carlo solutions using the reduced source method with Legendre expansion has been achieved only in one-dimensional rod and slab geometries. In this paper, the method is applied to three-dimensional (right parallelepiped) problems, with resulting evidence suggesting success. As implemented in this paper, the method approximates an angular integral of the flux with a discrete-ordinates numerical quadrature. It is possible that this approximation introduces an inconsistency that must be addressed.
NASA Technical Reports Server (NTRS)
Olynick, David P.; Hassan, H. A.; Moss, James N.
1988-01-01
A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.
Accuracy of Monte Carlo Criticality Calculations During BR2 Operation
Kalcheva, Silva; Koonen, Edgar; Ponsard, Bernard
2005-08-15
The Belgian Material Test Reactor BR2 is a strongly heterogeneous high-flux engineering test reactor at SCK-CEN (Centre d'Etude de l'Energie Nucleaire) in Mol with a thermal power of 60 to 100 MW. It deploys highly enriched uranium, water-cooled concentric-plate fuel elements, positioned inside a beryllium reflector with a complex hyperboloid arrangement of test holes. The objective of this paper is to validate the MCNP and ORIGEN-S three-dimensional (3-D) model for reactivity predictions of the entire BR2 core during reactor operation. We employ the Monte Carlo code MCNP-4C to evaluate the effective multiplication factor k_eff and the 3-D space-dependent specific power distribution. The one-dimensional code ORIGEN-S is used to calculate the isotopic fuel depletion versus burnup and to prepare a database with depleted fuel compositions. The approach taken is to evaluate the 3-D power distribution at each time step and, along with the database, to evaluate the 3-D isotopic fuel depletion at the next step and to deduce the corresponding shim rod positions of the reactor operation. The capabilities of both codes are fully exploited without constraints on the number of involved isotope depletion chains or an increase of the computational time. The reactor has a complex operation, with important shutdowns between cycles, and its reactivity is strongly influenced by poisons, mainly ³He and ⁶Li from the beryllium reflector, and the burnable absorbers ¹⁴⁹Sm and ¹⁰B in the fresh UAlx fuel. The computational predictions for the shim rod positions at various restarts are within 0.5 $ (β_eff = 0.0072).
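The transport/depletion alternation (power distribution at each step, then isotopic depletion to the next step) can be caricatured with a one-group, single-nuclide toy loop. All cross sections and fluxes below are rough illustrative numbers, not BR2 data, and the "transport" step is a constant-power stand-in for the Monte Carlo solve:

```python
import numpy as np

# toy one-group transport/depletion coupling in the spirit of the MCNP/ORIGEN
# scheme: "transport" gives a flux, depletion advances the fuel, repeat
sigma_f = 580e-24        # 235U thermal fission cross section (cm^2, approximate)
sigma_a = 680e-24        # 235U thermal absorption cross section (cm^2, approximate)
N_U235 = 1.0e21          # fissile number density (atoms/cm^3, illustrative)
power_days = np.arange(0.0, 30.0, 1.0)

def transport_solve(n_u235):
    # stand-in for the Monte Carlo transport step: the flux rises to hold the
    # fission power constant as fissile material burns out
    target_rate = 1.0e13 * N_U235 * sigma_f      # fissions/cm^3/s, held fixed
    return target_rate / (n_u235 * sigma_f)      # one-group flux (n/cm^2/s)

history = []
n = N_U235
for _ in power_days:
    phi = transport_solve(n)                     # step 1: "transport"
    history.append((n, phi))
    n = n * np.exp(-sigma_a * phi * 86400.0)     # step 2: deplete over one day

burnup_fraction = 1.0 - n / N_U235
```

The real scheme replaces both stand-ins with full 3-D calculations (MCNP for the power distribution, ORIGEN-S chains for the depletion) and additionally tracks the reflector poisons and shim rod positions, but the step-wise feedback structure is the same.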
NASA Astrophysics Data System (ADS)
Chen, Hsing-Ta; Cohen, Guy; Reichman, David R.
2017-02-01
In this second paper of a two-part series, we present extensive benchmark results for two different inchworm Monte Carlo expansions for the spin-boson model. Our results are compared to previously developed numerically exact approaches for this problem. A detailed discussion of convergence and error propagation is presented. Our results and analysis allow for an understanding of the benefits and drawbacks of inchworm Monte Carlo compared to other approaches for exact real-time non-adiabatic quantum dynamics.
Quantifying uncertainties in pollutant mapping studies using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Tan, Yi; Robinson, Allen L.; Presto, Albert A.
2014-12-01
Routine air monitoring provides accurate measurements of annual average concentrations of air pollutants, but the low density of monitoring sites limits its capability to capture intra-urban variation. Pollutant mapping studies measure air pollutants at a large number of sites during short periods. However, their short duration can cause substantial uncertainty in reproducing annual mean concentrations. In order to quantify this uncertainty for existing sampling strategies and investigate methods to improve future studies, we conducted Monte Carlo experiments with nationwide monitoring data from the EPA Air Quality System. Typical fixed sampling designs have much larger uncertainties than previously assumed, and produce accurate estimates of annual average pollution concentrations approximately 80% of the time. Mobile sampling has difficulties in estimating long-term exposures for individual sites, but performs better for site groups. The accuracy and the precision of a given design decrease when data variation increases, indicating challenges at sites intermittently impacted by local sources such as traffic. Correcting measurements with reference sites does not completely remove the uncertainty associated with short-duration sampling. Using reference sites with the addition method can better account for temporal variations than the multiplication method. We propose feasible methods for future mapping studies to reduce uncertainties in estimating annual mean concentrations. Future fixed sampling studies should conduct two separate 1-week-long sampling periods in all 4 seasons. Mobile sampling studies should estimate annual mean concentrations for exposure groups with five or more sites. Fixed and mobile sampling designs have comparable probabilities in ordering two sites, so they may have similar capabilities in predicting pollutant spatial variations. Simulated sampling designs have large uncertainties in reproducing seasonal and diurnal variations at individual sites.
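The kind of Monte Carlo subsampling experiment described above is easy to reproduce on synthetic data: generate a year of hourly concentrations, then repeatedly draw short campaigns and count how often they land within a tolerance of the annual mean. The seasonal/diurnal/noise model below is invented, not AQS data:

```python
import numpy as np

rng = np.random.default_rng(7)
hours = np.arange(365 * 24)

# synthetic year of hourly concentrations: seasonal cycle + diurnal cycle +
# lognormal noise (an illustrative model, not real monitoring data)
conc = (10.0
        + 4.0 * np.sin(2 * np.pi * hours / (365 * 24))      # seasonal
        + 2.0 * np.sin(2 * np.pi * hours / 24)              # diurnal
        + rng.lognormal(0.0, 0.6, hours.size))
annual = conc.mean()

def campaign_mean(n_periods, days_each):
    # average over n_periods randomly placed sampling windows
    starts = rng.integers(0, hours.size - days_each * 24, n_periods)
    return np.mean([conc[t:t + days_each * 24].mean() for t in starts])

def hit_rate(n_periods, days_each, trials=2000, tol=0.25):
    est = np.array([campaign_mean(n_periods, days_each) for _ in range(trials)])
    return np.mean(np.abs(est - annual) / annual < tol)

rate_one = hit_rate(1, 14)        # a single two-week campaign
rate_four = hit_rate(4, 7)        # four one-week campaigns spread over the year
```

Splitting the same number of sampling days across the year averages out the seasonal cycle, so the multi-period design recovers the annual mean far more reliably, the same effect that motivates sampling in all four seasons.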
Constant-pH Hybrid Nonequilibrium Molecular Dynamics–Monte Carlo Simulation Method
2016-01-01
A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys. 2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD–MC scheme is separated into two steps, reducing the need for generating a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent pKa neMD–MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because of the simple linear increase in the computational cost relative to the number of titratable sites, the present method is naturally able to treat extremely large systems. PMID:26300709
Constant-pH Hybrid Nonequilibrium Molecular Dynamics-Monte Carlo Simulation Method.
Chen, Yunjie; Roux, Benoît
2015-08-11
A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys. 2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD-MC scheme is separated into two steps, reducing the need for generating a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent pKa neMD-MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because of the simple linear increase in the computational cost relative to the number of titratable sites, the present method is naturally able to treat extremely large systems.
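The first (inexpensive) step of the two-step scheme, a Metropolis draw of the protonation state from an intrinsic pKa, can be sketched in isolation; the costly neMD switching trajectory that would follow an accepted change is omitted here. At equilibrium the protonated fraction must match the Henderson-Hasselbalch relation:

```python
import numpy as np

rng = np.random.default_rng(8)
ln10 = np.log(10.0)

# step 1 of the two-step scheme: Metropolis sampling of a single site's
# protonation state from its intrinsic pKa alone (neMD switch omitted)
def simulate_site(pKa, pH, steps=100_000):
    protonated = True
    count = 0
    for _ in range(steps):
        # free-energy change in units of kT for flipping the current state:
        # deprotonation costs ln10*(pKa - pH); protonation costs the reverse
        dG = ln10 * (pKa - pH) if protonated else ln10 * (pH - pKa)
        if rng.random() < np.exp(min(0.0, -dG)):
            protonated = not protonated
        count += protonated
    return count / steps

frac = simulate_site(pKa=4.0, pH=4.5)          # an Asp-like site above its pKa
hh = 1.0 / (1.0 + 10.0 ** (4.5 - 4.0))         # Henderson-Hasselbalch fraction
```

Gating the expensive nonequilibrium switches on this cheap intrinsic-pKa draw is what removes most of the unproductive trajectories; the environment-dependent correction is then handled by the neMD–MC acceptance itself.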
NASA Astrophysics Data System (ADS)
Da, B.; Sun, Y.; Mao, S. F.; Zhang, Z. M.; Jin, H.; Yoshikawa, H.; Tanuma, S.; Ding, Z. J.
2013-06-01
A reverse Monte Carlo (RMC) method is developed to obtain the energy loss function (ELF) and optical constants from a measured reflection electron energy-loss spectroscopy (REELS) spectrum by an iterative Monte Carlo (MC) simulation procedure. The method combines the simulated annealing method, i.e., a Markov chain Monte Carlo (MCMC) sampling of oscillator parameters, surface and bulk excitation weighting factors, and band gap energy, with a conventional MC simulation of electron interaction with solids, which acts as a single step of MCMC sampling in this RMC method. To examine the reliability of this method, we have verified that the output data of the dielectric function are essentially independent of the initial values of the trial parameters, which is a basic property of a MCMC method. The optical constants derived for SiO2 in the energy loss range of 8-90 eV are in good agreement with other available data, and relevant bulk ELFs are checked by oscillator strength-sum and perfect-screening-sum rules. Our results show that the dielectric function can be obtained by the RMC method even with a wide range of initial trial parameters. The RMC method is thus a general and effective method for determining the optical properties of solids from REELS measurements.
Zhu, Caigang; Liu, Quan
2012-01-01
We present a hybrid method that combines a multilayered scaling method and a perturbation method to speed up the Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with finite-size tumor-like heterogeneities. The proposed method consists of two steps. In the first step, a set of photon trajectory information generated from a baseline Monte Carlo simulation is utilized to scale the exit weight and exit distance of survival photons for the multilayered tissue model. In the second step, another set of photon trajectory information, including the locations of all collision events from the baseline simulation and the scaling result obtained from the first step, is employed by the perturbation Monte Carlo method to estimate diffuse reflectance from the multilayered tissue model with tumor-like heterogeneities. Our method is demonstrated to shorten simulation time by several orders of magnitude. Moreover, this hybrid method works for a larger range of probe configurations and tumor models than the scaling method or the perturbation method alone.
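The second-step reweighting can be illustrated with the standard perturbation Monte Carlo likelihood-ratio formula (a sketch: the collision count and path length inside the perturbed region are assumed to come from the baseline trajectory record, and the function name is ours, not the paper's):

```python
import math

def perturbed_exit_weight(w0, n_collisions, path_length, mua0, mus0, mua1, mus1):
    """Standard pMC reweighting sketch: rescale a photon's recorded exit
    weight w0 for perturbed absorption (mua) and scattering (mus)
    coefficients, given the number of collisions and the total path length
    inside the perturbed region from the baseline simulation."""
    mut0 = mua0 + mus0
    mut1 = mua1 + mus1
    # likelihood ratio of the same recorded trajectory under new coefficients
    return w0 * (mus1 / mus0) ** n_collisions * math.exp(-(mut1 - mut0) * path_length)
```

With identical coefficients the weight is unchanged; raising absorption in the perturbed region attenuates it exponentially in the recorded path length.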
NASA Astrophysics Data System (ADS)
Du, Zhengchun; Zhu, Mengrui; Wu, Zhaoyong; Yang, Jianguo
2016-12-01
The uncertainty determination of geometrical feature measurements on coordinate measuring machines (CMMs) is an essential part of a reliable quality-control process. However, the most commonly used methods for uncertainty assessment are difficult to apply and require not only a large number of repeated measurements but also rich operating experience. Based on error ellipse theory and the Monte Carlo simulation method, an uncertainty evaluation method for CMM measurements is presented. For circular features, the uncertainty evaluation model was established and extended, via Monte Carlo simulation, to an application measuring the centre-to-centre distance of two holes. A verification experiment was conducted; its results agreed reasonably well with those of the traditional methods, which confirms the validity of the proposed method.
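The Monte Carlo propagation step can be sketched for the two-hole distance case (illustrative only, not the paper's exact error-ellipse model: each fitted hole centre is perturbed with an isotropic Gaussian uncertainty):

```python
import math
import random
import statistics

def distance_uncertainty(c1, c2, sigma_xy, n=20000, seed=0):
    """Monte Carlo sketch: propagate an assumed isotropic Gaussian
    uncertainty sigma_xy on each fitted hole centre into the
    centre-to-centre distance; returns (mean, standard uncertainty)."""
    rng = random.Random(seed)
    d = []
    for _ in range(n):
        x1 = c1[0] + rng.gauss(0.0, sigma_xy)
        y1 = c1[1] + rng.gauss(0.0, sigma_xy)
        x2 = c2[0] + rng.gauss(0.0, sigma_xy)
        y2 = c2[1] + rng.gauss(0.0, sigma_xy)
        d.append(math.hypot(x2 - x1, y2 - y1))
    return statistics.mean(d), statistics.stdev(d)
```

For well-separated centres the sampled standard uncertainty approaches the analytic value sqrt(2)·sigma_xy, which gives a quick sanity check on the simulation.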
Avrorin, E. N.; Tsvetokhin, A. G.; Xenofontov, A. I.; Kourbatova, E. I.; Regens, J. L.
2002-02-26
This paper presents the results of an ongoing research and development project conducted by Russian institutions in Moscow and Snezhinsk, supported by the International Science and Technology Center (ISTC), in collaboration with the University of Oklahoma. The joint study focuses on developing and applying analytical tools to characterize contaminant transport and assess risks associated with the migration of radionuclides and heavy metals in the water column and sediments of large reservoirs or lakes. The analysis centers on the development and evaluation of theoretical-computational models that describe the distribution of radioactive wastewater within a reservoir and characterize the associated radiation field, as well as estimate doses received from radiation exposure. Monte Carlo-based computational methods are applied to increase the precision of results and to reduce computing time when estimating the characteristics of the radiation field emitted from the contaminated wastewater layer. The calculated migration of radionuclides is used to estimate distributions of radiation doses that could be received by an exposed population from specified volumes of discrete aqueous sources. The calculated dose distributions can be used to support near-term and long-term decisions about priorities for environmental remediation and stewardship.
Applications of Monte Carlo methods for the analysis of MHTGR case of the VHTRC benchmark
Difilippo, F.C.
1994-03-01
Monte Carlo methods, as implemented in the MCNP code, have been used to analyze the neutronics characteristics of benchmarks related to Modular High Temperature Gas-Cooled Reactors. The benchmarks are idealized versions of the Japanese (VHTRC) and Swiss (PROTEUS) facilities and an actual configuration of the PROTEUS Configuration 1 experiment. The purpose of the unit-cell benchmarks is to compare multiplication constants, critical bucklings, migration lengths, reaction rates and spectral indices. The purpose of the full-reactor benchmarks is to compare multiplication constants, reaction rates, spectral indices, neutron balances, reaction-rate profiles, temperature coefficients of reactivity and effective delayed neutron fractions. All of these parameters can be calculated by MCNP, which can provide a very detailed model of the geometry of the configurations, from fuel particles to entire fuel assemblies, while using a continuous-energy model. These characteristics make MCNP a very useful tool for analyzing these MHTGR benchmarks. The author used the latest version of MCNP, 4.x (01/12/93), with an ENDF/B-V cross-section library. This library does not yet contain temperature-dependent resonance materials, so all calculations correspond to room temperature, T = 300 K. Two separate reports were made: one for the VHTRC benchmark, the other for the PROTEUS benchmark.
Improving Bayesian analysis for LISA Pathfinder using an efficient Markov Chain Monte Carlo method
NASA Astrophysics Data System (ADS)
Ferraioli, Luigi; Porter, Edward K.; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Gibert, Ferran; Hewitson, Martin; Hueller, Mauro; Karnesis, Nikolaos; Korsakova, Natalia; Nofrarias, Miquel; Plagnol, Eric; Vitale, Stefano
2014-02-01
We present a parameter estimation procedure based on a Bayesian framework, applying a Markov chain Monte Carlo algorithm to the calibration of the dynamical parameters of the LISA Pathfinder satellite. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm in an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy, but one moves in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher information matrix. The algorithm proposing jumps in the eigen-space of the Fisher information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to LISA Pathfinder data. For this experiment, we return parameter values that are all within ~1σ of the injected values. When we analyse the accuracy of our parameter estimation in terms of its effect on the force-per-unit-mass noise, we find that the induced errors are three orders of magnitude less than the expected experimental uncertainty in the power spectral density.
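The eigen-space jump idea can be sketched in a few lines (an assumption-laden illustration, not the LISA Pathfinder pipeline: step sizes scale as 1/sqrt(eigenvalue), so poorly constrained directions receive the largest moves):

```python
import numpy as np

def fisher_eigen_proposal(theta, fisher, rng, scale=1.0):
    """Sketch of a Fisher eigen-space MCMC proposal: diagonalize the
    Fisher information matrix and step along its eigenvectors with sizes
    proportional to 1 / sqrt(eigenvalue)."""
    vals, vecs = np.linalg.eigh(fisher)
    # standard-normal steps, scaled per eigen-direction (floor avoids 1/0)
    steps = rng.standard_normal(theta.shape) / np.sqrt(np.maximum(vals, 1e-30))
    return theta + scale * (vecs @ steps)
```

For a diagonal Fisher matrix diag(1e6, 1) the proposals move ~1000x less along the tightly constrained first coordinate, which is the behaviour that buys the higher acceptance rate reported above.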
NASA Astrophysics Data System (ADS)
Velazquez, L.; Castro-Palacio, J. C.
2015-03-01
Velazquez and Curilef [J. Stat. Mech. (2010) P02002, doi:10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, doi:10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), doi:10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site qL during the temperature-driven phase transition of this model, whose size dependence seems to follow a power law qL(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. The compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞ is discussed.
Development of a software package for solid-angle calculations using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Zhang, Jie; Chen, Xiulian; Zhang, Changsheng; Li, Gang; Xu, Jiayun; Sun, Guangai
2014-02-01
Solid-angle calculations play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources, and they are often complicated. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The proposed software calculates solid angles using the Monte Carlo method, into which a new type of variance reduction technique has been integrated. The package, developed with the Microsoft Foundation Classes (MFC) in Microsoft Visual C++, has a graphical user interface, in which the visualization function is integrated in conjunction with OpenGL. One advantage of the proposed software package is that it can calculate, without difficulty, the solid angle subtended by a detector of various geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) to a point, circular or cylindrical source. The results obtained from the proposed software package were compared with those obtained from previous studies and those calculated with Geant4. The comparison shows that the proposed software package produces accurate solid-angle values with a greater computation speed than Geant4.
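The core Monte Carlo idea is simple to sketch for the easiest case, an on-axis point source and a disk detector (illustrative only, without the paper's variance reduction): sample directions uniformly over the forward hemisphere and count hits on the detector plane.

```python
import math
import random

def solid_angle_disk(R, d, n=200000, seed=1):
    """MC sketch: solid angle subtended by a disk of radius R at axial
    distance d from an on-axis point source, by hit-or-miss sampling of
    directions uniform over the forward hemisphere."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        u = 1.0 - rng.random()          # cos(theta), uniform in (0, 1]
        phi = 2.0 * math.pi * rng.random()
        st = math.sqrt(1.0 - u * u)
        # intersect the ray with the detector plane z = d
        x = d * st * math.cos(phi) / u
        y = d * st * math.sin(phi) / u
        if x * x + y * y <= R * R:
            hits += 1
    return 2.0 * math.pi * hits / n     # hemisphere solid angle * hit fraction
```

For this geometry the exact answer 2π(1 − d/√(d² + R²)) provides the validation point; the cited package generalizes the same counting idea to prisms, cylinders and extended sources.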
NASA Astrophysics Data System (ADS)
Zhang, Guannan; Del-Castillo-Negrete, Diego
2016-10-01
Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the PDFs of RE in the two-dimensional momentum space. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, significantly reducing the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for large-scale RE simulations. This work is supported by DOE FES and ASCR under Contract Numbers ERKJ320 and ERAT377.
Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Kramer, Richard
2011-08-01
Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.
Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics
Hall, Howard L
2012-01-01
Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation, or thermochromatography, has been used in the past for rapid separations in the study of newly created elements and as a basis for their chemical classification. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited by the lack of available experimental data for validation.
Kumar, A; Chauhan, S
2017-03-08
Obesity is one of the most pressing health burdens in developed countries. One of the strategies to prevent obesity is the inhibition of the pancreatic lipase enzyme. The aim of this study was to build QSAR models for natural lipase inhibitors by using the Monte Carlo method. The molecular structures were represented by the simplified molecular input line entry system (SMILES) notation and molecular graphs. Three splits, each into training, calibration and test sets, were examined and validated. The statistical quality of all the described models was very good. The best QSAR model showed the following statistical parameters: r(2) = 0.864 and Q(2) = 0.836 for the test set and r(2) = 0.824 and Q(2) = 0.819 for the validation set. Structural attributes that increase and decrease the activity (expressed as pIC50) were also defined. Using the defined structural attributes, the design of new potential lipase inhibitors is also presented. Additionally, a molecular docking study was performed to determine the binding modes of the designed molecules.
Monte carlo method-based QSAR modeling of penicillins binding to human serum proteins.
Veselinović, Jovana B; Toropov, Andrey A; Toropova, Alla P; Nikolić, Goran M; Veselinović, Aleksandar M
2015-01-01
The binding of penicillins to human serum proteins was modeled with optimal descriptors based on the Simplified Molecular Input-Line Entry System (SMILES). The concentrations of protein-bound drug for 87 penicillins expressed as percentage of the total plasma concentration were used as experimental data. The Monte Carlo method was used as a computational tool to build up the quantitative structure-activity relationship (QSAR) model for penicillins binding to plasma proteins. One random data split into training, test and validation set was examined. The calculated QSAR model had the following statistical parameters: r(2) = 0.8760, q(2) = 0.8665, s = 8.94 for the training set and r(2) = 0.9812, q(2) = 0.9753, s = 7.31 for the test set. For the validation set, the statistical parameters were r(2) = 0.727 and s = 12.52, but after removing the three worst outliers, the statistical parameters improved to r(2) = 0.921 and s = 7.18. SMILES-based molecular fragments (structural indicators) responsible for the increase and decrease of penicillins binding to plasma proteins were identified. The possibility of using these results for the computer-aided design of new penicillins with desired binding properties is presented.
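The r(2) and s statistics quoted in these QSAR abstracts can be reproduced for any prediction vector with the usual definitions (a sketch of one common convention: r² as the squared Pearson correlation and s as the RMS error; the papers' exact conventions are assumed, not confirmed):

```python
import math

def r_squared(y_obs, y_pred):
    """Squared Pearson correlation between observed and predicted values."""
    n = len(y_obs)
    mo = sum(y_obs) / n
    mp = sum(y_pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(y_obs, y_pred))
    vo = sum((o - mo) ** 2 for o in y_obs)
    vp = sum((p - mp) ** 2 for p in y_pred)
    return cov * cov / (vo * vp)

def rmse(y_obs, y_pred):
    """Standard error of estimate s, here taken as the plain RMS error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(y_obs, y_pred)) / len(y_obs))
```

Note that r² as defined here is invariant under linear rescaling of the predictions, which is why a model can show high r² on a validation set while a few outliers still inflate s.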
Ergün, Ayla; Barbieri, Riccardo; Eden, Uri T; Wilson, Matthew A; Brown, Emery N
2007-03-01
The stochastic state point process filter (SSPPF) and steepest descent point process filter (SDPPF) are adaptive filter algorithms for state estimation from point process observations that have been used to track neural receptive field plasticity and to decode the representations of biological signals in ensemble neural spiking activity. The SSPPF and SDPPF are constructed using, respectively, Gaussian and steepest descent approximations to the standard Bayes and Chapman-Kolmogorov (BCK) system of filter equations. To extend these approaches for constructing point process adaptive filters, we develop sequential Monte Carlo (SMC) approximations to the BCK equations in which the SSPPF and SDPPF serve as the proposal densities. We term the two new SMC point process filters SMC-PPFs and SMC-PPFD, respectively. We illustrate the new filter algorithms by decoding the wind stimulus magnitude from simulated neural spiking activity in the cricket cercal system. The SMC-PPFs and SMC-PPFD provide more accurate state estimates at low particle numbers than a conventional bootstrap SMC filter algorithm in which the state transition probability density is the proposal density. We also use the SMC-PPFs algorithm to track the temporal evolution of a spatial receptive field of a rat hippocampal neuron recorded while the animal foraged in an open environment. Our results suggest an approach for constructing point process adaptive filters using SMC methods.
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.
2015-09-08
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
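The random-walk solver under analysis can be sketched in its forward (collision-estimator) form; the paper analyzes the adjoint variant, but the walk-length and termination trade-offs are the same in spirit. This is an illustration under the assumption that the spectral radius of H is below 1:

```python
import numpy as np

def neumann_ulam_solve(H, b, n_walks=6000, p_kill=0.2, seed=0):
    """Forward Neumann-Ulam sketch: estimate each component of the solution
    of x = H x + b by random walks with a geometric kill probability.
    Signs of H are carried in the walk weights."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    P = np.abs(H) / np.abs(H).sum(axis=1, keepdims=True)  # transition matrix
    for i in range(n):
        total = 0.0
        for _ in range(n_walks):
            s, w = i, 1.0
            total += w * b[s]                  # collision-estimator tally
            while rng.random() > p_kill:       # geometric walk length
                t = rng.choice(n, p=P[s])
                w *= H[s, t] / (P[s, t] * (1.0 - p_kill))
                s = t
                total += w * b[s]
        x[i] = total / n_walks
    return x
```

The average walk length here is (1 − p_kill)/p_kill steps, which is the quantity the spectral analysis above relates to the eigenvalues of the operator; in a domain-decomposed run, walks that step across a subdomain boundary are the "leaked" fraction that drives communication cost.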
Dynamic measurements and uncertainty estimation of clinical thermometers using Monte Carlo method
NASA Astrophysics Data System (ADS)
Ogorevc, Jaka; Bojkovski, Jovan; Pušnik, Igor; Drnovšek, Janko
2016-09-01
Clinical thermometers in intensive care units are used for the continuous measurement of body temperature. This study describes a procedure for dynamic measurement uncertainty evaluation, in order to examine the requirements on clinical thermometer dynamic properties in standards and recommendations. Thermistors were used as temperature sensors, transient temperature measurements were performed in water and air, and the measurement data were processed to investigate the thermometers' dynamic properties. The thermometers were mathematically modelled, and a Monte Carlo method was implemented for dynamic measurement uncertainty evaluation. The measurement uncertainty was analysed for static and dynamic conditions. Results showed that the dynamic uncertainty is much larger than the steady-state uncertainty. The dynamic uncertainty analysis was applied to an example of clinical measurements and compared to the current requirements of the ISO standard for clinical thermometers. For thermometers in continuous measurement, no dynamic evaluation was found necessary, as the dynamic measurement uncertainty remained within the target uncertainty; for intermittent predictive thermometers, however, the dynamic properties had a significant impact on the measurement result. Estimation of dynamic uncertainty is thus crucial for the assurance of traceable and comparable measurements.
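The Monte Carlo evaluation can be sketched for a first-order sensor model T(t) = T_body − (T_body − T0)·exp(−t/τ) (an illustration with assumed parameters, not the paper's thermistor model): sample the time constant and additive readout noise, and look at the spread of the reading taken at a given time.

```python
import math
import random
import statistics

def dynamic_uncertainty(T0, T_body, tau_nom, tau_rel_u, t_read, noise_sd,
                        n=20000, seed=0):
    """MC sketch of a dynamic uncertainty budget for a first-order sensor:
    returns (mean reading, standard uncertainty) at read-out time t_read,
    sampling the time constant (relative sd tau_rel_u) and readout noise."""
    rng = random.Random(seed)
    readings = []
    for _ in range(n):
        tau = tau_nom * (1.0 + rng.gauss(0.0, tau_rel_u))
        T = T_body - (T_body - T0) * math.exp(-t_read / max(tau, 1e-9))
        readings.append(T + rng.gauss(0.0, noise_sd))
    return statistics.mean(readings), statistics.stdev(readings)
```

Reading long after the transient (t_read ≫ τ) leaves only the static noise, while reading after roughly one time constant inflates the uncertainty by an order of magnitude or more, which mirrors the dynamic-versus-steady-state finding above.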
NASA Astrophysics Data System (ADS)
Agudelo-Giraldo, J. D.; Restrepo-Parra, E.; Restrepo, J.
2015-10-01
The Metropolis algorithm and the classical Heisenberg approximation were implemented in a Monte Carlo computational approach to the magnetization and resistivity of La2/3Ca1/3MnO3 as functions of the Mn-ion vacancies and an increasing external magnetic field. This compound is ferromagnetic and exhibits the colossal magnetoresistance (CMR) effect. The monolayer was built with L×L×d dimensions, with L=30 umc (units of magnetic cells) in the x-y plane and a thickness of d=12 umc. The Hamiltonian contains interactions between first neighbors, the magnetocrystalline anisotropy effect and the response to the external applied magnetic field. The system considered contains mixed-valence bonds: Mn3+eg'-O-Mn3+eg, Mn3+eg-O-Mn4+d3 and Mn3+eg'-O-Mn4+d3. The vacancies were placed randomly in the sample, replacing any type of Mn ion. The main result shows that without vacancies the transitions at TC (Curie temperature) and TMI (metal-insulator temperature) coincide, whereas with an increasing vacancy percentage TMI falls below TC. This behavior is caused by the competition between the external magnetic field, the vacancy percentage and the magnetocrystalline anisotropy, which favors the magnetoresistive effect at temperatures below TMI. Resistivity loops were also observed, showing a direct correlation with the magnetization hysteresis loops at temperatures below TC.
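The Metropolis/classical-Heisenberg machinery can be sketched with a generic single-spin sweep (a simplification: uniform exchange J, uniaxial anisotropy and a Zeeman term; the paper's three mixed-valence couplings and vacancy bookkeeping are not reproduced):

```python
import math
import random

def metropolis_sweep(spins, J, H_ext, K_anis, T, rng):
    """One Metropolis sweep for classical Heisenberg spins on a periodic
    LxMxD lattice: E = -J sum S_i.S_j (first neighbours)
    - K_anis sum (S_z)^2 - H_ext sum S_z. Spins are unit 3-tuples."""
    L, M, D = len(spins), len(spins[0]), len(spins[0][0])
    for x in range(L):
        for y in range(M):
            for z in range(D):
                # propose a fresh uniform direction (Marsaglia sampling)
                while True:
                    a, b = 2*rng.random() - 1, 2*rng.random() - 1
                    s2 = a*a + b*b
                    if s2 < 1.0:
                        break
                r = 2.0 * math.sqrt(1.0 - s2)
                new = (a*r, b*r, 1.0 - 2.0*s2)
                old = spins[x][y][z]
                nb = [spins[(x+1) % L][y][z], spins[(x-1) % L][y][z],
                      spins[x][(y+1) % M][z], spins[x][(y-1) % M][z],
                      spins[x][y][(z+1) % D], spins[x][y][(z-1) % D]]
                hx = sum(s[0] for s in nb)
                hy = sum(s[1] for s in nb)
                hz = sum(s[2] for s in nb)
                dE = (-J * ((new[0]-old[0])*hx + (new[1]-old[1])*hy
                            + (new[2]-old[2])*hz)
                      - K_anis * (new[2]**2 - old[2]**2)
                      - H_ext * (new[2] - old[2]))
                if dE <= 0.0 or rng.random() < math.exp(-dE / T):
                    spins[x][y][z] = new
```

Run cold, an aligned lattice stays magnetized; run hot, it disorders, which is the Curie-type behaviour underlying the TC discussion above.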
Živković, Jelena V; Trutić, Nataša V; Veselinović, Jovana B; Nikolić, Goran M; Veselinović, Aleksandar M
2015-09-01
The Monte Carlo method was used for QSAR modeling of maleimide derivatives as glycogen synthase kinase-3β inhibitors. The first QSAR model was developed for a series of 74 3-anilino-4-arylmaleimide derivatives, and the second for a series of 177 maleimide derivatives. The QSAR models were calculated with the molecular structure represented by the simplified molecular input-line entry system. Two splits were examined: one into training and test sets for the first QSAR model, and one into training, test and validation sets for the second. The statistical quality of the developed models is very good. The calculated model for 3-anilino-4-arylmaleimide derivatives had the following statistical parameters: r(2)=0.8617 for the training set; r(2)=0.8659 and r(m)(2)=0.7361 for the test set. The calculated model for maleimide derivatives had the following statistical parameters: r(2)=0.9435 for the training set; r(2)=0.9262 and r(m)(2)=0.8199 for the test set; and r(2)=0.8418, r(av)(m)(2)=0.7469 and ∆r(m)(2)=0.1476 for the validation set. Structural indicators considered as molecular fragments responsible for the increase and decrease in the inhibition activity have been defined. The computer-aided design of new potential glycogen synthase kinase-3β inhibitors has been presented by using the defined structural alerts.
Assessment of the Contrast to Noise Ratio in PET Scanners with Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the contrast-to-noise ratio (CNR) of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model, developed with the GATE MC package; reconstructed images were obtained with the STIR software for tomographic image reconstruction. The PET scanner simulated was the GE DiscoveryST. The plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution. Image quality was assessed in terms of the CNR, which was estimated from coronal reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE) OSMAPOSL algorithm, assessed using various subsets (3, 15 and 21) and various iterations (2 to 20). CNR values were found to decrease when both iterations and subsets increase, and two (2) iterations were found to be optimal. The simulated PET evaluation method, based on the TLC plane source, can be useful in image quality assessment of PET scanners.
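The CNR figure of merit itself is straightforward to compute from reconstructed images; a minimal sketch using one common convention, (mean_ROI − mean_background) / std_background, with boolean masks (the study's exact ROI definition is assumed):

```python
import numpy as np

def contrast_to_noise(img, roi_mask, bg_mask):
    """CNR of a region of interest against a background region:
    (mean_ROI - mean_bg) / sample std of the background."""
    roi = img[roi_mask]
    bg = img[bg_mask]
    return (roi.mean() - bg.mean()) / bg.std(ddof=1)
```

Tracking this number across OSMAPOSL subset/iteration settings is what produces the trend reported above (CNR falling as iterations and subsets increase, since later iterations recover contrast but amplify noise faster).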
Method for Performing an Efficient Monte Carlo Simulation of Lipid Mixtures on a Concurrent Computer
NASA Astrophysics Data System (ADS)
Moore, Andrew; Huang, Juyang; Gibson, Thomas
2003-10-01
We are interested in performing extensive Monte Carlo simulations of lipid mixtures in cell membranes. These computations will be performed on a Gnu/Linux Beowulf cluster using the industry-standard Message Passing Interface (MPI) for handling node-to-node communication and overall program management. Devising an efficient parallel decomposition of the simulation is crucial for success. The goal is to balance the load on the compute nodes so that each does the same amount of work and to minimize the amount of (relatively slow) node-to-node communication. To this end, we report a method for performing simulations on a boundless three-dimensional surface. The surface is modeled by a two-dimensional array which can represent either a rectangular or triangular lattice. The array is distributed evenly across multiple processors in a block-row configuration. The sequence of calculations minimizes the delay from passing messages between nodes and uses the delay that does exist to perform local operations on each node.
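The block-row distribution described above reduces to a small indexing computation per rank; a sketch (independent of MPI, so it can be unit-tested; in the MPI program `rank` would come from `MPI_Comm_rank`):

```python
def block_rows(n_rows, n_ranks, rank):
    """Balanced block-row partition: returns the half-open row range
    [start, stop) owned by `rank`. The first n_rows % n_ranks ranks get
    one extra row, so loads differ by at most one row."""
    base, extra = divmod(n_rows, n_ranks)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop
```

Because each rank owns a contiguous band of rows, nearest-neighbour lattice updates only ever need the boundary rows of the two adjacent ranks, which is what keeps the node-to-node message volume small.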
Shepherd, James J; Booth, George H; Alavi, Ali
2012-06-28
Using the homogeneous electron gas (HEG) as a model, we investigate the sources of error in the "initiator" adaptation to full configuration interaction quantum Monte Carlo (i-FCIQMC), with a view to accelerating convergence. In particular, we find that the fixed-shift phase, where the walker number is allowed to grow slowly, can be used to effectively assess stochastic and initiator error. Using this approach we provide simple explanations for the internal parameters of an i-FCIQMC simulation. We exploit the consistent basis sets and adjustable correlation strength of the HEG to analyze properties of the algorithm, and present finite basis benchmark energies for N = 14 over a range of densities 0.5 ≤ r(s) ≤ 5.0 a.u. A single-point extrapolation scheme is introduced to produce complete basis energies for 14, 38, and 54 electrons. It is empirically found that, in the weakly correlated regime, the computational cost scales linearly with the plane wave basis set size, which is justifiable on physical grounds. We expect the fixed-shift strategy to reduce the computational cost of many i-FCIQMC calculations of weakly correlated systems. In addition, we provide benchmarks for the electron gas, to be used by other quantum chemical methods in exploring periodic solid state systems.
[Study on optical energy transmission in biotic tissues by Monte Carlo method].
Ren, Xiaonan; Wei, Shoushui; Yang, Xianzhang; Gao, Di
2010-06-01
Biotic tissues are a kind of highly scattering random medium; studies of laser light propagation in biotic tissues play an important role in biomedical diagnostics and therapeutics. The propagation and distribution of an infinitely narrow photon beam in tissue are simulated by the Monte Carlo method in this paper. Also presented are the energy distribution with depth, the light distribution in tissue, and the reflection and transmittance at the upper and lower surfaces. The optical parameters adopted in this study are g, the albedo, and μa, which influence the energy distribution. The results show that the energy distribution decreases quickly with increasing depth and has a peak value close to the surface; the g factor plays an important part in the energy lost at the upper and lower surfaces; a decrease of the g factor weakens forward propagation, so the penetration depth becomes smaller and the energy becomes more dispersed; and variation of the albedo has a distinct effect on shallow and deep tissues.
Evaluation of Uncertainties in βeff by Means of Deterministic and Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Kodeli, I.; Zwermann, W.
2014-04-01
Due to the influence of delayed neutrons on the reactor dynamics an accurate estimation of the effective delayed neutron fraction (βeff), as well as good understanding of the corresponding uncertainty, is essential for reactor safety analysis. This paper presents the βeff sensitivity and uncertainty analysis based on the derivation of Bretscher's prompt k-ratio equation. Performance of both deterministic (SUSD3D generalized perturbation code) and Monte Carlo (XSUSA random sampling code) methods applied with the multi-group neutron transport codes XSDRN and DANTSYS were compared on a series of ICSBEP critical benchmarks. Using the JENDL-4.0m and SCALE-6.0 covariance matrices the typical βeff uncertainty was found to be around 3-4% and is generally dominated by the uncertainty of delayed nu-bar; depending on the considered assembly, the nu-prompt, inelastic, elastic and fission cross-section uncertainties may also significantly contribute to the overall uncertainty. The βeff measurements in combination with the sensitivity and uncertainty analysis can therefore be exploited for validation of nuclear cross sections, such as delayed fission yields, 238U elastic and inelastic cross section, complementing thus the information obtained from the keff measurements.
Fetal dose assessment from invasive special procedures by Monte Carlo methods.
Metzger, R L; Van Riper, K A
1999-08-01
The assessment of fetal dose from a special procedure in the clinical environment is difficult as patient size, fluoroscopic beam motion, and imaging sequences vary significantly from study to study. Fetal dose is particularly difficult to estimate when the fetus is exposed partially or totally to scatter radiation from images taken in other locations of the mother's body. A method to reliably estimate fetal dose has been developed by using template-based input files for the Monte Carlo radiation transport code MCNP. Female patient phantoms at 0, 3, 6, and 9 months of pregnancy and source terms for common diagnostic tube potentials are used to rapidly build an input file for MCNP. The phantoms can be easily modified to fit patient shape. The geometry and beam location for each type of image acquired (e.g., fluoroscopy, spot filming) is verified by the use of a 3D visualization code (Sabrina). MCNP is then run to estimate the dose to the embryo/fetus and the entrance skin exposure (ESE) for the beam being modeled. The actual ESE for the beam is then measured with ion chambers and the fetal dose is determined from the MCNP-supplied ratio of ESE to fetal dose. Runs are made for each type of imaging and the doses are summed for the total fetal dose. For most procedures, the method can provide an estimate of the fetal dose within one day of the study. The method can also be used to prospectively model a study in order to choose imaging sequences that will minimize fetal dose.
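The per-beam bookkeeping described above reduces to a product and a sum: multiply each measured ESE by a Monte Carlo-derived fetal-dose-per-unit-ESE factor for that beam geometry, then sum over imaging types. A minimal sketch, in which all numbers are illustrative and not clinical data:

```python
def total_fetal_dose(runs):
    """Sum fetal dose over imaging modes.

    Each entry pairs a measured entrance skin exposure (ESE) for one
    beam with a Monte Carlo-derived fetal-dose-per-unit-ESE factor for
    that beam geometry. The values below are purely illustrative.
    """
    return sum(ese * factor for ese, factor in runs)

runs = [
    (120.0, 0.004),  # fluoroscopy: ESE in mGy, fetal dose per unit ESE
    (35.0, 0.006),   # spot films
]
print(round(total_fetal_dose(runs), 2))  # → 0.69
```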
Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling.
Kraan, Aafke Christine
2015-01-01
Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β (+) emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies, where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects in modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, which is followed by a summary of some widely used MC codes in hadron therapy. Then, we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects.
Towards prediction of correlated material properties using quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Wagner, Lucas
Correlated electron systems offer a richness of physics far beyond noninteracting systems. If we would like to pursue the dream of designer correlated materials, or, even to set a more modest goal, to explain in detail the properties and effective physics of known materials, then accurate simulation methods are required. Using modern computational resources, quantum Monte Carlo (QMC) techniques offer a way to directly simulate electron correlations. I will show some recent results on a few extremely challenging materials including the metal-insulator transition of VO2, the ground state of the doped cuprates, and the pressure dependence of magnetic properties in FeSe. By using a relatively simple implementation of QMC, at least some properties of these materials can be described truly from first principles, without any adjustable parameters. Using the QMC platform, we have developed a way of systematically deriving effective lattice models from the simulation. This procedure is particularly attractive for correlated electron systems because the QMC methods treat the one-body and many-body components of the wave function and Hamiltonian on completely equal footing. I will show some examples of using this downfolding technique and the high accuracy of QMC to connect our intuitive ideas about interacting electron systems with high fidelity simulations. The work in this presentation was supported in part by NSF DMR 1206242, the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award Number FG02-12ER46875, and the Center for Emergent Superconductivity, Department of Energy Frontier Research Center under Grant No. DEAC0298CH1088. Computing resources were provided by a Blue Waters Illinois grant and INCITE PhotSuper and SuperMatSim allocations.
Monte Carlo particle-in-cell methods for the simulation of the Vlasov-Maxwell gyrokinetic equations
NASA Astrophysics Data System (ADS)
Bottino, A.; Sonnendrücker, E.
2015-10-01
The particle-in-cell (PIC) algorithm is the most popular method for the discretisation of the general 6D Vlasov-Maxwell problem, and it is also widely used for the simulation of the 5D gyrokinetic equations. The method consists of coupling a particle-based algorithm for the Vlasov equation with a grid-based method for the computation of the self-consistent electromagnetic fields. In this review we derive a Monte Carlo PIC finite-element model starting from a gyrokinetic discrete Lagrangian. The variations of the Lagrangian are used to obtain the time-continuous equations of motion for the particles and the finite-element approximation of the field equations. The Noether theorem for the semi-discretised system implies a certain number of conservation properties for the final set of equations. Moreover, the PIC method can be interpreted as a probabilistic Monte Carlo-like method, consisting of calculating integrals of the continuous distribution function using a finite set of discrete markers. The nonlinear interactions, along with numerical errors, introduce random effects after some time. Therefore, the same tools for error analysis and error reduction used in Monte Carlo numerical methods can be applied to PIC simulations.
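The "integrals of the continuous distribution function using a finite set of discrete markers" idea can be illustrated in miniature: each marker carries a weight f/g, where g is the sampling density, and a moment of f is the weighted marker average times the sampled volume. This is a 1D toy under those assumptions, not the gyrokinetic scheme itself:

```python
import random

def mc_moment(markers, phase_space_volume):
    """Monte Carlo marker estimate of the density integral n = ∫ f dz.

    Each marker is a (position, weight) pair with weight w = f(z)/g(z);
    the integral is the mean weight times the sampled volume.
    """
    mean_w = sum(w for _, w in markers) / len(markers)
    return phase_space_volume * mean_w

# Sample f(z) = z on [0, 1] with uniform g: the exact integral is 0.5.
random.seed(0)
markers = [(z, z) for z in (random.random() for _ in range(100000))]
print(round(mc_moment(markers, 1.0), 2))  # ≈ 0.5
```

The statistical error decays like 1/√N in the marker count N, which is exactly why Monte Carlo variance-reduction tools transfer to PIC noise analysis.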
Forward treatment planning for modulated electron radiotherapy (MERT) employing Monte Carlo methods
Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Lössl, K.; Aebersold, D. M.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-03-15
Purpose: This paper describes the development of a forward planning process for modulated electron radiotherapy (MERT). The approach is based on a previously developed electron beam model used to calculate dose distributions of electron beams shaped by a photon multileaf collimator (pMLC). Methods: As the electron beam model has already been implemented into the Swiss Monte Carlo Plan environment, the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) can be included in the planning process for MERT. In a first step, CT data are imported into Eclipse and a pMLC-shaped electron beam is set up. This initial electron beam is then divided into segments, with the electron energy in each segment chosen according to the distal depth of the planning target volume (PTV) in the beam direction. In order to improve the homogeneity of the dose distribution in the PTV, a feathering process (Gaussian edge feathering) is launched, which results in a number of feathered segments. For each of these segments a dose calculation is performed employing the in-house developed electron beam model along with the macro Monte Carlo dose calculation algorithm. Finally, an automated weight optimization of all segments is carried out and the total dose distribution is read back into Eclipse for display and evaluation. One academic and two clinical situations are investigated for possible benefits of MERT treatment compared to standard treatments performed in our clinics and treatment with a bolus electron conformal (BolusECT) method. Results: The MERT treatment plan of the academic case was superior to the standard single-segment electron treatment plan in terms of organs at risk (OAR) sparing. Further, a comparison between an unfeathered and a feathered MERT plan showed better PTV coverage and homogeneity for the feathered plan, with V95% increased from 90% to 96% and V107% decreased from 8% to nearly 0%. For a clinical breast boost irradiation, the MERT plan
Watanabe, Hiroshi; Yukawa, Satoshi; Novotny, M A; Ito, Nobuyasu
2006-08-01
We construct asymptotic arguments for the relative efficiency of rejection-free Monte Carlo (MC) methods compared to the standard MC method. We find that the efficiency is proportional to exp(const·β) in the Ising, √β in the classical XY, and β in the classical Heisenberg spin systems with inverse temperature β, regardless of the dimension. The efficiency in hard-particle systems is also obtained, and found to be proportional to (ρ_cp − ρ)^(−d), with the closest-packing density ρ_cp, density ρ, and dimension d of the system. We construct and implement a rejection-free Monte Carlo method for the hard-disk system. The rejection-free method has a greater computational efficiency at high densities, and the density dependence of the efficiency is as predicted by our arguments.
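For context, a single rejection-free (n-fold-way) step chooses the next event with probability proportional to its rate and advances the clock by an exponential waiting time, instead of proposing moves and rejecting most of them. A generic sketch of one such step, not the authors' hard-disk implementation:

```python
import random

def rejection_free_step(rates, rng):
    """One rejection-free (n-fold-way) Monte Carlo step.

    Pick event i with probability rates[i]/total, then advance the
    simulation clock by an exponential waiting time with mean 1/total.
    Every step changes the state, which is where the efficiency gain
    over rejection-based sampling comes from at low acceptance rates.
    """
    total = sum(rates)
    r = rng.random() * total
    cum = 0.0
    for i, rate in enumerate(rates):
        cum += rate
        if r < cum:
            break
    dt = rng.expovariate(total)
    return i, dt

rng = random.Random(42)
event, dt = rejection_free_step([0.1, 0.7, 0.2], rng)
print(event, dt > 0)  # → 1 True
```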
Use of Monte Carlo methods in environmental risk assessments at the INEL: Applications and issues
Harris, G.; Van Horn, R.
1996-06-01
The EPA is increasingly considering the use of probabilistic risk assessment techniques as an alternative or refinement of the current point estimate of risk. This report provides an overview of the probabilistic technique called Monte Carlo Analysis. Advantages and disadvantages of implementing a Monte Carlo analysis over a point estimate analysis for environmental risk assessment are discussed. The general methodology is provided along with an example of its implementation. A phased approach to risk analysis that allows iterative refinement of the risk estimates is recommended for use at the INEL.
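The contrast with a point estimate can be made concrete: instead of multiplying single conservative parameter values, a Monte Carlo analysis samples each parameter's distribution and reports percentiles of the resulting risk distribution. A toy sketch with made-up lognormal parameters (not INEL data):

```python
import random

def mc_risk(n, rng):
    """Monte Carlo risk distribution for risk = intake * slope_factor.

    Both parameters are sampled from lognormal distributions with
    purely illustrative shape parameters; a point-estimate assessment
    would instead multiply two single conservative values.
    """
    return [rng.lognormvariate(-2.0, 0.5) * rng.lognormvariate(-4.0, 0.3)
            for _ in range(n)]

rng = random.Random(1)
risks = sorted(mc_risk(20000, rng))
p50 = risks[len(risks) // 2]           # median risk
p95 = risks[int(0.95 * len(risks))]    # upper-percentile risk
print(p50 < p95)  # → True
```

Reporting the spread between such percentiles, rather than one number, is the refinement the phased approach iterates on.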
A Monte Carlo Method for Making the SDSS u-Band Magnitude More Accurate
NASA Astrophysics Data System (ADS)
Gu, Jiayin; Du, Cuihua; Zuo, Wenbo; Jing, Yingjie; Wu, Zhenyu; Ma, Jun; Zhou, Xu
2016-10-01
We develop a new Monte Carlo-based method to convert the Sloan Digital Sky Survey (SDSS) u-band magnitude to the South Galactic Cap u-band Sky Survey (SCUSS) u-band magnitude. Due to the higher accuracy of the SCUSS u-band measurements, the converted u-band magnitude becomes more accurate than the original SDSS u-band magnitude, in particular at the faint end. The average u-magnitude error (for both SDSS and SCUSS) of numerous main-sequence stars with 0.2 < g - r < 0.8 increases as the g-band magnitude becomes fainter. When g = 19.5, the average magnitude error of the SDSS u is 0.11. When g = 20.5, the average SDSS u error rises to 0.22. However, at this magnitude, the average magnitude error of the SCUSS u is just half that of the SDSS u. The SDSS u-band magnitudes of main-sequence stars with 0.2 < g - r < 0.8 and 18.5 < g < 20.5 are converted, so the maximum average error of the converted u-band magnitudes is 0.11. The potential application of this conversion is to derive a more accurate photometric metallicity calibration from SDSS observations, especially for more distant stars. Thus, we can explore stellar metallicity distributions either in the Galactic halo or in some stream stars.
Numerical simulations of blast-impact problems using the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Sharma, Anupam
There is an increasing need to design protective structures that can withstand or mitigate the impulsive loading due to the impact of a blast or a shock wave. A preliminary step in designing such structures is the prediction of the pressure loading on the structure. This is called the "load definition." This thesis is focused on a numerical approach to predict the load definition on arbitrary geometries for a given strength of the incident blast/shock wave. A particle approach, namely the Direct Simulation Monte Carlo (DSMC) method, is used as the numerical model. A three-dimensional, time-accurate DSMC flow solver is developed as a part of this study. Embedded surfaces, modeled as triangulations, are used to represent arbitrary-shaped structures. Several techniques to improve the computational efficiency of the algorithm of particle-structure interaction are presented. The code is designed using the Object Oriented Programming (OOP) paradigm. Domain decomposition with message passing is used to solve large problems in parallel. The solver is extensively validated against analytical results and against experiments. Two kinds of geometries, a box and an I-shaped beam are investigated for blast impact. These simulations are performed in both two- and three-dimensions. A major portion of the thesis is dedicated to studying the uncoupled fluid dynamics problem where the structure is assumed to remain stationary and intact during the simulation. A coupled, fluid-structure dynamics problem is solved in one spatial dimension using a simple, spring-mass-damper system to model the dynamics of the structure. A parametric study, by varying the mass, spring constant, and the damping coefficient, to study their effect on the loading and the displacement of the structure is also performed. Finally, the parallel performance of the solver is reported for three sample-size problems on two Beowulf clusters.
Cho, S; Shin, E H; Kim, J; Ahn, S H; Chung, K; Kim, D-H; Han, Y; Choi, D H
2015-06-15
Purpose: To evaluate the shielding wall design for protecting patients, staff, and members of the general public from secondary neutrons, using a simple analytic solution together with the MCNPX, ANISN, and FLUKA transport codes. Methods: Analytical and Monte Carlo calculations were performed for the proton facility (Sumitomo Heavy Industries, Ltd.) at Samsung Medical Center in Korea. The NCRP-144 analytical evaluation methods, which produce conservative estimates of the dose-equivalent values for the shielding, were used for the analytical evaluations. The radiation transport was then simulated with the transport codes. The neutron dose at each evaluation point was obtained as the product of the simulated value and the neutron dose coefficients introduced in ICRP-74. Results: The evaluation points in the accelerator control room and at the control room entrance are mainly influenced by the location of the proton beam loss. The neutron dose equivalent at the accelerator control room evaluation point is 0.651, 1.530, 0.912, and 0.943 mSv/yr, and at the entrance of the cyclotron room 0.465, 0.790, 0.522, and 0.453 mSv/yr, as calculated by the NCRP-144 formalism, ANISN, FLUKA, and MCNPX, respectively. Most of the MCNPX and FLUKA results, which use the complicated geometry, were smaller than the ANISN results. Conclusion: The neutron shielding for a proton therapy facility has been evaluated by the analytic model and Monte Carlo methods. We confirmed that the shielding adequately protects the areas accessible to people when the proton facility is operated.
Neutrino transport in type II supernovae: Boltzmann solver vs. Monte Carlo method
NASA Astrophysics Data System (ADS)
Yamada, Shoichi; Janka, Hans-Thomas; Suzuki, Hideyuki
1999-04-01
We have coded a Boltzmann solver based on a finite difference scheme (S_N method) aiming at calculations of neutrino transport in type II supernovae. Close comparison between the Boltzmann solver and a Monte Carlo transport code has been made for realistic atmospheres of post bounce core models under the assumption of a static background. We have also investigated in detail the dependence of the results on the numbers of radial, angular, and energy grid points and the way to discretize the spatial advection term which is used in the Boltzmann solver. A general relativistic calculation has been done for one of the models. We find good overall agreement between the two methods. This gives credibility to both methods which are based on completely different formulations. In particular, the number and energy fluxes and the mean energies of the neutrinos show remarkably good agreement, because these quantities are determined in a region where the angular distribution of the neutrinos is nearly isotropic and they are essentially frozen in later on. On the other hand, because of a relatively small number of angular grid points (which is inevitable due to limitations of the computation time) the Boltzmann solver tends to slightly underestimate the flux factor and the Eddington factor outside the (mean) ``neutrinosphere'' where the angular distribution of the neutrinos becomes highly anisotropic. As a result, the neutrino number (and energy) density is somewhat overestimated in this region. This fact suggests that the Boltzmann solver should be applied to calculations of the neutrino heating in the hot-bubble region with some caution because there might be a tendency to overestimate the energy deposition rate in disadvantageous situations. A comparison shows that this trend is opposite to the results obtained with a multi-group flux-limited diffusion approximation of neutrino transport. Employing three different flux limiters, we find that all of them lead to a significant
Spray cooling simulation implementing time scale analysis and the Monte Carlo method
NASA Astrophysics Data System (ADS)
Kreitzer, Paul Joseph
Spray cooling research is advancing the field of heat transfer and heat rejection in high-power electronics. Smaller and more capable electronics packages are producing higher amounts of waste heat along with smaller external surface areas, and the use of active cooling is becoming a necessity. Spray cooling has shown extremely high levels of heat rejection, of up to 1000 W/cm² using water. Simulations of spray cooling are becoming more realistic, but this comes at a price. A previous researcher used CFD to successfully model a single 3D droplet impact into a liquid film using the level set method. However, the complicated multiphysics occurring during spray impingement and surface interactions increases computation time to more than 30 days. Parallel processing on a 32-processor system has reduced this time tremendously, but still requires more than a day. The present work uses experimental and computational results in addition to numerical correlations representing the physics occurring on a heated impingement surface. The current model represents the spray behavior of a Spraying Systems FullJet 1/8-g spray nozzle. Typical spray characteristics are as follows: flow rate of 1.05×10⁻⁵ m³/s, normal droplet velocity of 12 m/s, droplet Sauter mean diameter of 48 μm, and heat flux values ranging from approximately 50–100 W/cm². This produces non-dimensional numbers of: We 300–1350, Re 750–3500, Oh 0.01–0.025. Numerical and experimental correlations have been identified representing crater formation, splashing, film thickness, droplet size, and spatial flux distributions. A combination of these methods has resulted in a Monte Carlo spray impingement simulation model capable of simulating hundreds of thousands of droplet impingements, or approximately one millisecond. A random sequence of droplet impingement locations and diameters is generated, with the proper radial spatial distribution and diameter distribution. Hence the impingement, lifetime
A study of the XY model by the Monte Carlo method
NASA Technical Reports Server (NTRS)
Suranyi, Peter; Harten, Paul
1987-01-01
The massively parallel processor is used to perform Monte Carlo simulations for the two dimensional XY model on lattices of sizes up to 128 x 128. A parallel random number generator was constructed, finite size effects were studied, and run times were compared with those on a CRAY X-MP supercomputer.
Monte Carlo simulation of air sampling methods for the measurement of radon decay products.
Sima, Octavian; Luca, Aurelian; Sahagia, Maria
2017-02-21
A stochastic model of the processes involved in the measurement of the activity of the ²²²Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated using Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the ²²²Rn decay product concentrations in air are realistically evaluated.
A Monte Carlo Comparison of Parametric and Nonparametric Polytomous DIF Detection Methods.
ERIC Educational Resources Information Center
Bolt, Daniel M.
2002-01-01
Compared two parametric procedures for detecting differential item functioning (DIF) using the graded response model (GRM), the GRM-likelihood ratio test and the GRM-differential functioning of items and tests, with a nonparametric DIF detection procedure, Poly-SIBTEST. Monte Carlo simulation results show that Poly-SIBTEST showed the least amount…
NASA Astrophysics Data System (ADS)
Makarevich, K. O.; Minenko, V. F.; Verenich, K. A.; Kuten, S. A.
2016-05-01
This work is dedicated to modeling dental radiographic examinations to assess patients' absorbed organ doses and effective doses. For simulating the X-ray spectra, the TASMIP empirical model is used. Doses are assessed on the basis of the Monte Carlo method, using the MCNP code with the ICRP voxel phantoms. The results of the assessment of doses to individual organs and of effective doses for different types of dental examination and X-ray tube parameters are presented.
Xu, Feng; Davis, Anthony B; West, Robert A; Esposito, Larry W
2011-01-17
Building on the Markov chain formalism for scalar (intensity only) radiative transfer, this paper formulates the solution to polarized diffuse reflection from and transmission through a vertically inhomogeneous atmosphere. For verification, numerical results are compared to those obtained by the Monte Carlo method, showing deviations less than 1% when 90 streams are used to compute the radiation from two types of atmospheres, pure Rayleigh and Rayleigh plus aerosol, when they are divided into sublayers of optical thicknesses of less than 0.03.
Monte-Carlo methods for chemical-mechanical planarization on multiple-layer and dual-material models
NASA Astrophysics Data System (ADS)
Chen, Yu; Kahng, Andrew B.; Robins, Gabriel; Zelikovsky, Alexander
2002-07-01
Chemical-mechanical planarization (CMP) and other manufacturing steps in very deep submicron VLSI have varying effects on device and interconnect features, depending on the local layout density. To improve manufacturability and performance predictability, we seek to make a layout uniform with respect to prescribed density criteria by inserting area fill geometries into the layout. We review previous research on single-layer fill for flat and hierarchical layout density control based on the Interlevel Dielectric CMP model. We also describe the recent combination of CMP physical modeling and linear programming for multiple-layer density control, as well as the Shallow Trench Isolation CMP model. Our work makes the following contributions for the Multiple-layer Interlevel Dielectric CMP model. First, we propose a new linear programming approach with a new objective for the multiple-layer fill problem. Second, we describe modified Monte-Carlo approaches for the multiple-layer fill problem. Comparisons with previous approaches show that the new linear programming method is more reasonable for manufacturability, and that the Monte-Carlo approach is efficient and yields more accurate results for large layouts. The CMP step in Shallow Trench Isolation (STI) is a dual-material polishing process, i.e., multiple materials are being polished simultaneously during the CMP process. Simple greedy methods were previously proposed for the non-linear problem with Min-Var and Min-Fill objectives, where a certain amount of dummy features is always added at the position with the smallest density. In this paper, we propose more efficient Monte-Carlo methods for the Min-Var objective, as well as improved greedy and Monte-Carlo methods for the Min-Fill objective. Our experiments show that these methods obtain better solutions with respect to the objectives.
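The greedy Min-Var strategy mentioned above is easy to state in code: repeatedly drop one fill geometry into the window with the smallest current density. The densities below are illustrative percentages, and the Monte-Carlo variants discussed in the paper would instead pick the insertion window randomly, biased toward low-density windows:

```python
def greedy_min_var_fill(densities, budget, unit):
    """Greedy Min-Var fill insertion.

    densities: per-window layout densities (illustrative percentages).
    budget:    number of fill geometries to insert.
    unit:      density added to a window by one fill geometry.
    Each step inserts one geometry into the currently sparsest window,
    shrinking the density spread the Min-Var objective measures.
    """
    densities = list(densities)
    for _ in range(budget):
        i = min(range(len(densities)), key=densities.__getitem__)
        densities[i] += unit
    return densities

print(greedy_min_var_fill([10, 40, 25], 4, 5))  # → [30, 40, 25]
```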
NASA Astrophysics Data System (ADS)
Bogdanov, Yu I.
2007-12-01
A new method of statistical simulation of quantum systems is presented which is based on the generation of data by the Monte Carlo method and their purposeful tomography with energy minimisation. The numerical solution of the problem is based on the optimisation of a target functional providing a compromise between the maximisation of the statistical likelihood function and the energy minimisation. The method does not involve complicated and ill-posed multidimensional computational procedures and can be used to calculate the wave functions and energies of the ground and excited stationary states of complex quantum systems. Applications of the method are illustrated.
Kumar, Sudhir; Srinivasan, P; Sharma, S D; Saxena, Sanjay Kumar; Bakshi, A K; Dash, Ashutosh; Babu, D A R; Sharma, D N
2015-09-01
The Isotope Production and Application Division of Bhabha Atomic Research Center developed ³²P patch sources for the treatment of superficial tumors. The surface dose rate of a newly developed ³²P patch source of nominal diameter 25 mm was measured experimentally using a standard extrapolation ionization chamber and Gafchromic EBT film. A Monte Carlo model of the ³²P patch source along with the extrapolation chamber was also developed to estimate the surface dose rates from these sources. The surface dose rates to tissue (cGy/min) measured using the extrapolation chamber and radiochromic films are 82.03±4.18 (k=2) and 79.13±2.53 (k=2), respectively. These two values, obtained by two independent experimental methods, agree with each other within a variation of 3.5%. The surface dose rate to tissue (cGy/min) estimated using the MCNP Monte Carlo code is 77.78±1.16 (k=2). The maximum deviation between the surface dose rates to tissue obtained by the Monte Carlo and extrapolation chamber methods is 5.2%, whereas the difference between the surface dose rates obtained by the radiochromic film measurement and the Monte Carlo simulation is 1.7%. The three values of the surface dose rate of the ³²P patch source obtained by the three independent methods agree with one another within the uncertainties associated with their measurement and calculation. This work has demonstrated that MCNP-based electron transport simulations are accurate enough for determining the dosimetry parameters of the indigenously developed ³²P patch sources for contact brachytherapy applications.
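The quoted cross-method deviations follow directly from the three surface dose rates; a quick arithmetic check:

```python
def pct_dev(a, b):
    """Percent deviation of b from a, relative to a."""
    return abs(a - b) / a * 100.0

# Surface dose rates to tissue in cGy/min, as reported in the text.
chamber, film, mcnp = 82.03, 79.13, 77.78
print(round(pct_dev(chamber, film), 1))  # → 3.5  (chamber vs film)
print(round(pct_dev(chamber, mcnp), 1))  # → 5.2  (chamber vs MCNP)
print(round(pct_dev(film, mcnp), 1))     # → 1.7  (film vs MCNP)
```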
Alrefae, T
2014-12-01
A simple method of efficiency calibration for gamma spectrometry was performed. This method, which focused on measuring the radioactivity of (137)Cs in food samples, was based on Monte Carlo simulations available in the free-of-charge toolkit GEANT4. Experimentally, the efficiency values of a high-purity germanium detector were calculated for three reference materials representing three different food items. These efficiency values were compared with their counterparts produced by a computer code that simulated experimental conditions. Interestingly, the output of the simulation code was in acceptable agreement with the experimental findings, thus validating the proposed method.
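The validation logic described above, comparing a simulated full-energy-peak efficiency against one derived from a calibrated source measurement, can be sketched in a few lines. The interaction probability, activity, gamma yield, and peak counts below are illustrative stand-ins, not values from the study:

```python
import random

# Toy Monte Carlo estimate of full-energy-peak efficiency: the fraction of
# emitted photons whose full energy is deposited in the detector.  The 0.12
# interaction probability is a made-up stand-in for the geometry and
# attenuation physics that a GEANT4 detector model would handle in detail.
def simulated_efficiency(n_emitted, p_full_deposit=0.12, seed=1):
    rng = random.Random(seed)
    n_peak = sum(1 for _ in range(n_emitted) if rng.random() < p_full_deposit)
    return n_peak / n_emitted

def experimental_efficiency(net_peak_counts, activity_bq, gamma_yield, live_time_s):
    # Standard definition: net counts in the peak divided by photons emitted.
    return net_peak_counts / (activity_bq * gamma_yield * live_time_s)

eff_sim = simulated_efficiency(100_000)
eff_exp = experimental_efficiency(net_peak_counts=10_404,
                                  activity_bq=1_000.0,
                                  gamma_yield=0.851,   # 137Cs, 661.7 keV line
                                  live_time_s=100.0)
rel_dev = abs(eff_sim - eff_exp) / eff_exp   # agreement metric
```

In the real workflow the simulated value would come from tracking photons through a full detector geometry rather than from a single fixed interaction probability.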
NASA Astrophysics Data System (ADS)
Brown, David F. R.; Gibbs, Mark N.; Clary, David C.
1996-11-01
We describe a new method to calculate the vibrational ground state properties of weakly bound molecular systems and apply it to (HF)2 and HF-HCl. A Bayesian Inference neural network is used to fit an analytic function to a set of ab initio data points, which may then be employed by the quantum diffusion Monte Carlo method to produce ground state vibrational wave functions and properties. The method is general and relatively simple to implement and will be attractive for calculations on systems for which no analytic potential energy surface exists.
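As an illustration of the quantum diffusion Monte Carlo step that the fitted surface feeds into, here is a minimal sketch for a 1D harmonic oscillator, with an analytic potential standing in for the neural-network fit; all parameters are illustrative choices, not the authors':

```python
import math
import random

# Minimal diffusion Monte Carlo for the 1D harmonic oscillator
# (V = x^2/2, hbar = m = omega = 1).  Walkers diffuse by Gaussian steps and
# are replicated or killed according to the local potential; the reference
# energy E_ref is steered to hold the population steady and converges to the
# ground-state energy (exact value: 0.5).
def dmc_ground_state(n_target=500, n_steps=2000, dt=0.01, seed=2):
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_target)]
    e_ref, e_sum, e_cnt = 0.5, 0.0, 0
    for step in range(n_steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))            # free diffusion
            w = math.exp(-dt * (0.5 * x * x - e_ref))     # branching weight
            for _ in range(int(w + rng.random())):        # stochastic rounding
                new.append(x)
        walkers = new or [0.0]
        e_ref += 0.1 * math.log(n_target / len(walkers))  # population control
        if step >= n_steps // 2:                          # average after burn-in
            e_sum, e_cnt = e_sum + e_ref, e_cnt + 1
    return e_sum / e_cnt

e0 = dmc_ground_state()   # should land close to the exact value 0.5
```

For a weakly bound dimer the same loop runs in the full set of intermolecular coordinates, with the fitted potential evaluated at each walker position.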
Current impulse response of thin InP p+-i-n+ diodes using full band structure Monte Carlo method
NASA Astrophysics Data System (ADS)
You, A. H.; Cheang, P. L.
2007-02-01
A random response time model to compute the statistics of the avalanche buildup time of double-carrier multiplication in avalanche photodiodes (APDs) using the full band structure Monte Carlo (FBMC) method is discussed. The effects of the feedback impact ionization process and of dead space on the random response time are included in order to simulate the speed of the APD. The time response of InP p+-i-n+ diodes with a multiplication region of 0.2 μm is presented. Finally, the FBMC model is used to calculate the current impulse response of thin InP p+-i-n+ diodes with multiplication lengths of 0.05 and 0.2 μm using Ramo's theorem [Proc. IRE 27, 584 (1939)]. The simulated current impulse response of the FBMC model is compared to results obtained with a simple Monte Carlo model.
Prytkova, Vera; Heyden, Matthias; Khago, Domarin; Freites, J Alfredo; Butts, Carter T; Martin, Rachel W; Tobias, Douglas J
2016-08-25
We present a novel multi-conformation Monte Carlo simulation method that enables the modeling of protein-protein interactions and aggregation in crowded protein solutions. This approach is relevant to a molecular-scale description of realistic biological environments, including the cytoplasm and the extracellular matrix, which are characterized by high concentrations of biomolecular solutes (e.g., 300-400 mg/mL for proteins and nucleic acids in the cytoplasm of Escherichia coli). Simulation of such environments necessitates the inclusion of a large number of protein molecules. Therefore, computationally inexpensive methods, such as rigid-body Brownian dynamics (BD) or Monte Carlo simulations, can be particularly useful. However, as we demonstrate herein, the rigid-body representation typically employed in simulations of many-protein systems gives rise to certain artifacts in protein-protein interactions. Our approach allows us to incorporate molecular flexibility in Monte Carlo simulations at low computational cost, thereby eliminating ambiguities arising from structure selection in rigid-body simulations. We benchmark and validate the methodology using simulations of hen egg white lysozyme in solution, a well-studied system for which extensive experimental data, including osmotic second virial coefficients, small-angle scattering structure factors, and multiple structures determined by X-ray and neutron crystallography and solution NMR, as well as rigid-body BD simulation results, are available for comparison.
Development of CT scanner models for patient organ dose calculations using Monte Carlo methods
NASA Astrophysics Data System (ADS)
Gu, Jianwei
There is a serious and growing concern about the CT dose delivered by diagnostic CT examinations and image-guided radiation therapy imaging procedures. To better understand and accurately quantify radiation dose due to CT imaging, Monte Carlo based CT scanner models are needed. This dissertation describes the development, validation, and application of detailed CT scanner models, including a GE LightSpeed 16 MDCT scanner and two image-guided radiation therapy (IGRT) cone beam CT (CBCT) scanners, kV CBCT and MV CBCT. The modeling process considered the energy spectrum, beam geometry and movement, and bowtie filter (BTF). A methodology for validating the scanner models against reported CTDI values was also developed and implemented. Finally, the organ doses to different patients undergoing CT scans were obtained by integrating the CT scanner models with anatomically realistic patient phantoms. The tube current modulation (TCM) technique was also investigated for dose reduction. It was found that for RPI-AM, the thyroid, kidneys, and thymus received the largest doses of 13.05, 11.41, and 11.56 mGy/100 mAs from the chest, abdomen-pelvis, and CAP scans, respectively, using 120 kVp protocols. For RPI-AF, the thymus, small intestine, and kidneys received the largest doses of 10.28, 12.08, and 11.35 mGy/100 mAs from the chest, abdomen-pelvis, and CAP scans, respectively, using 120 kVp protocols. The dose to the fetus of the 3-month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scans, respectively. For the chest scan of the 6-month and 9-month patient phantoms, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. With TCM schemas, the MDCT fetal dose can be reduced by 14%-25%. To demonstrate the applicability of the method proposed in this dissertation for modeling CT scanners, an additional MDCT scanner was modeled and validated using measured CTDI values. These results demonstrated that the
Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Sohn, Ilyoup
approximately 1% was achieved, with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield-radiation interactions, the direct simulation Monte Carlo (DSMC) and photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield-radiation coupling from transitional to peak-heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled, and good agreement of the stagnation-point convective heating between the DSMC and continuum fluid dynamics (CFD) calculations under the assumption of a fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC/FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that, except at the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both the heavy-particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC/FV-PMC coupled simulations are compared with previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rates in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects.
The QSS model is presented to predict the electronic state populations of radiating gas species taking
Stochastic method for accommodation of equilibrating basins in kinetic Monte Carlo simulations
Van Siclen, Clinton D
2007-02-01
A computationally simple way to accommodate "basins" of trapping states in standard kinetic Monte Carlo simulations is presented. By assuming the system is effectively equilibrated in the basin, the residence time (time spent in the basin before escape) and the probabilities for transition to states outside the basin may be calculated. This is demonstrated for point defect diffusion over a periodic grid of sites containing a complex basin.
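For context, the standard (rejection-free) kinetic Monte Carlo step that the basin treatment builds on can be sketched as follows; the escape rates are arbitrary illustrative values:

```python
import math
import random

# One rejection-free kinetic Monte Carlo step: choose a transition with
# probability proportional to its rate, then advance the clock by an
# exponentially distributed residence time with mean 1/(total rate).
# The basin method of the paper replaces the rates of intra-basin hops
# with effective escape probabilities and a single basin residence time.
def kmc_step(rates, rng):
    total = sum(rates)
    r = rng.random() * total
    acc, chosen = 0.0, len(rates) - 1
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total   # exponential residence time
    return chosen, dt

rng = random.Random(3)
rates = [2.0, 1.0, 1.0]          # three escape pathways from the current state
counts = [0, 0, 0]
t = 0.0
for _ in range(10_000):
    i, dt = kmc_step(rates, rng)
    counts[i] += 1
    t += dt
# Pathway 0 should be taken about half the time, and the mean time step
# should approach 1/(2+1+1) = 0.25.
```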
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela
2014-02-01
We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as a container for simulation data stored on the graphics card and as a floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets. Highlights: •We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet. •The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation. •Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. •The testbed involves a polymeric system of oligopyrroles in the condensed phase. •The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.
Single-cluster-update Monte Carlo method for the random anisotropy model
NASA Astrophysics Data System (ADS)
Rößler, U. K.
1999-06-01
A Wolff-type cluster Monte Carlo algorithm for random magnetic models is presented. The algorithm is demonstrated to significantly reduce the critical slowing down for planar random anisotropy models with weak anisotropy strength. Dynamic exponents z ≲ 1.0, typical of the best cluster algorithms, are estimated for models with a ratio of anisotropy to exchange constant D/J = 1.0 on cubic lattices in three dimensions. For these models, critical exponents are derived from a finite-size scaling analysis.
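For reference, the plain Wolff single-cluster update that the paper generalizes looks like this for the isotropic 2D Ising model; this is a simplification, since the random-anisotropy version uses spin reflections rather than simple flips, and all parameters here are illustrative:

```python
import math
import random

# Wolff single-cluster update for the isotropic 2D Ising model on an L x L
# periodic lattice.  Bonds to aligned neighbours join the cluster with
# probability 1 - exp(-2J/T), and the whole cluster is flipped at once,
# which suppresses critical slowing down relative to single-spin updates.
def wolff_update(spins, L, T, rng, J=1.0):
    p_add = 1.0 - math.exp(-2.0 * J / T)
    seed = rng.randrange(L * L)
    s0 = spins[seed]
    cluster = {seed}
    stack = [seed]
    while stack:
        i = stack.pop()
        x, y = i % L, i // L
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            j = (nx % L) + (ny % L) * L          # periodic boundaries
            if j not in cluster and spins[j] == s0 and rng.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:
        spins[i] = -spins[i]
    return len(cluster)

L, T = 16, 1.5                    # well below Tc ~ 2.269, so clusters are large
rng = random.Random(4)
spins = [1] * (L * L)
sizes = [wolff_update(spins, L, T, rng) for _ in range(200)]
m = abs(sum(spins)) / (L * L)     # magnetization stays high below Tc
```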
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices of Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
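One of the random-sampling fundamentals such notes typically cover, drawing the distance to the next collision by inverse-transform sampling, takes only a few lines; the cross-section value below is an arbitrary illustration:

```python
import math
import random

# Inverse-transform sampling of the free-flight distance in a uniform medium:
# the path length s is distributed as p(s) = Sigma_t * exp(-Sigma_t * s), so
# s = -ln(1 - xi) / Sigma_t for a uniform random number xi in [0, 1).
def sample_path_length(sigma_t, rng):
    xi = rng.random()
    return -math.log(1.0 - xi) / sigma_t   # inverse CDF of the exponential

rng = random.Random(5)
sigma_t = 0.5                     # macroscopic total cross section, 1/cm
samples = [sample_path_length(sigma_t, rng) for _ in range(100_000)]
mean_free_path = sum(samples) / len(samples)   # should approach 1/sigma_t = 2
```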
A general method to derive tissue parameters for Monte Carlo dose calculation with multi-energy CT.
Lalonde, Arthur; Bouchard, Hugo
2016-11-21
To develop a general method for human tissue characterization with dual- and multi-energy CT, and to evaluate its performance in determining elemental compositions and quantities relevant to radiotherapy Monte Carlo dose calculation. Ideal materials to describe human tissue are obtained by applying principal component analysis to elemental weight and density data available in the literature. The theory is adapted to elemental composition for solving tissue information from CT data. A novel stoichiometric calibration method is integrated into the technique to make it suitable for a clinical environment. The performance of the method is compared with two techniques known in the literature using theoretical CT data. In determining elemental weights with dual-energy CT, the method is shown to be systematically superior to the water-lipid-protein material decomposition and comparable to the parameterization technique. In determining proton stopping powers and energy absorption coefficients with dual-energy CT, the method generally shows better accuracy and unbiased results. The generality of the method is demonstrated by simulating multi-energy CT data to show the potential to extract more information with multiple energies. The method proposed in this paper performs well in determining elemental compositions and physical quantities relevant to radiotherapy dose calculation from dual-energy CT data. It is particularly suitable for Monte Carlo calculations and shows promise for using more than two energies to characterize human tissue with CT.
A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison
NASA Astrophysics Data System (ADS)
Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.
2014-06-01
Physical analyses of the LWR potential performance with regard to fuel utilization require a substantial effort dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. As a consequence, a 3D pin-by-pin model with a large number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile (depleted uranium) zones. Furthermore, a tight-pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. Under these conditions two main subjects are discussed: the Monte Carlo variance calculation and the assessment of the two-energy-group diffusion operator for the core calculation.
NASA Astrophysics Data System (ADS)
Jin, Shengye; Tamura, Masayuki
2013-10-01
The Monte Carlo Ray Tracing (MCRT) method is a versatile tool for simulating the radiative transfer regime of the Solar-Atmosphere-Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, such as a forest area. Owing to its robustness to changes in the complexity of the 3-D scene, the MCRT method is also employed for simulating the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling within vegetation, one basic step is setting up the canopy scene. 3-D scanning can be used to represent canopy structure as accurately as possible, but it is time consuming. A botanical growth function can model single-tree growth, but cannot express the interactions among trees. The L-System is also a functionally controlled tree-growth simulation model, but it requires large computing memory; additionally, it only models the current tree pattern rather than tree growth while we simulate the radiative transfer regime. Therefore, it is much more practical to use regular solids such as ellipsoids, cones, and cylinders to represent single canopies. Considering the allelopathy phenomenon observed in some open-forest optical images, each tree repels other trees within its own `domain'. Based on this assumption, a stochastic circle packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the number of trees (N) in the 3-D scene are declared first, similar to a random open-forest image. Accordingly, we randomly generate each canopy radius (rc). Then we set the circle central coordinates on the XY-plane while keeping the circles separate from each other using the circle packing algorithm. To model the individual trees, we employ Ishikawa's tree growth regression model to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
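A minimal version of the scene-generation step described above can be sketched with rejection sampling; the number of trees, domain size, and radius range are illustrative values, not the paper's:

```python
import math
import random

# Stochastic circle packing: draw a crown radius at random, then propose
# random centres on the XY-plane until the new circle overlaps no previously
# placed circle.  This reproduces the "each tree repels its neighbours"
# assumption with simple rejection sampling.
def pack_circles(n_trees, side, r_min, r_max, seed=6, max_tries=10_000):
    rng = random.Random(seed)
    circles = []                                   # list of (x, y, r)
    for _ in range(n_trees):
        r = rng.uniform(r_min, r_max)
        for _ in range(max_tries):
            x = rng.uniform(r, side - r)           # keep crown inside the plot
            y = rng.uniform(r, side - r)
            if all(math.hypot(x - cx, y - cy) >= r + cr
                   for cx, cy, cr in circles):
                circles.append((x, y, r))
                break
        else:
            break                                  # scene too crowded; stop early
    coverage = sum(math.pi * r * r for _, _, r in circles) / side ** 2
    return circles, coverage

circles, coverage = pack_circles(n_trees=30, side=100.0, r_min=2.0, r_max=5.0)
```

Each placed circle would then seed one solid (ellipsoid, cone, or cylinder) crown, with tree height and DBH assigned from the growth regression.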
Mizutani, S; Takada, Y; Kohno, R; Hotta, K; Akimoto, T
2015-06-15
Purpose: A simplified Monte Carlo (SMC) method has been developed to obtain fast and accurate dose calculation in heterogeneous tissues. We have applied the SMC method to the calculation of dose kernels for pencil beam scanning. While the SMC method tracks individual primary protons in the medium, it simplifies the dose calculation by using a measured depth-dose distribution instead of considering nuclear reactions and tracking secondary particles. To verify the accuracy of the SMC calculation, we compared a dose-calculation result using the SMC method with one using the full Monte Carlo (FMC) method in an inhomogeneous phantom. Methods: As a model of the inhomogeneous media, we considered a phantom composed of a rectangular acrylic block of size 150 × 300 × 200 mm³ immersed in water within a virtual container of size 300 × 300 × 400 mm³. Results: We found excellent agreement of the overall dose distributions between the SMC and FMC methods. As for the laterally integrated depth-dose distributions, a slight difference was found in front of the second Bragg peak between the two algorithms. While the FMC method took 25717.0 s on a 2.7 GHz Intel Core i7 to complete the calculation, the SMC method took 15.4 s, approximately 1670 times faster. Conclusion: The dose distribution obtained by the SMC method agreed well with that obtained by the FMC method in a simple inhomogeneous phantom, while the calculation time was reduced by three orders of magnitude.
Geochemical Characterization Using Geophysical Data and Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Chen, J.; Hubbard, S.; Rubin, Y.; Murray, C.; Roden, E.; Majer, E.
2002-12-01
if they were available from direct measurements, or as variables otherwise. To estimate the geochemical parameters, we first assigned a prior model for each variable and a likelihood model for each type of data, which together define posterior probability distributions for each variable on the domain. Since the posterior probability distribution may involve hundreds of variables, we used a Markov chain Monte Carlo (MCMC) method to explore each variable by generating and subsequently evaluating hundreds of realizations. Results from this case study showed that although geophysical attributes are not necessarily directly related to geochemical parameters, geophysical data can be very useful for providing accurate and high-resolution information about geochemical parameter distributions through their joint and indirect connections with hydrogeological properties such as lithofacies. This case study also demonstrated that MCMC methods are particularly useful for geochemical parameter estimation using geophysical data because they allow the incorporation of spatial correlation information, measurement errors, and cross correlations among different types of parameters into the procedure.
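In its simplest form, the MCMC machinery referred to here reduces to the Metropolis random-walk sampler below; the one-dimensional standard-normal log-posterior is a stand-in for the high-dimensional geochemical posterior built from priors and likelihoods:

```python
import math
import random

# Random-walk Metropolis sampler: propose a Gaussian step and accept it with
# probability min(1, exp(log_post(x') - log_post(x))).  The chain's samples
# then approximate the posterior distribution.
def metropolis(log_post, x0, n_samples, step, seed=7):
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    out = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(1.0 - rng.random()) < lpp - lp:   # accept/reject
            x, lp = xp, lpp
        out.append(x)
    return out

# Standard-normal log-posterior (up to a constant) as a toy target.
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0, n_samples=50_000, step=1.0)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
# The chain mean and variance should approach 0 and 1, respectively.
```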
1993-09-01
This study refines risk analysis procedures for trichloroethylene (TCE) using a physiologically based pharmacokinetic (PBPK) model in conjunction...promulgate, and better present, more realistic standards.... Risk analysis, physiologically based pharmacokinetics, PBPK, trichloroethylene, Monte Carlo method.
Calculation of images from an anthropomorphic chest phantom using Monte Carlo methods
NASA Astrophysics Data System (ADS)
Ullman, Gustaf; Malusek, Alexandr; Sandborg, Michael; Dance, David R.; Alm Carlsson, Gudrun
2006-03-01
Monte Carlo (MC) computer simulation of chest x-ray imaging systems has hitherto been performed using anthropomorphic phantoms with too-large (3 mm) voxel sizes. The aim of this work was to develop and use a Monte Carlo computer program to compute projection x-ray images of a high-resolution anthropomorphic voxel phantom for visual clinical image quality evaluation and dose optimization. An Alderson anthropomorphic chest phantom was imaged in a CT scanner and reconstructed with isotropic voxels of 0.7 mm. The phantom was segmented and included in a Monte Carlo computer program using the collision density estimator to derive the energies imparted to the detector per unit area of each pixel by scattered photons. The image due to primary photons was calculated analytically, including a pre-calculated detector response function. Attenuation and scatter of x-rays in the phantom, grid, and image detector were considered. Imaging conditions (tube voltage, anti-scatter device) were varied and the images compared to a real computed radiography (Fuji FCR 9501) image. Four imaging systems were simulated (two tube voltages, 81 kV and 141 kV, using either a grid with ratio 10 or a 30 cm air gap). The effect of scattered radiation on the visibility of thoracic vertebrae against the heart and lungs is demonstrated. The simplicity of changing the imaging conditions will allow us to produce images not only of existing imaging systems, but also of hypothetical, future imaging systems. We conclude that the calculated images of the high-resolution voxel phantom are suitable for human detection experiments of low-contrast lesions.
Mizutani, Shohei; Takada, Yoshihisa; Kohno, Ryosuke; Hotta, Kenji; Tansho, Ryohei; Akimoto, Tetsuo
2016-03-01
Full Monte Carlo (FMC) calculation of dose distributions has been recognized to have superior accuracy compared with the pencil beam algorithm (PBA). However, since FMC methods require long calculation times, it is difficult to apply them to routine treatment planning at present. To improve the situation, a simplified Monte Carlo (SMC) method has been introduced for the dose kernel calculation applicable to the dose optimization procedure for proton pencil beam scanning. We have evaluated the accuracy of the SMC calculation by comparing the result of a dose kernel calculation using the SMC method with one using the FMC method in an inhomogeneous phantom. The dose distribution obtained by the SMC method was in good agreement with that obtained by the FMC method. To assess the usefulness of the SMC calculation in clinical situations, we have compared results of dose calculations using the SMC method with those using the PBA method for three clinical cases of tumor treatment. The dose distributions calculated with the PBA dose kernels appear homogeneous in the planning target volumes (PTVs). In practice, however, the dose distributions calculated with the SMC dose kernels using spot weights optimized with the PBA method are largely inhomogeneous in the PTVs, whereas those with spot weights optimized with the SMC method are moderately homogeneous in the PTVs. Calculation using the SMC method is faster than that using GEANT4 by three orders of magnitude. In addition, a graphics processing unit (GPU) boosts the calculation speed by a factor of 13 for treatment planning using the SMC method. Hence, the SMC method will be applicable to routine clinical treatment planning, reproducing complex dose distributions more accurately than the PBA method in a reasonably short time using the GPU-based calculation engine. PACS number(s): 87.55.Gh.
Zhaoyuan Liu; Kord Smith; Benoit Forget; Javier Ortensi
2016-05-01
A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only recently published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of an infinite medium of hydrogen. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
NASA Astrophysics Data System (ADS)
Sadovich, Sergey; Talamo, A.; Burnos, V.; Kiyavitskaya, H.; Fokov, Yu.
2014-06-01
In subcritical systems driven by an external neutron source, experimental methods based on a pulsed neutron source (PNS) and statistical techniques play an important role in reactivity measurement. Simulation of these methods is a very time-consuming procedure, and several improvements to the neutronic calculations have been made for simulations in Monte Carlo programs. This paper introduces a new method for simulating PNS and statistical measurements. In this method, all events occurring in the detector during the simulation are stored in a file using the PTRAC feature of MCNP. Afterwards, with a special post-processing code, the PNS and statistical methods can be simulated. Additionally, different neutron pulse shapes and lengths, as well as detector dead time, can be included in the simulation. The methods described above were tested on the subcritical assembly Yalina-Thermal, located at the Joint Institute for Power and Nuclear Research SOSNY, Minsk, Belarus. Good agreement between experimental and simulated results was shown.
Prediction of rocket plume radiative heating using backward Monte-Carlo method
NASA Technical Reports Server (NTRS)
Wang, K. C.
1993-01-01
A backward Monte-Carlo plume radiation code has been developed to predict rocket plume radiative heating to the rocket base region. This paper provides a description of this code and provides sample results. The code was used to predict radiative heating to various locations during test firings of 48-inch solid rocket motors at NASA Marshall Space Flight Center. Comparisons with test measurements are provided. Predictions of full scale sea level Redesigned Solid Rocket Motor (RSRM) and Advanced Solid Rocket Motor (ASRM) plume radiative heating to the Space Shuttle external tank (ET) dome center were also made. A comparison with the Development Flight Instrumentation (DFI) measurements is also provided.
Two active-electron classical trajectory Monte Carlo methods for ion-He collisions
Guzman, F.; Errea, L. F.; Pons, B.
2009-10-15
We introduce two active-electron classical trajectory Monte Carlo models for ion-He collisions, in which the electron-electron force is smoothed using a Gaussian kernel approximation for the pointwise classical particles. A first model uses independent pairs of Gaussian electrons, while a second one employs time-dependent mean-field theory to define an averaged electron-electron repulsion force. These models are implemented for prototypical p+He collisions and the results are compared to available experimental and theoretical data.
NASA Astrophysics Data System (ADS)
Nourazar, S. S.; Jahangiri, P.; Aboutalebi, A.; Ganjaei, A. A.; Nourazar, M.; Khadem, J.
2011-06-01
The effect of the new terms in the improved algorithm, the modified direct simulation Monte Carlo (MDSMC) method, is investigated by simulating a rarefied binary gas mixture flow inside a rotating cylinder. Dalton's law for the partial pressures contributed by each species of the binary gas mixture is incorporated into our simulations using the MDSMC method and the direct simulation Monte Carlo (DSMC) method. Moreover, the effect of the exponent of the cosine of the deflection angle (α) in the inter-molecular collision models, the variable soft sphere (VSS) and the variable hard sphere (VHS), is investigated in our simulations. The results are markedly improved using the MDSMC method compared with the DSMC method. The results using the VSS model show some improvement over those using the VHS model for the mixture temperature at radial distances close to the cylinder wall, where the temperature reaches its maximum value.
Trubey, D.K.; McGill, B.L.
1980-08-01
This report consists of 24 papers which were presented at the seminar on Theory and Application of Monte Carlo Methods, held in Oak Ridge on April 21-23, plus a summary of the three-man panel discussion which concluded the seminar and two papers which were not given orally. These papers constitute a current statement of the state of the art of the theory and application of Monte Carlo methods for radiation transport problems in shielding and reactor physics.
Hart, Vern P; Doyle, Timothy E
2013-09-01
A Monte Carlo method was derived from the optical scattering properties of spheroidal particles and used for modeling diffuse photon migration in biological tissue. The spheroidal scattering solution used a separation of variables approach and numerical calculation of the light intensity as a function of the scattering angle. A Monte Carlo algorithm was then developed which utilized the scattering solution to determine successive photon trajectories in a three-dimensional simulation of optical diffusion and resultant scattering intensities in virtual tissue. Monte Carlo simulations using isotropic randomization, Henyey-Greenstein phase functions, and spherical Mie scattering were additionally developed and used for comparison to the spheroidal method. Intensity profiles extracted from diffusion simulations showed that the four models differed significantly. The depth of scattering extinction varied widely among the four models, with the isotropic, spherical, spheroidal, and phase function models displaying total extinction at depths of 3.62, 2.83, 3.28, and 1.95 cm, respectively. The results suggest that advanced scattering simulations could be used as a diagnostic tool by distinguishing specific cellular structures in the diffused signal. For example, simulations could be used to detect large concentrations of deformed cell nuclei indicative of early stage cancer. The presented technique is proposed to be a more physical description of photon migration than existing phase function methods. This is attributed to the spheroidal structure of highly scattering mitochondria and elongation of the cell nucleus, which occurs in the initial phases of certain cancers. The potential applications of the model and its importance to diffusive imaging techniques are discussed.
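For comparison, the Henyey-Greenstein phase function used as one of the benchmark models above admits a closed-form inversion for sampling the scattering-angle cosine. A minimal sketch of that standard sampling step (not the authors' spheroidal solution) is:

```python
import random

def sample_hg_cos_theta(g, rng=random):
    """Sample a scattering-angle cosine from the Henyey-Greenstein
    phase function with anisotropy factor g; g = 0 falls back to
    isotropic scattering, and E[cos(theta)] = g."""
    xi = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)
```

In a photon-migration loop, each scattering event draws one such cosine (plus a uniform azimuthal angle) to rotate the photon's direction; soft tissue is typically modeled as strongly forward-scattering, with g around 0.8-0.95.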
Davis, Mitchell A.; Dunn, Andrew K.
2015-01-01
Few methods exist that can accurately handle dynamic light scattering in the regime between single and highly multiple scattering. We demonstrate dynamic light scattering Monte Carlo (DLS-MC), a numerical method by which the electric field autocorrelation function may be calculated for arbitrary geometries if the optical properties and particle motion are known or assumed. DLS-MC requires no assumptions regarding the number of scattering events, the final form of the autocorrelation function, or the degree of correlation between scattering events. Furthermore, the method is capable of rapidly determining the effect of particle motion changes on the autocorrelation function in heterogeneous samples. We experimentally validated the method and demonstrated that the simulations match both the expected form and the experimental results. We also demonstrate the perturbation capabilities of the method by calculating the autocorrelation function of flow in a representation of mouse microvasculature and determining the sensitivity to flow changes as a function of depth. PMID:26191723
NASA Astrophysics Data System (ADS)
Rodionov, V. A.; Zhuravlev, V. A.
2017-01-01
In this work, simulations of the magnetic properties of nano-sized manganese ferrite particles with zinc substitution were performed. The degree of substitution ranged from 0% to 80%. The particle parameters, including the exchange integrals, were taken from experimental data for MnxZn1-xFe2O4. The particle sizes and the thickness of the defective surface layer were chosen to reflect the real size distribution of manganese nanoparticles obtained by mechanochemical synthesis. The simulations were performed using the Monte Carlo method with the Metropolis algorithm.
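As a generic illustration of the Metropolis algorithm underlying such spin simulations, here is a minimal 2D Ising ferromagnet in Python. It is a sketch of the method only, not the ferrite model with experimentally derived exchange integrals; all parameters are illustrative:

```python
import math
import random

def metropolis_ising(L=16, T=1.5, sweeps=200, seed=2):
    """Metropolis sampling of a 2D Ising ferromagnet (J = 1, periodic
    boundaries): flip one spin at a time, accepting energy-raising
    moves with probability exp(-dE / T). Returns the final
    magnetization per spin."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]          # ordered start
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
              + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
        dE = 2 * spin[i][j] * nb
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spin[i][j] = -spin[i][j]
    return sum(map(sum, spin)) / (L * L)
```

Below the critical temperature (about 2.27 J for this model) the chain stays close to the ordered state, so the magnetization per spin remains near 1; a real ferrite study replaces the uniform coupling with site-dependent exchange integrals and anisotropy terms.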
Calculation of angular distribution of 662 keV gamma rays by Monte Carlo method in copper medium.
Kahraman, A; Ozmutlu, E N; Gurler, O; Yalcin, S; Kaynak, G; Gundogdu, O
2009-12-01
This paper presents results on the angular distribution of Compton scattering of 662 keV gamma photons in both the forward and backward hemispheres in a copper medium. Curves of the number of scattered events were determined for gamma photons scattered into both hemispheres, and theoretical saturation thicknesses were obtained from these results. Furthermore, the response function of a 51 x 51 mm NaI(Tl) detector, positioned at 60 degrees to the incoming photons scattered from a 10 mm thick copper layer, was determined using the Monte Carlo method.
NASA Astrophysics Data System (ADS)
Khisamutdinov, A. I.; Velker, N. N.
2014-05-01
The talk examines a system of pairwise interacting particles, which models a rarefied gas in accordance with the nonlinear Boltzmann equation, the master equations of the Markov evolution of this system, and the corresponding numerical Monte Carlo methods. The selection of an optimal method for simulating rarefied gas dynamics depends on the spatial size of the gas flow domain. For problems with a Knudsen number Kn of order unity, "imitation", or "continuous time", Monte Carlo methods ([2]) are quite adequate and competitive. However, if Kn <= 0.1 (large sizes), their excessive detail, namely the need to track all pairs of particles, leads to a significant increase in computational cost (complexity). We are interested in constructing optimal methods for Boltzmann equation problems with sufficiently large spatial flow sizes. By optimal we mean algorithms for parallel computation to be implemented on high-performance multi-processor computers. The characteristic property of large systems is the weak dependence of their sub-parts on each other over sufficiently small time intervals. This property is exploited by approximate methods that use various splittings of the operator of the corresponding master equations. In this paper, we develop an approximate method based on splitting the operator of the master equation system "over groups of particles" ([7]). The essence of the method is that the system of particles is divided into spatial sub-parts which are modeled independently over small time intervals using the precise "imitation" method. This type of splitting differs from the well-known splitting "over collisions and displacements", which is an attribute of the Direct Simulation Monte Carlo methods; a second attribute of the latter is the grid of "interaction cells", which is completely absent in imitation methods. The main subject of the talk is the parallelization of the imitation algorithms.
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.
1990-01-01
Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement, but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. A future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of dose, such as those from the secondaries and heavy-particle recoils, is obtained between BRYNTRN and Monte Carlo results.
Lin, Uei-Tyng; Chu, Chien-Hau
2006-05-01
Monte Carlo method was used to simulate the correction factors for electron loss and scattered photons for two improved cylindrical free-air ionization chambers (FACs) constructed at the Institute of Nuclear Energy Research (INER, Taiwan). The method is based on weighting correction factors for mono-energetic photons with X-ray spectra. The newly obtained correction factors for the medium-energy free-air chamber were compared with the current values, which were based on a least-squares fit to experimental data published in the NBS Handbook 64 [Wyckoff, H.O., Attix, F.H., 1969. Design of free-air ionization chambers. National Bureau Standards Handbook, No. 64. US Government Printing Office, Washington, DC, pp. 1-16; Chen, W.L., Su, S.H., Su, L.L., Hwang, W.S., 1999. Improved free-air ionization chamber for the measurement of X-rays. Metrologia 36, 19-24]. The comparison results showed the agreement between the Monte Carlo method and experimental data is within 0.22%. In addition, mono-energetic correction factors for the low-energy free-air chamber were calculated. Average correction factors were then derived for measured and theoretical X-ray spectra at 30-50 kVp. Although the measured and calculated spectra differ slightly, the resulting differences in the derived correction factors are less than 0.02%.
Lele, Subhash R; Dennis, Brian; Lutscher, Frithjof
2007-07-01
We introduce a new statistical computing method, called data cloning, to calculate maximum likelihood estimates and their standard errors for complex ecological models. Although the method uses the Bayesian framework and exploits the computational simplicity of the Markov chain Monte Carlo (MCMC) algorithms, it provides valid frequentist inferences such as the maximum likelihood estimates and their standard errors. The inferences are completely invariant to the choice of the prior distributions and therefore avoid the inherent subjectivity of the Bayesian approach. The data cloning method is easily implemented using standard MCMC software. Data cloning is particularly useful for analysing ecological situations in which hierarchical statistical models, such as state-space models and mixed effects models, are appropriate. We illustrate the method by fitting two nonlinear population dynamics models to data in the presence of process and observation noise.
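A minimal sketch of the data-cloning idea for a toy problem, a normal model with known variance, using a hypothetical hand-rolled random-walk Metropolis sampler (in practice one would use standard MCMC software, as the abstract notes):

```python
import math
import random

def data_cloning_mle(y, sigma=1.0, K=20, iters=20_000, seed=3):
    """Data-cloning sketch: run Metropolis on the likelihood of K
    copies of the data with a flat prior. The posterior mean then
    approximates the MLE, and K * posterior variance approximates
    the MLE's asymptotic variance."""
    rng = random.Random(seed)

    def loglik(mu):                      # log-likelihood of K clones
        return -K * sum((yi - mu) ** 2 for yi in y) / (2.0 * sigma ** 2)

    mu, samples = 0.0, []
    for _ in range(iters):
        prop = mu + 0.2 * rng.gauss(0.0, 1.0)
        if math.log(rng.random()) < loglik(prop) - loglik(mu):
            mu = prop
        samples.append(mu)
    keep = samples[iters // 2:]          # discard burn-in
    mean = sum(keep) / len(keep)
    var = sum((s - mean) ** 2 for s in keep) / len(keep)
    return mean, K * var                 # (approx. MLE, approx. asymptotic var)
```

For this model the exact MLE is the sample mean and its asymptotic variance is sigma^2 / n, so the output can be checked directly; the key point of data cloning is that the same recipe applies to hierarchical models where no closed-form MLE exists.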
Curis, Emmanuel; Bénazeth, Simone
2005-05-01
An important step in X-ray absorption spectroscopy (XAS) analysis is the fitting of a model to the experimental spectra, with a view to obtaining structural parameters. It is important to estimate the errors on these parameters, and three methods are used for this purpose. This article presents the conditions for applying these methods. It is shown that the usual equation Σ = 2H⁻¹ is not applicable for fitting in R space or on filtered XAS data; a formula is established to treat these cases, and the equivalence between the usual formula and the brute-force method is evidenced. Lastly, the problem of the nonlinearity of the XAS models and a comparison with Monte Carlo methods are addressed.
NASA Astrophysics Data System (ADS)
Wang, Dong; Tse, Peter W.
2015-05-01
Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Secondly, the state space model of the proposed health index is constructed. A general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
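The paper's "general sequential Monte Carlo method" is more elaborate, but its core, the bootstrap particle filter, can be sketched for a toy random-walk degradation index. All model parameters below are hypothetical, not the pump data of the study:

```python
import math
import random

def bootstrap_filter(ys, n_particles=500, q=0.3, r=0.5, seed=4):
    """Bootstrap particle filter for the random-walk state-space model
    x_t = x_{t-1} + N(0, q^2), y_t = x_t + N(0, r^2): propagate the
    particles, weight them by the Gaussian likelihood of y_t, record
    the weighted mean, then resample."""
    rng = random.Random(seed)
    parts = [0.0] * n_particles
    means = []
    for y in ys:
        parts = [p + rng.gauss(0.0, q) for p in parts]            # propagate
        w = [math.exp(-((y - p) ** 2) / (2.0 * r * r)) for p in parts]
        tot = sum(w)
        w = [wi / tot for wi in w]                                # normalize
        means.append(sum(p * wi for p, wi in zip(parts, w)))      # filtered mean
        parts = rng.choices(parts, weights=w, k=n_particles)      # resample
    return means
```

In a prognostic setting the filtered state would then be extrapolated forward under the fitted dynamics until it crosses the alert threshold, giving the remaining-useful-life estimate.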
Modeling of radiation-induced bystander effect using Monte Carlo methods
NASA Astrophysics Data System (ADS)
Xia, Junchao; Liu, Liteng; Xue, Jianming; Wang, Yugang; Wu, Lijun
2009-03-01
Experiments have shown that the radiation-induced bystander effect exists in cells, tissues, and even whole biological organisms irradiated with energetic ions or X-rays. In this paper, a Monte Carlo model is developed to study the mechanisms of the bystander effect under sparsely populated cell conditions. The model, based on our previous experiment in which the cells were sparsely located in a round dish, focuses mainly on the spatial characteristics. The simulation results agree well with the experimental data. Moreover, another bystander-effect experiment was also computed with this model, and the model succeeded in predicting its results. The comparison of the simulations with the experimental results indicates the feasibility of the model and the validity of some of the vital mechanisms assumed.
A Markov-Chain Monte-Carlo Based Method for Flaw Detection in Beams
Glaser, R E; Lee, C L; Nitao, J J; Hickling, T L; Hanley, W G
2006-09-28
A Bayesian inference methodology using a Markov Chain Monte Carlo (MCMC) sampling procedure is presented for estimating the parameters of computational structural models. This methodology combines prior information, measured data, and forward models to produce a posterior distribution for the system parameters of structural models that is most consistent with all available data. The MCMC procedure is based upon a Metropolis-Hastings algorithm that is shown to function effectively with noisy data, incomplete data sets, and mismatched computational nodes/measurement points. A series of numerical test cases based upon a cantilever beam is presented. The results demonstrate that the algorithm is able to estimate model parameters utilizing experimental data for the nodal displacements resulting from specified forces.
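A minimal sketch of the Metropolis-Hastings machinery described above, applied to a toy problem of inferring a cantilever stiffness from noisy tip deflections. The forward model, prior, and all numbers here are illustrative stand-ins, not the report's structural models:

```python
import math
import random

def posterior_mean_E(measured, F=100.0, L=1.0, I=1e-8, sigma=5e-4,
                     iters=30_000, seed=5):
    """Random-walk Metropolis-Hastings sketch: infer Young's modulus E
    from noisy cantilever tip deflections, using the forward model
    delta = F * L**3 / (3 * E * I) and a flat prior on E > 0."""
    rng = random.Random(seed)

    def logpost(E):
        if E <= 0.0:
            return float("-inf")
        delta = F * L ** 3 / (3.0 * E * I)       # forward model
        return -sum((d - delta) ** 2 for d in measured) / (2.0 * sigma ** 2)

    E, total, kept = 1.5e11, 0.0, 0
    for it in range(iters):
        prop = E + rng.gauss(0.0, 5e9)           # random-walk proposal
        if math.log(rng.random()) < logpost(prop) - logpost(E):
            E = prop
        if it >= iters // 2:                     # discard burn-in
            total += E
            kept += 1
    return total / kept
```

The same loop works unchanged when the forward model is a finite-element code rather than a closed-form beam formula, which is the setting of the report; only `logpost` changes.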
CORAL software: prediction of carcinogenicity of drugs by means of the Monte Carlo method.
Toropova, Alla P; Toropov, Andrey A
2014-02-14
The methodology of building up and validating models for the carcinogenic potential of drugs by means of the CORAL software is described. The QSAR analysis with the CORAL software includes three phases: (i) definition of preferable parameters for the optimization procedure that gives the maximal correlation coefficient between the endpoint and an optimal descriptor calculated with so-called correlation weights of various molecular features; (ii) detection of molecular features with stable positive correlation weights or, vice versa, stable negative correlation weights (molecular features characterized by solely positive or solely negative correlation weights over several starts of the Monte Carlo optimization are a basis for mechanistic interpretations of the model); and (iii) building up a model that is satisfactory from the point of view of reliable probabilistic criteria and OECD principles. The methodology is demonstrated for the case of carcinogenicity of a large set (n = 1464) of organic compounds which are potential or actual pharmaceutical agents.
Torsional path integral Monte Carlo method for the quantum simulation of large molecules
NASA Astrophysics Data System (ADS)
Miller, Thomas F.; Clary, David C.
2002-05-01
A molecular application is introduced for calculating quantum statistical mechanical expectation values of large molecules at nonzero temperatures. The Torsional Path Integral Monte Carlo (TPIMC) technique applies an uncoupled winding number formalism to the torsional degrees of freedom in molecular systems. The internal energies of ethane, n-butane, n-octane, and enkephalin are calculated at standard temperature using the TPIMC technique and compared to the expectation values obtained using the harmonic oscillator approximation and a variational technique. All of the studied molecules exhibit significant quantum mechanical contributions to their internal energy expectation values according to the TPIMC technique. The harmonic oscillator approach to calculating the internal energy performs well for the molecules presented in this study but is limited by its neglect of both anharmonicity effects and the potential coupling of intramolecular torsions.
McGraw, David; Hershey, Ronald L.
2016-06-01
Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry, along with the amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in the input affected the variability of the outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little
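The parametric Monte Carlo loop described above (assign a distribution to each uncertain input, sample from it, and evaluate the ensemble output) can be sketched generically. The model and input names below are hypothetical stand-ins, not NETPATH quantities:

```python
import random
import statistics

def propagate(model, inputs, n=20_000, seed=5):
    """Parametric Monte Carlo uncertainty propagation: draw each
    uncertain input from a normal distribution defined by a
    (mean, coefficient-of-variation) pair, evaluate the model on the
    ensemble, and summarize the spread of the output."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n):
        draw = {k: rng.gauss(m, abs(m) * cv) for k, (m, cv) in inputs.items()}
        outs.append(model(draw))
    return statistics.mean(outs), statistics.stdev(outs)

# Hypothetical example: travel time = distance / velocity, with 5% and
# 10% coefficients of variation on the two inputs.
tt_mean, tt_sd = propagate(
    lambda p: p["distance"] / p["velocity"],
    {"distance": (1000.0, 0.05), "velocity": (2.0, 0.10)},
)
```

Note that even this trivial ratio model shows the characteristic Monte Carlo result: the output mean is slightly above distance/velocity evaluated at the input means, because the expectation of a ratio is not the ratio of expectations.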
Stoller, Roger E; Golubov, Stanislav I; Becquart, C. S.; Domain, C.
2007-08-01
The multiscale modeling scheme encompasses models from the atomistic to the continuum scale. Phenomena at the mesoscale are typically simulated using reaction rate theory, Monte Carlo, or phase field models. These mesoscale models are appropriate for problems that involve intermediate length scales, and timescales from those characteristic of diffusion to long-term microstructural evolution (~s to years). Although the rate theory and Monte Carlo models can be used to simulate the same phenomena, some of the details are handled quite differently in the two approaches. Models employing the rate theory have been extensively used to describe radiation-induced phenomena such as void swelling and irradiation creep. The primary approximations in such models are time- and spatial averaging of the radiation damage source term, and spatial averaging of the microstructure into an effective medium. Kinetic Monte Carlo models can account for these spatial and temporal correlations; their primary limitation is the computational burden, which is related to the size of the simulation cell. A direct comparison of rate theory (RT) and object kinetic Monte Carlo (OKMC) simulations has been made in the domain of point defect cluster dynamics modeling, which is relevant to the evolution (both nucleation and growth) of radiation-induced defect structures. The primary limitations of the OKMC model are computational: even with modern computers, the maximum simulation cell size and the maximum dose (typically much less than 1 dpa) that can be simulated are limited. In contrast, even very detailed RT models can simulate microstructural evolution for doses up to 100 dpa or greater in clock times that are relatively short, and, within the context of the effective medium, essentially any defect density can be simulated. Overall, the agreement between the two methods is best for irradiation conditions which produce a high density of defects (lower temperature and higher displacement rate).
NASA Astrophysics Data System (ADS)
Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G.
2015-07-01
Population annealing is a Monte Carlo algorithm that marries features from simulated-annealing and parallel-tempering Monte Carlo. As such, it is ideal to overcome large energy barriers in the free-energy landscape while minimizing a Hamiltonian. Thus, population-annealing Monte Carlo can be used as a heuristic to solve combinatorial optimization problems. We illustrate the capabilities of population-annealing Monte Carlo by computing ground states of the three-dimensional Ising spin glass with Gaussian disorder, while comparing to simulated-annealing and parallel-tempering Monte Carlo. Our results suggest that population annealing Monte Carlo is significantly more efficient than simulated annealing but comparable to parallel-tempering Monte Carlo for finding spin-glass ground states.
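A toy sketch of population annealing on a spin glass small enough to verify against exhaustive enumeration. The population size, temperature schedule, and sweep counts are illustrative, not those of the paper:

```python
import math
import random

def pa_min_energy(J, n, pop=200, n_temps=11, dbeta=0.3, sweeps=5, seed=6):
    """Population-annealing sketch for a tiny +/-J spin glass: at each
    temperature step, resample the replica population with weights
    exp(-dbeta * E), then relax each replica with Metropolis sweeps at
    the new inverse temperature; return the lowest energy seen."""
    rng = random.Random(seed)

    def energy(s):
        return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

    reps = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(pop)]
    best = min(energy(r) for r in reps)
    beta = 0.0
    for _ in range(n_temps):
        beta += dbeta
        w = [math.exp(-dbeta * energy(r)) for r in reps]        # reweight
        reps = [list(r) for r in rng.choices(reps, weights=w, k=pop)]
        for r in reps:                                          # Metropolis sweeps
            e = energy(r)
            for _ in range(sweeps * n):
                i = rng.randrange(n)
                r[i] = -r[i]
                e_new = energy(r)
                if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
                    e = e_new
                else:
                    r[i] = -r[i]
            best = min(best, e)
    return best

def brute_force_min(J, n):
    """Exhaustive check over all 2**n spin configurations."""
    best = float("inf")
    for mask in range(1 << n):
        s = [1 if mask >> i & 1 else -1 for i in range(n)]
        best = min(best, -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))
    return best
```

The resampling step plays the role that replica exchange plays in parallel tempering: it keeps the population equilibrated as the temperature drops, so low-energy replicas are duplicated rather than lost.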
Mosleh-Shirazi, Mohammad Amin; Karbasi, Sareh; Shahbazi-Gahrouei, Daryoush; Monadi, Shahram
2012-11-08
Full buildup diodes can cause significant dose perturbation if they are used on most or all radiotherapy fractions. Given the importance of frequent in vivo measurements in complex treatments, using thin-buildup (low-perturbation) diodes instead is gathering interest. However, such diodes are, strictly speaking, unsuitable for high-energy photons; therefore, their use requires evaluation and careful measurement of correction factors (CFs). There is little published data on such factors for low-perturbation diodes, and none on diode characterization for 9 MV X-rays. We report on MCNP4c Monte Carlo models of low-perturbation (EDD5) and medium-perturbation (EDP10) diodes, and a comparison of source-to-surface distance, field size, temperature, and orientation CFs for cobalt-60 and 9 MV beams. Most of the simulation results were within 4% of the measurements. The results argue against using the EDD5 at axial angles beyond ±50° and at tilt angles outside the range 0° to +50° at 9 MV. Outside these restricted ranges, although the EDD5 can be used for accurate in vivo dosimetry at 9 MV, its CF variations were found to be 1.5-7.1 times larger than those of the EDP10 and should therefore be applied carefully. Finally, the MCNP diode models are sufficiently reliable tools for independent verification of potentially inaccurate measurements.
Dupuis, Paul
2014-03-14
This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.
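The second problem class, variance blow-up in rare-event estimation, is often attacked with importance sampling via an exponential change of measure. A minimal sketch for a Gaussian tail probability (a textbook example, not one of the proposal's applications):

```python
import math
import random

def rare_event_is(threshold=4.0, n=200_000, seed=7):
    """Importance-sampling estimate of p = P(X > threshold) for
    standard normal X: draw from the tilted density N(threshold, 1)
    and reweight each hit by the likelihood ratio
    phi(x) / phi(x - threshold) = exp(threshold**2 / 2 - threshold * x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)
        if x > threshold:
            total += math.exp(threshold * threshold / 2.0 - threshold * x)
    return total / n
```

For threshold 4 the true probability is about 3.2e-5, so naive Monte Carlo would need on the order of 30,000 samples per hit; the tilted sampler puts roughly half its samples in the event region and corrects exactly with the likelihood ratio, which is the large-deviation-guided design this proposal analyzes in more general settings.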
Williams, Michael S; Ebel, Eric D
2014-11-18
The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or if risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method.
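A minimal sketch of a weighted bootstrap under unequal selection probabilities, resampling each observation with probability proportional to the inverse of its inclusion probability. This is a simplified stand-in for, not a reproduction of, the paper's estimation framework:

```python
import random

def weighted_bootstrap(values, incl_probs, n_boot=2000, seed=8):
    """Weighted-bootstrap sketch: resample observations with
    probability proportional to 1/pi_i (the inverse of each unit's
    inclusion probability), so preferentially sampled units are
    down-weighted; return the list of resampled means."""
    rng = random.Random(seed)
    w = [1.0 / p for p in incl_probs]
    stats = []
    for _ in range(n_boot):
        res = rng.choices(values, weights=w, k=len(values))
        stats.append(sum(res) / len(res))
    return stats
```

In a risk-based sampling design the highly contaminated units carry large inclusion probabilities, so the inverse weighting pulls the bootstrap distribution back toward the population the sample was drawn from; in the paper the resampled statistic would be the fitted distribution's parameters rather than a plain mean.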
Chetty, Indrin J; Moran, Jean M; McShan, Daniel L; Fraass, Benedick A; Wilderman, Scott J; Bielajew, Alex F
2002-06-01
A comprehensive set of measurements and calculations has been conducted to investigate the accuracy of the Dose Planning Method (DPM) Monte Carlo code for dose calculations from 10 and 50 MeV scanned electron beams produced from a racetrack microtron. Central axis depth dose measurements and a series of profile scans at various depths were acquired in a water phantom using a Scanditronix type RK ion chamber. Source spatial distributions for the Monte Carlo calculations were reconstructed from in-air ion chamber measurements carried out across the two-dimensional beam profile at 100 cm downstream from the source. The in-air spatial distributions were found to have full width at half maximum of 4.7 and 1.3 cm, at 100 cm from the source, for the 10 and 50 MeV beams, respectively. Energy spectra for the 10 and 50 MeV beams were determined by simulating the components of the microtron treatment head using the code MCNP4B. DPM calculations are on average within +/- 2% agreement with measurement for all depth dose and profile comparisons conducted in this study. The accuracy of the DPM code illustrated in this work suggests that DPM may be used as a valuable tool for electron beam dose calculations.
Zhang, Yong; Chen, Bin; Li, Dong
2016-04-01
To investigate the influence of polarization on polarized light propagation in biological tissue, a polarized geometric Monte Carlo method is developed. The Stokes-Mueller formalism is employed to describe the evolution of the light's polarization state during propagation events, including scattering and interface interactions. The scattering amplitudes and optical parameters of the different tissue structures are obtained using Mie theory. Through simulations of polarized light (a pulsed dye laser at a wavelength of 585 nm) propagating in an infinite-slab tissue model and a discrete-vessel tissue model, energy depositions in the tissue structures are calculated and compared with those obtained through general geometric Monte Carlo simulation under the same parameters but without the polarization effect. It is found that the absorption depth of the polarized light is about one half of that predicted by the conventional simulations. In the discrete-vessel model, the low penetrability manifests in three ways: diffuse reflection becomes the main contributor to energy escape, the proportion of epidermal energy deposition increases significantly, and energy deposition in the blood becomes weaker and more uneven. This may indicate that, when the polarization effect is taken into account, the actual thermal damage to the epidermis during real-world treatment is higher, and deeply buried blood vessels are less damaged, than conventional predictions suggest.
Ridikas, D; Feray, S; Cometto, M; Damoy, F
2005-01-01
During the decommissioning of the SATURNE accelerator at CEA Saclay (France), a number of concrete containers with radioactive materials of low or very low activity had to be characterised before their final storage. In this paper, a non-destructive approach combining gamma-ray spectroscopy and Monte Carlo simulations is used to characterise massive concrete blocks containing radioactive waste. The limits and uncertainties of the proposed method are quantified for the source-term activity estimates using 137Cs as a tracer element. A series of activity measurements with a few representative waste containers was performed before and after destruction. It was found that neither the distribution of the radioactive materials was homogeneous nor their density unique, and this became the major source of systematic errors in this study. Nevertheless, we conclude that by combining gamma-ray spectroscopy and full-scale Monte Carlo simulations one can estimate the source-term activity for tracer elements such as 134Cs, 137Cs, 60Co, etc. The uncertainty of this estimate should be no larger than a factor of 2-3.
NASA Astrophysics Data System (ADS)
Nagakura, Hiroki; Richers, Sherwood; Ott, Christian; Iwakami, Wakana; Furusawa, Shun; Sumiyoshi, Kohsuke; Yamada, Shoichi
2017-01-01
We have developed a multi-dimensional radiation-hydrodynamic code which solves the first-principles Boltzmann equation for neutrino transport. It is currently applicable specifically to core-collapse supernovae (CCSNe), but we will extend its applicability to more extreme phenomena such as black hole formation and the coalescence of double neutron stars. In this meeting, I will discuss two topics: (1) a detailed comparison with a Monte Carlo neutrino transport code and (2) axisymmetric CCSN simulations. Project (1) gives us confidence in our code. The Monte Carlo code has been developed by the Caltech group and is specialized to obtain a steady state. This is the first attempt within the CCSN community to compare two different methods for multi-dimensional neutrino transport, and I will show the results of this comparison. For project (2), I focus in particular on the properties of the neutrino distribution function in the semi-transparent region, where only a first-principles Boltzmann solver can appropriately handle the neutrino transport. In addition to these analyses, I will also discuss the ``explodability'' by the neutrino heating mechanism.
Myers, Chris; Kirk, Bernadette Lugue; Leal, Luiz C
2007-01-01
The data used in two Monte Carlo (MC) codes, EGSnrc and MCNPX, were compared, and a majority of the data used in MCNPX were imported into EGSnrc. The effects of merging the data of the two codes were then examined. MCNPX was run using the ITS electron step algorithm and the default data libraries mcplib04 and el03. Two runs were made with EGSnrc: the first simulation used the default PEGS cross section library; the second utilized the data imported from MCNPX. All energy threshold values and physics options were made identical. A simple case was created in both EGSnrc and MCNPX that calculates the radial depth dose from an isotropically radiating disc in water for various incident, monoenergetic photon and electron energies. Initial results show that much less central processing unit (CPU) time is required by the EGSnrc code for simulations involving large numbers of particles, primarily electrons, when compared to MCNPX. The detailed particle history files, ptrac and iwatch, were investigated to compare the number and types of events being simulated in order to determine the reasons for the run-time differences.
Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method
NASA Astrophysics Data System (ADS)
Wang, Wen; Cheng, Meng-Yun; Long, Peng-Cheng; Hu, Li-Qin
2015-07-01
The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs of photons and electrons were calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female created from the color photographic images of the Chinese Visible Human (CVH) data set by the FDS Team. The model represents most Chinese adult female anatomical characteristics and can be taken as an individual phantom to investigate differences in internal dose relative to Caucasians. In this study, the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV was calculated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The SAFs from the Rad-HUMAN phantom showed similar trends but were larger than those from the other two models; the differences were due to racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females. Supported by Strategic Priority Research Program of Chinese Academy of Sciences (XDA03040000), National Natural Science Foundation of China (910266004, 11305205, 11305203) and National Special Program for ITER (2014GB112001)
Nease, Brian R.; Ueki, Taro
2009-12-10
A time series approach has been applied to the nuclear fission source distribution generated by Monte Carlo (MC) particle transport in order to calculate the non-fundamental-mode eigenvalues of the system. The novel aspect is the combination of the general technical principle of projection pursuit for multivariate data with the neutron multiplication eigenvalue problem in the nuclear engineering discipline. Proof is provided that the stationary MC process is linear to a first-order approximation and that it transforms into one-dimensional autoregressive processes of order one (AR(1)) via the automated choice of projection vectors. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern MC codes for nuclear criticality calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. This time series approach was tested for a variety of problems, including multi-dimensional ones. Numerical results show that the time series approach has strong potential for three-dimensional whole-reactor-core calculations. The eigenvalue ratio can be updated in an on-the-fly manner without storing the nuclear fission source distributions from all previous iteration cycles for the mean subtraction. Lastly, the effects of degenerate eigenvalues are investigated and solutions are provided.
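Since the projected series becomes AR(1), the eigenvalue ratio can be read off as a lag-1 autocorrelation. A minimal sketch (illustrative, not the authors' code: the projection-pursuit step is omitted, and a synthetic AR(1) series with a known coefficient stands in for the projected fission-source data):

```python
import numpy as np

def ar1_coefficient(series):
    """Lag-1 autocorrelation of a stationary series after mean subtraction;
    for an AR(1) process this is the autoregressive coefficient, and in the
    paper's setting it estimates k_desired / k_fundamental."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

# Synthetic AR(1) data with known coefficient 0.8 standing in for the
# projected fission-source series.
rng = np.random.default_rng(0)
x, xs = 0.0, []
for _ in range(200_000):
    x = 0.8 * x + rng.standard_normal()
    xs.append(x)

print(round(ar1_coefficient(xs), 2))
```

With enough iteration cycles the estimate recovers the true coefficient to a few parts per thousand; the same estimator can be updated incrementally, matching the on-the-fly property described above.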
DSMC calculations for the delta wing. [Direct Simulation Monte Carlo method]
NASA Technical Reports Server (NTRS)
Celenligil, M. Cevdet; Moss, James N.
1990-01-01
Results are reported from three-dimensional direct simulation Monte Carlo (DSMC) computations, using a variable-hard-sphere molecular model, of hypersonic flow on a delta wing. The body-fitted grid is made up of deformed hexahedral cells divided into six tetrahedral subcells with well defined triangular faces; the simulation is carried out for 9000 time steps using 150,000 molecules. The uniform freestream conditions include M = 20.2, T = 13.32 K, rho = 0.00001729 kg/cu m, and T(wall) = 620 K, corresponding to lambda = 0.00153 m and Re = 14,000. The results are presented in graphs and briefly discussed. It is found that, as the flow expands supersonically around the leading edge, an attached leeside flow develops around the wing, and the near-surface density distribution has a maximum downstream from the stagnation point. Coefficients calculated include C(H) = 0.067, C(DP) = 0.178, C(DF) = 0.110, C(L) = 0.714, and C(D) = 1.089. The calculations required 56 h of CPU time on the NASA Langley Voyager CRAY-2 supercomputer.
NASA Astrophysics Data System (ADS)
McNab, Walt W.
2001-02-01
Biotransformation of dissolved groundwater hydrocarbon plumes emanating from leaking underground fuel tanks should, in principle, result in plume length stabilization over relatively short distances, thus diminishing the environmental risk. However, because the behavior of hydrocarbon plumes is usually poorly constrained at most leaking underground fuel tank sites in terms of release history, groundwater velocity, dispersion, as well as the biotransformation rate, demonstrating such a limitation in plume length is problematic. Biotransformation signatures in the aquifer geochemistry, most notably elevated bicarbonate, may offer a means of constraining the relationship between plume length and the mean biotransformation rate. In this study, modeled plume lengths and spatial bicarbonate differences among a population of synthetic hydrocarbon plumes, generated through Monte Carlo simulation of an analytical solute transport model, are compared to field observations from six underground storage tank (UST) sites at military bases in California. Simulation results indicate that the relationship between plume length and the distribution of bicarbonate is best explained by biotransformation rates that are consistent with ranges commonly reported in the literature. This finding suggests that bicarbonate can indeed provide an independent means for evaluating limitations in hydrocarbon plume length resulting from biotransformation.
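The Monte Carlo generation of synthetic plumes can be illustrated with a much-reduced one-dimensional analog: sample a velocity and a biotransformation rate, then evaluate the steady-state plume length under first-order decay. All parameter values below are hypothetical, not the paper's, and the full analytical model (dispersion, release history, bicarbonate geochemistry) is omitted:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical lognormal parameter distributions (illustrative, not site data).
v = rng.lognormal(mean=np.log(30.0), sigma=0.5, size=n)   # groundwater velocity, m/yr
k = rng.lognormal(mean=np.log(1.0), sigma=0.7, size=n)    # biotransformation rate, 1/yr
c0_over_cth = 1000.0    # source-to-threshold concentration ratio

# Steady-state 1D advection with first-order decay:
#   C(x) = C0 exp(-k x / v)  =>  plume length L = (v / k) ln(C0 / Cth)
lengths = (v / k) * np.log(c0_over_cth)
median_L = float(np.median(lengths))
print(f"median plume length: {median_L:.0f} m")
```

The population of `lengths` plays the role of the synthetic plume ensemble: its spread reflects parameter uncertainty, and conditioning it on an auxiliary observable (bicarbonate in the paper) is what constrains the admissible biotransformation rates.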
Analysis of Radiation Effects in Silicon using Kinetic Monte Carlo Methods
Hehr, Brian Douglas
2014-11-25
The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. Neutron irradiation can instigate the formation of quasi-stable defect structures, thereby introducing new energy levels into the bandgap that alter carrier lifetimes and give rise to such phenomena as gain degradation in bipolar junction transistors. Normally, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. A kinetic Monte Carlo (KMC) code has been developed to model both thermal and carrier injection annealing of initial defect structures in semiconductor materials. The code is employed to investigate annealing in electron-irradiated, p-type silicon as well as the recovery of base current in silicon transistors bombarded with neutrons at the Los Alamos Neutron Science Center (LANSCE) “Blue Room” facility. Our results reveal that KMC calculations agree well with these experiments once adjustments are made, within the appropriate uncertainty bounds, to some of the sensitive defect parameters.
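The residence-time (Gillespie-type) loop at the heart of a kinetic Monte Carlo code can be sketched for the simplest possible annealing process, first-order decay of a defect population. This is a generic KMC illustration, not the defect-reaction network of the paper:

```python
import random

def kmc_anneal(n0, rate, t_end, rng):
    """Residence-time (Gillespie) KMC for first-order annealing: with n
    surviving defects the total event rate is n * rate; draw the waiting
    time to the next event from an exponential and remove one defect."""
    n, t = n0, 0.0
    while n > 0:
        t += rng.expovariate(n * rate)   # waiting time to the next anneal event
        if t > t_end:
            break                        # next event falls after the observation window
        n -= 1
    return n

rng = random.Random(1)
runs = [kmc_anneal(1000, rate=1.0, t_end=1.0, rng=rng) for _ in range(50)]
mean_surv = sum(runs) / len(runs)
print(round(mean_surv))   # close to 1000 * exp(-1), i.e. about 368
```

A real defect code replaces the single rate with a catalog of defect-defect and defect-dopant reaction rates and picks the event type proportionally to its rate, but the time-advance logic is the same.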
Multicomponent adsorption in mesoporous flexible materials with flat-histogram Monte Carlo methods
NASA Astrophysics Data System (ADS)
Mahynski, Nathan A.; Shen, Vincent K.
2016-11-01
We demonstrate an extensible flat-histogram Monte Carlo simulation methodology for studying the adsorption of multicomponent fluids in flexible porous solids. This methodology allows us to easily obtain the complete free energy landscape for the confined fluid-solid system in equilibrium with a bulk fluid of any arbitrary composition. We use this approach to study the adsorption of a prototypical coarse-grained binary fluid in "Hookean" solids, where the free energy of the solid may be described as a simple spring. However, our approach is fully extensible to solids with arbitrarily complex free energy profiles. We demonstrate that by tuning the fluid-solid interaction ranges, the inhomogeneous fluid structure inside the pore can give rise to enhanced selective capture of a larger species through cooperative adsorption with a smaller one. The maximum enhancement in selectivity is observed at low to intermediate pressures and is especially pronounced when the larger species is very dilute in the bulk. This suggests a mechanism by which the selective capture of a minor component from a bulk fluid may be enhanced.
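Flat-histogram sampling is easiest to see on a toy system with a known density of states. The sketch below is a minimal Wang-Landau iteration for the number of heads among n coins, whose density of states is exactly the binomial coefficient; it is a generic flat-histogram illustration, not the grand-canonical multicomponent machinery of the paper:

```python
import math, random

def wang_landau_coins(n=10, flatness=0.8, lnf_min=1e-6, seed=0):
    """Minimal Wang-Landau flat-histogram run: estimates the density of
    states g(k) for k heads among n coins (exactly C(n, k)). Acceptance
    uses min(1, g[k_old] / g[k_new]) so all k levels are visited evenly."""
    rng = random.Random(seed)
    state, k = [0] * n, 0
    lng = [0.0] * (n + 1)      # running estimate of ln g(k)
    hist = [0] * (n + 1)
    lnf = 1.0                  # modification factor
    while lnf > lnf_min:
        for _ in range(10_000):
            i = rng.randrange(n)
            k_new = k + (1 - 2 * state[i])   # effect of flipping coin i
            if lng[k] >= lng[k_new] or rng.random() < math.exp(lng[k] - lng[k_new]):
                state[i] ^= 1
                k = k_new
            lng[k] += lnf                    # penalize the visited level
            hist[k] += 1
        if min(hist) > flatness * sum(hist) / len(hist):   # histogram flat enough?
            hist = [0] * (n + 1)
            lnf /= 2.0
    return [math.exp(v - lng[0]) for v in lng]   # normalize so g(0) = 1

g = wang_landau_coins()
print([round(x) for x in g])   # exact values are C(10, k): 1, 10, 45, ...
```

In the adsorption setting, the "level" k becomes the particle numbers of each component, and ln g becomes the free energy landscape of the confined fluid-solid system.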
Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel; Gulley, Jeremy R.
2016-10-01
A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single photon absorption by free-carriers. The second step is stochastic and models electron–electron collisions using Monte-Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.
Yang, J; Wongsa, S; Kadirkamanathan, V; Billings, S A; Wright, P C
2005-12-01
Metabolic flux analysis using 13C-tracer experiments is an important tool in metabolic engineering since intracellular fluxes are non-measurable quantities in vivo. Current metabolic flux analysis approaches are fully based on stoichiometric constraints and carbon atom balances, where the over-determined system is iteratively solved by a parameter estimation approach. However, the unavoidable measurement noises involved in the fractional enrichment data obtained by 13C-enrichment experiment and the possible existence of unknown pathways prevent a simple parameter estimation method for intracellular flux quantification. The MCMC (Markov chain-Monte Carlo) method, which obtains intracellular flux distributions through delicately constructed Markov chains, is shown to be an effective approach for deep understanding of the intracellular metabolic network. Its application is illustrated through the simulation of an example metabolic network.
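The MCMC idea can be reduced to a toy sketch: a Metropolis chain over a single non-negative flux with a Gaussian measurement model, standing in for the stoichiometrically constrained multi-flux posteriors of 13C metabolic flux analysis. All numbers are illustrative:

```python
import math, random

def metropolis_flux(y, sigma, n_samples, step=0.5, seed=0):
    """Toy Metropolis chain for one non-negative flux v with a Gaussian
    measurement model y ~ N(v, sigma^2); a stand-in for chains over full
    flux distributions in 13C metabolic flux analysis."""
    rng = random.Random(seed)

    def log_post(v):
        return -0.5 * ((y - v) / sigma) ** 2 if v >= 0.0 else -math.inf

    v, samples = max(y, 0.0), []
    for _ in range(n_samples):
        v_new = v + rng.gauss(0.0, step)          # symmetric random-walk proposal
        delta = log_post(v_new) - log_post(v)
        if delta >= 0.0 or rng.random() < math.exp(delta):
            v = v_new                             # Metropolis accept
        samples.append(v)
    return samples

s = metropolis_flux(y=2.0, sigma=0.3, n_samples=50_000)
post_mean = sum(s[5_000:]) / len(s[5_000:])       # discard burn-in
print(round(post_mean, 2))
```

The advantage over point estimation, as the abstract notes, is that the chain delivers a full flux distribution, so measurement noise propagates into credible intervals rather than a single fitted value.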
Kang, Ki Mun; Jeong, Bae Kwon; Choi, Hoon Sik; Song, Jin Ho; Park, Byung-Do; Lim, Young Kyung; Jeong, Hojin
2016-01-01
This study aimed to evaluate the effectiveness of the Monte Carlo (MC) method in stereotactic radiotherapy for brain tumors. The difference in doses predicted by the conventional Ray-tracing (Ray) and the advanced MC algorithms was comprehensively investigated through simulations for phantom and patient data, actual measurement of dose distribution, and a retrospective analysis of 77 brain tumor patients. These investigations consistently showed that the MC algorithm predicted higher doses than the Ray algorithm, and the MC overestimation generally increased as the beam size decreased and the number of beams delivered increased. These results suggest that the advanced MC algorithm may be less accurate than the conventional Ray-tracing algorithm when applied to (quasi-)homogeneous brain tumors. Thus, caution may be needed in applying the MC method to brain radiosurgery or radiotherapy. PMID:26871473
A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions
NASA Astrophysics Data System (ADS)
Liang, Yihao; Xing, Xiangjun; Li, Yaohang
2017-06-01
In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures, and adopts the sequential updating scheme of Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We further use this method to simulate primitive model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
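The sequential-update Metropolis scheme with the energy change evaluated exactly over all pairs (the no-approximation policy described above) can be sketched on the CPU; the GPU parallelization of the pairwise sum, which is where the reported speedup comes from, is not reproduced here, and all parameters are illustrative:

```python
import numpy as np

def metropolis_sweep(pos, charges, box, beta, step, rng):
    """One sequential-update Metropolis sweep over a toy periodic Coulomb
    gas (reduced units). The energy change for each single-particle move
    is computed exactly over all pairs with the minimum-image convention."""
    n = len(pos)
    for i in range(n):
        trial = (pos[i] + rng.uniform(-step, step, 3)) % box
        d_old = (pos - pos[i] + box / 2) % box - box / 2   # minimum-image vectors
        d_new = (pos - trial + box / 2) % box - box / 2
        r_old = np.linalg.norm(d_old, axis=1)
        r_new = np.linalg.norm(d_new, axis=1)
        mask = np.arange(n) != i                           # exclude self-interaction
        dE = np.sum(charges[i] * charges[mask] * (1.0 / r_new[mask] - 1.0 / r_old[mask]))
        if dE <= 0 or rng.random() < np.exp(-beta * dE):   # Metropolis criterion
            pos[i] = trial
    return pos

rng = np.random.default_rng(3)
n, box = 32, 10.0
pos = rng.uniform(0.0, box, (n, 3))
charges = np.array([1.0, -1.0] * (n // 2))   # neutral 1:1 toy electrolyte
for _ in range(20):
    pos = metropolis_sweep(pos, charges, box, beta=0.1, step=0.5, rng=rng)
print(pos.shape)
```

On a GPU, the inner pairwise sum maps naturally onto SIMD lanes, which is the design choice the paper exploits; a production electrolyte code would also need Ewald summation for the long-range part rather than the bare minimum-image 1/r used in this sketch.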
Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin
2015-01-01
The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
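The temperature replica exchange component of such schemes rests on a simple Metropolis swap test between neighboring replicas; a sketch of that criterion alone (the Hamiltonian-exchange and well-tempered ensemble parts of WTE-H-REMC are beyond this snippet):

```python
import math, random

def try_swap(beta_i, beta_j, E_i, E_j, rng):
    """Metropolis test for exchanging configurations between two replicas:
    accept with probability min(1, exp[(beta_i - beta_j) * (E_i - E_j)])."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0.0 or rng.random() < math.exp(delta)

rng = random.Random(7)
# An unfavorable swap (colder replica, beta=2, would take the higher energy)
# is accepted only with probability exp(delta); here delta = (2-1)*(3-7) = -4.
accepted = sum(try_swap(2.0, 1.0, E_i=3.0, E_j=7.0, rng=rng) for _ in range(10_000))
print(accepted / 10_000)   # close to exp(-4), about 0.018
```

The occasional uphill swaps are exactly what lets low-temperature replicas escape local docking minima, which is why the combined scheme needs far fewer MC steps than a plain search.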
A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP)
NASA Astrophysics Data System (ADS)
Bitar, A.; Lisbona, A.; Thedrez, P.; Sai Maurel, C.; LeForestier, D.; Barbet, J.; Bardies, M.
2007-02-01
Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq-1 s-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.
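For a monoenergetic emission, the S-factor reduces to the emitted energy per decay times the absorbed fraction, divided by the target mass. A sketch with illustrative numbers (not values from the paper):

```python
MEV_TO_J = 1.602176634e-13   # joules per MeV

def s_factor(energy_mev, absorbed_fraction, target_mass_kg):
    """S-factor (Gy Bq^-1 s^-1) for a monoenergetic emission: absorbed
    dose to the target region per disintegration in the source region."""
    return energy_mev * MEV_TO_J * absorbed_fraction / target_mass_kg

# Illustrative only: a 1 MeV electron, self-absorbed fraction 0.9,
# in a hypothetical 1.5 g mouse liver.
s = s_factor(1.0, 0.9, 1.5e-3)
print(f"{s:.2e} Gy Bq^-1 s^-1")   # prints 9.61e-11 Gy Bq^-1 s^-1
```

Tables like those in the paper are built by evaluating this quantity for each source-target pair, with the absorbed fractions supplied by the Monte Carlo transport run; for radionuclides, one sums the contribution of each emission in the decay scheme.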
Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities.
PADOVANI, ENRICO
2012-04-15
Version: 00 US DOE 10CFR810 Jurisdiction. The Monte Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle emissions and interactions following a fission event be described as close to reality as possible. The -PoliMi extension to MCNP and to MCNPX was developed to simulate correlated particle emissions and the subsequent interactions as closely as possible to the physical behavior. Initially, MCNP-PoliMi, a modification of MCNP4C, was developed. The first version was developed in 2001-2002 and released in early 2004 to the Radiation Safety Information Computational Center (RSICC). It was developed for research purposes, to simulate correlated counts in organic scintillation detectors, sensitive to fast neutrons and gamma rays. Originally, the field of application was nuclear safeguards; however, subsequent improvements have enhanced the ability to model measurements in other research fields as well. During 2010-2011 the -PoliMi modification was ported into MCNPX-2.7.0, leading to the development of MCNPX-PoliMi. Now the -PoliMi v2.0 modifications are distributed as a patch to MCNPX-2.7.0, which currently is distributed in the RSICC PACKAGE BCC-004 MCNP6_BETA2/MCNP5/MCNPX. Also included in the package is MPPost, a versatile code that provides simulated detector response. By taking advantage of the modifications in MCNPX-PoliMi, MPPost can provide an accurate simulation of the detector response for a variety of detection scenarios.
Monte Carlo Method in optical diagnostics of skin and skin tissues
NASA Astrophysics Data System (ADS)
Meglinski, Igor V.
2003-12-01
A novel Monte Carlo (MC) technique for photon migration through 3D media with spatially varying optical properties is presented. The employed MC technique combines statistical weighting variance reduction and real photon path tracing schemes. An overview of the results of applications of the developed MC technique in optical/near-infrared reflectance spectroscopy, confocal microscopy, fluorescence spectroscopy, OCT, Doppler flowmetry and Diffusing Wave Spectroscopy (DWS) is presented. In the model, skin is represented as a complex inhomogeneous multi-layered medium, where the spatial distribution of blood and chromophores is variable with depth. Taking into account the variability of cell structure, the interfaces of skin layers are represented as quasi-random periodic wavy surfaces. The rough boundaries between the layers of different refractive indices play a significant role in the distribution of photons within the medium. The absorption properties of skin tissues in the visible and NIR spectral region are estimated by taking into account the anatomical structure of skin as determined from histology, including the spatial distribution of blood vessels, water and melanin content. The model takes into account the spatial distribution of fluorophores following the collagen fiber packing, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. Reasonable estimations for skin blood oxygen saturation and haematocrit are also included. The model is validated against an analytic solution of the photon diffusion equation for a semi-infinite homogeneous highly scattering medium. The results demonstrate that matching the refractive index of the medium significantly improves the contrast and spatial resolution of the spatial photon sensitivity profile. It is also demonstrated that when the model is supplied with reasonable physical and structural parameters of biological tissues, the results of skin reflectance spectra simulation
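The statistical-weighting scheme mentioned above can be sketched for a homogeneous 1D slab: rather than terminating a photon packet at each absorption event, its weight is attenuated by the single-scattering albedo and the absorbed share is tallied. A bare-bones illustration with made-up optical properties (no layered geometry, wavy interfaces, or Russian roulette termination):

```python
import math, random

def absorbed_fraction(mu_a, mu_s, slab, n_photons, seed=0):
    """Weighted photon-packet walk in a homogeneous 1D slab (mm^-1, mm).
    At each interaction the packet survives with weight *= albedo and the
    absorbed share of its weight is tallied, instead of killing the packet."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    absorbed = 0.0
    for _ in range(n_photons):
        z, cos_t, w = 0.0, 1.0, 1.0            # launch normally into the slab
        while w > 1e-3:                        # crude low-weight cutoff
            z += -math.log(rng.random()) / mu_t * cos_t   # free path, z-projection
            if not 0.0 <= z <= slab:
                break                          # packet left the slab
            absorbed += w * (1.0 - albedo)     # tally the absorbed weight
            w *= albedo                        # statistical-weight survival
            cos_t = rng.uniform(-1.0, 1.0)     # isotropic rescattering
    return absorbed / n_photons

# Illustrative skin-like values: weak absorption, strong scattering.
frac = absorbed_fraction(mu_a=0.02, mu_s=2.0, slab=10.0, n_photons=2_000)
print(round(frac, 2))
```

The weighting keeps every launched packet contributing to the tally, which is the variance-reduction benefit; a full tissue code adds anisotropic phase functions, refractive-index mismatched boundaries, and the layered structure described above.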
Assessment of a fully 3D Monte Carlo reconstruction method for preclinical PET with iodine-124
NASA Astrophysics Data System (ADS)
Moreau, M.; Buvat, I.; Ammour, L.; Chouin, N.; Kraeber-Bodéré, F.; Chérel, M.; Carlier, T.
2015-03-01
Iodine-124 is a radionuclide well suited to the labeling of intact monoclonal antibodies. Yet, accurate quantification in preclinical imaging with I-124 is challenging due to the large positron range and a complex decay scheme including high-energy gammas. The aim of this work was to assess the quantitative performance of a fully 3D Monte Carlo (MC) reconstruction for preclinical I-124 PET. The high-resolution small animal PET Inveon (Siemens) was simulated using GATE 6.1. Three system matrices (SM) of different complexity were calculated in addition to a Siddon-based ray tracing approach for comparison purpose. Each system matrix accounted for a more or less complete description of the physics processes both in the scanned object and in the PET scanner. One homogeneous water phantom and three heterogeneous phantoms including water, lungs and bones were simulated, where hot and cold regions were used to assess activity recovery as well as the trade-off between contrast recovery and noise in different regions. The benefit of accounting for scatter, attenuation, positron range and spurious coincidences occurring in the object when calculating the system matrix used to reconstruct I-124 PET images was highlighted. We found that the use of an MC SM including a thorough modelling of the detector response and physical effects in a uniform water-equivalent phantom was efficient to get reasonable quantitative accuracy in homogeneous and heterogeneous phantoms. Modelling the phantom heterogeneities in the SM did not necessarily yield the most accurate estimate of the activity distribution, due to the high variance affecting many SM elements in the most sophisticated SM.
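A Monte Carlo system matrix plugs into a standard statistical reconstruction loop. A toy MLEM sketch with a hand-made 3x2 matrix (illustrative only, not a GATE-derived SM, and noiseless data):

```python
import numpy as np

def mlem(A, y, n_iter):
    """MLEM iteration x <- x * [A^T (y / Ax)] / (A^T 1): the standard
    statistical reconstruction into which a Monte Carlo-computed system
    matrix A (detector response plus physics effects) can be substituted."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                        # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)    # measured / estimated projections
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Tiny noiseless toy: 3 detector bins, 2 voxels, known activity.
A = np.array([[0.8, 0.1],
              [0.1, 0.8],
              [0.1, 0.1]])
x_true = np.array([4.0, 1.0])
y = A @ x_true
x_hat = mlem(A, y, n_iter=1000)
print(np.round(x_hat, 2))   # converges toward x_true
```

The trade-off the paper quantifies lives entirely in A: the more physics (positron range, scatter, spurious coincidences) folded into its elements, the better the model, but the higher the Monte Carlo variance on each element.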
Koger, B; Kirkby, C
2016-12-02
As a recent area of development in radiation therapy, gold nanoparticle (GNP) enhanced radiation therapy has shown potential to increase tumour dose while maintaining acceptable levels of healthy tissue toxicity. In this study, the effect of varying photon beam energy in GNP enhanced arc radiation therapy (GEART) is quantified through the introduction of a dose scoring metric, and GEART is compared to a conventional radiotherapy treatment. The PENELOPE Monte Carlo code was used to model several simple phantoms consisting of a spherical tumour containing GNPs (concentration: 15 mg Au g(-1) tumour, 0.8 mg Au g(-1) normal tissue) in a cylinder of tissue. Several monoenergetic photon beams, with energies ranging from 20 keV to 6 MeV, as well as 100, 200, and 300 kVp spectral beams, were used to irradiate the tumour in a 360° arc treatment. A dose metric was then used to compare tumour and tissue doses from GEART treatments to a similar treatment from a 6 MV spectrum. This was also performed on a simulated brain tumour using patient computed tomography data. GEART treatments showed potential over the 6 MV treatment for many of the simulated geometries, delivering up to 88% higher mean dose to the tumour for a constant tissue dose, with the effect greatest near a source energy of 50 keV. This effect is also seen with the inclusion of bone in a brain treatment, with a 14% increase in mean tumour dose over 6 MV, while still maintaining acceptable levels of dose to the bone and brain.
NASA Astrophysics Data System (ADS)
Ustinov, E. A.
2017-01-01
The paper aims at a comparison of techniques based on the kinetic Monte Carlo (kMC) and the conventional Metropolis Monte Carlo (MC) methods as applied to the hard-sphere (HS) fluid and solid. In the case of the kMC, an alternative representation of the chemical potential is explored [E. A. Ustinov and D. D. Do, J. Colloid Interface Sci. 366, 216 (2012)], which does not require any external procedure like the Widom test particle insertion method. A direct evaluation of the chemical potential of the fluid and solid without thermodynamic integration is achieved by molecular simulation in an elongated box with an external potential imposed on the system in order to reduce the particle density in the vicinity of the box ends. The existence of rarefied zones allows one to determine the chemical potential of the crystalline phase and substantially increases its accuracy for the disordered dense phase in the central zone of the simulation box. This method is applicable to both the Metropolis MC and the kMC, but in the latter case, the chemical potential is determined with higher accuracy at the same conditions and the number of MC steps. Thermodynamic functions of the disordered fluid and crystalline face-centered cubic (FCC) phase for the hard-sphere system have been evaluated with the kinetic MC and the standard MC coupled with the Widom procedure over a wide range of density. The melting transition parameters have been determined by the point of intersection of the pressure-chemical potential curves for the disordered HS fluid and FCC crystal using the Gibbs-Duhem equation as a constraint. A detailed thermodynamic analysis of the hard-sphere fluid has provided a rigorous verification of the approach, which can be extended to more complex systems.
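The Widom test-particle step that the kMC route avoids is easy to sketch for hard spheres: insert a trial sphere at random and count the insertions that overlap nothing. The configuration below is just uncorrelated random points, a dilute illustrative stand-in for an equilibrated hard-sphere fluid:

```python
import math, random

def widom_hs(pos, box, sigma, n_trials, rng):
    """Widom test-particle insertion for hard spheres: beta * mu_excess is
    -ln P, with P the fraction of random trial insertions that overlap no
    existing sphere (minimum-image periodic boundaries)."""
    hits = 0
    for _ in range(n_trials):
        t = [rng.random() * box for _ in range(3)]
        ok = True
        for p in pos:
            d2 = sum(((t[k] - p[k] + box / 2) % box - box / 2) ** 2 for k in range(3))
            if d2 < sigma * sigma:     # trial sphere overlaps sphere p
                ok = False
                break
        hits += ok
    return -math.log(hits / n_trials)  # beta * mu_excess

rng = random.Random(5)
box, sigma = 10.0, 1.0
# Illustrative dilute configuration: uncorrelated random points, not an
# equilibrated hard-sphere fluid.
pos = [[rng.random() * box for _ in range(3)] for _ in range(50)]
beta_mu_ex = widom_hs(pos, box, sigma, n_trials=10_000, rng=rng)
print(round(beta_mu_ex, 2))
```

The sketch also shows why Widom degrades at high density: the overlap-free fraction collapses toward zero in a dense fluid and essentially vanishes in a crystal, which is the failure mode the kMC-based chemical potential route is designed to sidestep.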
Ustinov, E A
2017-01-21
The paper aims at a comparison of techniques based on the kinetic Monte Carlo (kMC) and the conventional Metropolis Monte Carlo (MC) methods as applied to the hard-sphere (HS) fluid and solid. In the case of the kMC, an alternative representation of the chemical potential is explored [E. A. Ustinov and D. D. Do, J. Colloid Interface Sci. 366, 216 (2012)], which does not require any external procedure like the Widom test particle insertion method. A direct evaluation of the chemical potential of the fluid and solid without thermodynamic integration is achieved by molecular simulation in an elongated box with an external potential imposed on the system in order to reduce the particle density in the vicinity of the box ends. The existence of rarefied zones allows one to determine the chemical potential of the crystalline phase and substantially increases its accuracy for the disordered dense phase in the central zone of the simulation box. This method is applicable to both the Metropolis MC and the kMC, but in the latter case, the chemical potential is determined with higher accuracy at the same conditions and the number of MC steps. Thermodynamic functions of the disordered fluid and crystalline face-centered cubic (FCC) phase for the hard-sphere system have been evaluated with the kinetic MC and the standard MC coupled with the Widom procedure over a wide range of density. The melting transition parameters have been determined by the point of intersection of the pressure-chemical potential curves for the disordered HS fluid and FCC crystal using the Gibbs-Duhem equation as a constraint. A detailed thermodynamic analysis of the hard-sphere fluid has provided a rigorous verification of the approach, which can be extended to more complex systems.
Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V
2006-12-31
Based on digital image analysis and the inverse Monte-Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)
NASA Astrophysics Data System (ADS)
Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo
2017-03-01
We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
Pongsai, Suchaya
2010-07-30
In this article, the combined Metropolis Monte Carlo and Lattice Statics (MMC-LS) method is applied to the geometry optimization of a crystalline aluminosilicate zeolite system in the presence of cationic species (H(+)), i.e., H-(Al)-ZSM-5. The MMC-LS method proves very useful in allowing H(+) ions in the (Al)-ZSM-5 extra-framework to approach the global minimum energy sites. The crucial advantage of the combined MMC-LS method is that, instead of simulating thousands of random configurations via the LS method alone, only one configuration is needed for the MMC-LS simulation to reach the lowest energy configuration. The calculation time can therefore be substantially reduced with the MMC-LS method relative to the LS method alone. The results obtained from the MMC-LS and LS-only methods are compared in terms of thermodynamic and structural properties.
Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes
Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.
2002-09-11
The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder equivalent for the three codes, but with different internal scoring geometries: hollow cylinders for PENELOPE and MCNP, and spheres for the PITS code. A cylindrical cell geometry with hollow-cylinder scoring volumes was initially selected for PENELOPE and MCNP because it better reproduces the actual shape and dimensions of a cell and is more computer-time efficient than spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, outside the spherical scoring volumes. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation even with such small geometries and low energies, which are far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user-code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.
NASA Astrophysics Data System (ADS)
Pasini, José Miguel; Cordero, Patricio
2001-04-01
We study a one-dimensional granular gas of pointlike particles not subject to gravity between two walls at temperatures T(left) and T(right). The system exhibits two distinct regimes, depending on the normalized temperature difference Δ = (T(right) - T(left))/(T(right) + T(left)): one completely fluidized and one in which a cluster coexists with the fluidized gas. When Δ is above a certain threshold, cluster formation is fully inhibited, obtaining a completely fluidized state. The mechanism that produces these two phases is explained. In the fluidized state the velocity distribution function exhibits peculiar non-Gaussian features. For this state, comparison between integration of the Boltzmann equation using the direct-simulation Monte Carlo method and results stemming from microscopic Newtonian molecular dynamics gives good coincidence, establishing that the non-Gaussian features observed do not arise from the onset of correlations.
NASA Astrophysics Data System (ADS)
Benacka, Jan
2016-08-01
This paper reports on lessons in which 18-19 year old high school students modelled random processes with Excel. In the first lesson, 26 students formulated a hypothesis on the area of an ellipse by using the analogy between the areas of the circle, square and rectangle. They verified the hypothesis by the Monte Carlo method with a spreadsheet model developed in the lesson. In the second lesson, 27 students analysed the dice poker game. First, they calculated the probability of the hands by combinatorial formulae. Then, they verified the result with a spreadsheet model developed in the lesson. The students were given a questionnaire to find out if they found the lessons interesting and contributing to their mathematical and technological knowledge.
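The first lesson's hit-or-miss estimate translates directly from a spreadsheet into code; a minimal sketch (function name and sample counts are illustrative):

```python
import random

def mc_ellipse_area(a, b, n=100_000, seed=1):
    """Estimate the area of an ellipse with semi-axes a and b by
    throwing n random points into the bounding 2a-by-2b rectangle and
    counting the fraction that satisfies x^2/a^2 + y^2/b^2 <= 1."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(-a, a)
        y = rng.uniform(-b, b)
        if (x / a) ** 2 + (y / b) ** 2 <= 1.0:
            hits += 1
    return 4 * a * b * hits / n   # rectangle area times hit fraction

# hypothesis from the lesson: area = pi * a * b (about 18.85 for a=3, b=2)
print(mc_ellipse_area(3.0, 2.0))
```

With 100 000 points the estimate typically lands within a few tenths of the exact value pi*a*b, mirroring what the students observed in Excel.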
Ganesh, P; Kim, Jeongnim; Park, Changwon; Yoon, Mina; Reboredo, Fernando A; Kent, Paul R C
2014-12-09
Highly accurate diffusion quantum Monte Carlo (QMC) studies of the adsorption and diffusion of atomic lithium in AA-stacked graphite are compared with van der Waals-including density functional theory (DFT) calculations. Predicted QMC lattice constants for pure AA graphite agree with experiment. Pure AA-stacked graphite is shown to challenge many van der Waals methods even when they are accurate for conventional AB graphite. Highest overall DFT accuracy, considering pure AA-stacked graphite as well as lithium binding and diffusion, is obtained by the self-consistent van der Waals functional vdW-DF2, although errors in binding energies remain. Empirical approaches based on point charges such as DFT-D are inaccurate unless the local charge transfer is assessed. The results demonstrate that the lithium-carbon system requires a simultaneous highly accurate description of both charge transfer and van der Waals interactions, favoring self-consistent approaches.
NASA Astrophysics Data System (ADS)
Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.
2014-04-01
Traditional Ensemble Kalman Filter (EnKF) data assimilation requires computationally intensive Monte Carlo (MC) sampling, which suffers from filter inbreeding unless the number of simulations is large. Recently we proposed an alternative EnKF groundwater-data assimilation method that obviates the need for sampling and is free of inbreeding issues. In our new approach, theoretical ensemble moments are approximated directly by solving a system of corresponding stochastic groundwater flow equations. Like MC-based EnKF, our moment equations (ME) approach allows Bayesian updating of system states and parameters in real-time as new data become available. Here we compare the performances and accuracies of the two approaches on two-dimensional transient groundwater flow toward a well pumping water in a synthetic, randomly heterogeneous confined aquifer subject to prescribed head and flux boundary conditions.
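The MC-based EnKF baseline the authors compare against can be illustrated for a scalar state observed directly; a minimal sketch (the state, noise levels, and ensemble size are illustrative assumptions, not the paper's groundwater setup):

```python
import random

def enkf_update(ensemble, obs, obs_var, rng):
    """One EnKF analysis step for a scalar state with direct observation
    (H = 1): the Kalman gain is estimated from ensemble statistics and
    each member assimilates an independently perturbed observation."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)
    return [x + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(42)
truth, obs_var = 2.0, 0.25
ensemble = [rng.gauss(0.0, 1.0) for _ in range(200)]   # biased prior
for _ in range(8):                                     # assimilate 8 noisy obs
    obs = truth + rng.gauss(0.0, obs_var ** 0.5)
    ensemble = enkf_update(ensemble, obs, obs_var, rng)
```

With a small ensemble the analysis variance is systematically underestimated over repeated updates, which is the filter-inbreeding problem the moment-equation approach avoids.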
NASA Astrophysics Data System (ADS)
Roomi, A.; Habibi, M.; Saion, E.; Amrollahi, R.
2011-02-01
In this study we present a Monte Carlo method for obtaining the time-resolved energy spectra of neutrons emitted by the D-D reaction in plasma focus devices. Angular positions of the detectors were chosen to give the best reconstruction of the neutron spectrum. The detectors were arranged over a range of 0-22.5 m from the source, at 0°, 30°, 60°, and 90° with respect to the central axis. The results show that an arrangement of five detectors placed at 0, 2, 7.5, 15 and 22.5 m around the central electrode of the plasma focus, treated as an anisotropic neutron source, is required. As shown in the reconstructed spectrum, the distance between the neutron source and the detectors is reduced, and the final reconstructed signal is obtained with very fine accuracy.
Wirawan, Rahadi; Waris, Abdul; Djamal, Mitra; Handayani, Gunawan
2015-04-16
The spectrum of gamma energy absorption in a NaI crystal (scintillation detector) is the result of the interaction of gamma photons with the NaI crystal, and it is associated with the energy of the gamma photons incoming to the detector. Through a simulation approach, we can perform an early observation of the gamma energy absorption spectrum in a scintillator crystal detector (NaI) before the experiment is conducted. In this paper, we present a simulation model of the gamma energy absorption spectrum for energies of 100-700 keV (i.e. 297 keV, 400 keV and 662 keV). The simulation was developed based on the concept of photon beam point source distribution and the photon interaction cross sections, using the Monte Carlo method. Our computational code successfully predicts the multiple energy peak absorption spectrum derived from multiple photon energy sources.
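A toy version of such a simulation can be sketched as follows; the constant photoelectric fraction and the single-scatter escape assumption are deliberate simplifications, not the cross-section model of the paper:

```python
import random

MEC2 = 511.0  # electron rest energy, keV

def compton_deposit(e_kev, rng):
    """Energy left in the crystal by a single Compton scatter, with the
    scattering angle drawn uniformly in cos(theta) -- a deliberate
    simplification of the Klein-Nishina distribution."""
    cos_t = rng.uniform(-1.0, 1.0)
    e_scattered = e_kev / (1.0 + (e_kev / MEC2) * (1.0 - cos_t))
    return e_kev - e_scattered

def simulate_spectrum(e_kev, n=50_000, p_photo=0.3, seed=7, bin_kev=10):
    """Toy pulse-height spectrum: each photon is either fully absorbed
    (photoelectric, probability p_photo -- an assumed constant, not a
    tabulated cross section) or Compton-scattered once, with the
    scattered photon escaping the crystal."""
    rng = random.Random(seed)
    hist = {}
    for _ in range(n):
        if rng.random() < p_photo:
            dep = e_kev                        # photopeak event
        else:
            dep = compton_deposit(e_kev, rng)  # Compton continuum
        b = int(dep // bin_kev)
        hist[b] = hist.get(b, 0) + 1
    return hist

spec = simulate_spectrum(662.0)
```

For a 662 keV source the histogram shows a photopeak bin plus a Compton continuum ending near the 478 keV Compton edge.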
Inglis, Stephen; Melko, Roger G
2013-01-01
We implement a Wang-Landau sampling technique in quantum Monte Carlo (QMC) simulations for the purpose of calculating the Rényi entanglement entropies and associated mutual information. The algorithm converges an estimate for an analog to the density of states for stochastic series expansion QMC, allowing a direct calculation of Rényi entropies without explicit thermodynamic integration. We benchmark results for the mutual information on two-dimensional (2D) isotropic and anisotropic Heisenberg models, a 2D transverse field Ising model, and a three-dimensional Heisenberg model, confirming a critical scaling of the mutual information in cases with a finite-temperature transition. We discuss the benefits and limitations of broad sampling techniques compared to standard importance sampling methods.
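The flat-histogram idea itself can be shown on a classical toy model; a minimal Wang-Landau sketch for a small 1D Ising ring (parameters and the stopping rule are illustrative choices, not the stochastic series expansion algorithm of the paper):

```python
import math
import random

def wang_landau_ising(n=8, flatness=0.6, lnf_final=1e-3, seed=11):
    """Wang-Landau estimate of ln g(E) for a 1D periodic Ising chain of
    n spins with E = -sum_i s_i s_{i+1}: a classical analogue of the
    flat-histogram sampling the abstract builds into SSE QMC."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    energy = -sum(spins[i] * spins[(i + 1) % n] for i in range(n))
    levels = list(range(-n, n + 1, 4))    # allowed energies on a ring
    lng = {e: 0.0 for e in levels}        # running estimate of ln g(E)
    lnf = 1.0                             # modification factor
    while lnf > lnf_final:
        hist = {e: 0 for e in levels}
        for _ in range(50_000):
            i = rng.randrange(n)
            de = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
            diff = lng[energy] - lng[energy + de]
            if diff >= 0 or rng.random() < math.exp(diff):
                spins[i] = -spins[i]      # accept the single-spin flip
                energy += de
            lng[energy] += lnf
            hist[energy] += 1
        # halve lnf once the visit histogram is sufficiently flat
        if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
            lnf /= 2.0
    return lng
```

Since an 8-spin ring has exact degeneracies g(-8)=2, g(-4)=56, g(0)=140, g(4)=56, g(8)=2, the converged differences in ln g can be checked directly against the logarithms of those ratios.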
Ganesh, P.; Kim, Jeongnim; Park, Changwon; Yoon, Mina; Reboredo, Fernando A.; Kent, Paul R. C.
2014-11-03
Highly accurate diffusion quantum Monte Carlo (QMC) studies of the adsorption and diffusion of atomic lithium in AA-stacked graphite are compared with van der Waals-including density functional theory (DFT) calculations. Predicted QMC lattice constants for pure AA graphite agree with experiment. Pure AA-stacked graphite is shown to challenge many van der Waals methods even when they are accurate for conventional AB graphite. Moreover, the highest overall DFT accuracy, considering pure AA-stacked graphite as well as lithium binding and diffusion, is obtained by the self-consistent van der Waals functional vdW-DF2, although errors in binding energies remain. Empirical approaches based on point charges such as DFT-D are inaccurate unless the local charge transfer is assessed. Our results demonstrate that the lithium-carbon system requires a simultaneous highly accurate description of both charge transfer and van der Waals interactions, favoring self-consistent approaches.
Pasini, J M; Cordero, P
2001-04-01
We study a one-dimensional granular gas of pointlike particles not subject to gravity between two walls at temperatures T(left) and T(right). The system exhibits two distinct regimes, depending on the normalized temperature difference Delta=(T(right)-T(left))/(T(right)+T(left)): one completely fluidized and one in which a cluster coexists with the fluidized gas. When Delta is above a certain threshold, cluster formation is fully inhibited, obtaining a completely fluidized state. The mechanism that produces these two phases is explained. In the fluidized state the velocity distribution function exhibits peculiar non-Gaussian features. For this state, comparison between integration of the Boltzmann equation using the direct-simulation Monte Carlo method and results stemming from microscopic Newtonian molecular dynamics gives good coincidence, establishing that the non-Gaussian features observed do not arise from the onset of correlations.
NASA Technical Reports Server (NTRS)
Palmer, Grant; Prabhu, Dinesh; Cruden, Brett A.
2013-01-01
The 2013-2022 Decadal Survey for planetary exploration has identified probe missions to Uranus and Saturn as high priorities. This work endeavors to examine the uncertainty for determining aeroheating in such entry environments. Representative entry trajectories are constructed using the TRAJ software. Flowfields at selected points on the trajectories are then computed using the Data Parallel Line Relaxation (DPLR) Computational Fluid Dynamics Code. A Monte Carlo study is performed on the DPLR input parameters to determine the uncertainty in the predicted aeroheating, and correlation coefficients are examined to identify which input parameters show the most influence on the uncertainty. A review of the present best practices for input parameters (e.g. transport coefficient and vibrational relaxation time) is also conducted. It is found that the 2(sigma) uncertainty for heating on Uranus entry is no more than 2.1%, assuming an equilibrium catalytic wall, with the uncertainty being determined primarily by diffusion and H(sub 2) recombination rate within the boundary layer. However, if the wall is assumed to be partially or non-catalytic, this uncertainty may increase to as large as 18%. The catalytic wall model can contribute over a 3x change in heat flux and a 20% variation in film coefficient. Therefore, coupled material response/fluid dynamic models are recommended for this problem. It was also found that much of this variability is artificially suppressed when a constant Schmidt number approach is implemented. Because the boundary layer is reacting, it is necessary to employ self-consistent effective binary diffusion to obtain a correct thermal transport solution. For Saturn entries, the 2(sigma) uncertainty for convective heating was less than 3.7%. The major uncertainty driver was dependent on shock temperature/velocity, changing from boundary layer thermal conductivity to diffusivity and then to shock layer ionization rate as velocity increases. While
NASA Astrophysics Data System (ADS)
Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.
2014-05-01
Damage curves are the most significant component of the flood loss estimation models. Their development is quite complex. Two types of damage curves exist, historical and synthetic curves. Historical curves are developed from historical loss data from actual flood events. However, due to the scarcity of historical data, synthetic damage curves can be alternatively developed. Synthetic curves rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach was developed and presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey took place among practicing and research agronomists, in order to generate rural loss data based on the responders' loss estimates, for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented, in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). Each WMCLR code execution
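The weighted resampling step can be sketched minimally as follows; the record values, weights, and jitter model are illustrative assumptions, not the WMCLR implementation:

```python
import random

def weighted_mc_samples(depths, losses, weights, n=1000, jitter=0.05, seed=3):
    """Generate synthetic (depth, loss) pairs by resampling the sparse
    questionnaire records in proportion to expert-assigned weights and
    adding a small multiplicative jitter, so the synthetic set keeps
    the statistical character of the originals."""
    rng = random.Random(seed)
    records = list(zip(depths, losses))
    out = []
    for d, l in rng.choices(records, weights=weights, k=n):
        noisy = min(1.0, max(0.0, l * (1.0 + rng.uniform(-jitter, jitter))))
        out.append((d, noisy))
    return out

# five hypothetical expert estimates of loss fraction vs floodwater depth (m)
depths  = [0.25, 0.5, 1.0, 1.5, 2.0]
losses  = [0.05, 0.15, 0.40, 0.70, 0.90]
weights = [2, 3, 5, 3, 1]   # hypothetical expert-confidence weights
synthetic = weighted_mc_samples(depths, losses, weights)
```

The synthetic pairs can then be fed to a logistic regression fit to trace out a smooth damage curve, which is the second half of the WMCLR pipeline.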
Paul, Sudeshna; Friedman, Alan M.; Bailey-Kellogg, Chris; Craig, Bruce A.
2013-01-01
The interatomic distance distribution, P(r), is a valuable tool for evaluating the structure of a molecule in solution and represents the maximum structural information that can be derived from solution scattering data without further assumptions. Most current instrumentation for scattering experiments (typically CCD detectors) generates a finely pixelated two-dimensional image. In continuation of the standard practice with earlier one-dimensional detectors, these images are typically reduced to a one-dimensional profile of scattering intensities, I(q), by circular averaging of the two-dimensional image. Indirect Fourier transformation methods are then used to reconstruct P(r) from I(q). Substantial advantages in data analysis, however, could be achieved by directly estimating the P(r) curve from the two-dimensional images. This article describes a Bayesian framework, using a Markov chain Monte Carlo method, for estimating the parameters of the indirect transform, and thus P(r), directly from the two-dimensional images. Using simulated detector images, it is demonstrated that this method yields P(r) curves nearly identical to the reference P(r). Furthermore, an approach for evaluating spatially correlated errors (such as those that arise from a detector point spread function) is evaluated. Accounting for these errors further improves the precision of the P(r) estimation. Experimental scattering data, where no ground truth reference P(r) is available, are used to demonstrate that this method yields a scattering and detector model that more closely reflects the two-dimensional data, as judged by smaller residuals in cross-validation, than P(r) obtained by indirect transformation of a one-dimensional profile. Finally, the method allows concurrent estimation of the beam center and D max, the longest interatomic distance in P(r), as part of the Bayesian Markov chain Monte Carlo method, reducing experimental effort and providing a well defined protocol for these
Paul, Sudeshna; Friedman, Alan M; Bailey-Kellogg, Chris; Craig, Bruce A
2013-04-01
The interatomic distance distribution, P(r), is a valuable tool for evaluating the structure of a molecule in solution and represents the maximum structural information that can be derived from solution scattering data without further assumptions. Most current instrumentation for scattering experiments (typically CCD detectors) generates a finely pixelated two-dimensional image. In continuation of the standard practice with earlier one-dimensional detectors, these images are typically reduced to a one-dimensional profile of scattering intensities, I(q), by circular averaging of the two-dimensional image. Indirect Fourier transformation methods are then used to reconstruct P(r) from I(q). Substantial advantages in data analysis, however, could be achieved by directly estimating the P(r) curve from the two-dimensional images. This article describes a Bayesian framework, using a Markov chain Monte Carlo method, for estimating the parameters of the indirect transform, and thus P(r), directly from the two-dimensional images. Using simulated detector images, it is demonstrated that this method yields P(r) curves nearly identical to the reference P(r). Furthermore, an approach for evaluating spatially correlated errors (such as those that arise from a detector point spread function) is evaluated. Accounting for these errors further improves the precision of the P(r) estimation. Experimental scattering data, where no ground truth reference P(r) is available, are used to demonstrate that this method yields a scattering and detector model that more closely reflects the two-dimensional data, as judged by smaller residuals in cross-validation, than P(r) obtained by indirect transformation of a one-dimensional profile. Finally, the method allows concurrent estimation of the beam center and Dmax, the longest interatomic distance in P(r), as part of the Bayesian Markov chain Monte Carlo method, reducing experimental effort and providing a well defined protocol for these
NASA Technical Reports Server (NTRS)
Tsang, L.; Lou, S. H.; Chan, C. H.
1991-01-01
The extended boundary condition method is applied to Monte Carlo simulations of two-dimensional random rough surface scattering. The numerical results are compared with one-dimensional random rough surfaces obtained from the finite-element method. It is found that the mean scattered intensity from two-dimensional rough surfaces differs from that of one dimension for rough surfaces with large slopes.
NASA Technical Reports Server (NTRS)
Jensen, K. A.; Ripoll, J.-F.; Wray, A. A.; Joseph, D.; ElHafi, M.
2004-01-01
Five computational methods for solution of the radiative transfer equation in an absorbing-emitting and non-scattering gray medium were compared on a 2 m JP-8 pool fire. The temperature and absorption coefficient fields were taken from a synthetic fire due to the lack of a complete set of experimental data for fires of this size. These quantities were generated by a code that has been shown to agree well with the limited quantity of relevant data in the literature. Reference solutions to the governing equation were determined using the Monte Carlo method and a ray tracing scheme with high angular resolution. Solutions using the discrete transfer method, the discrete ordinate method (DOM) with both S(sub 4) and LC(sub 11) quadratures, and a moment model using the M(sub 1) closure were compared to the reference solutions in both isotropic and anisotropic regions of the computational domain. DOM LC(sub 11) is shown to be more accurate than the commonly used S(sub 4) quadrature technique, especially in anisotropic regions of the fire domain. This represents the first study where the M(sub 1) method was applied to a combustion problem occurring in a complex three-dimensional geometry. The M(sub 1) results agree well with other solution techniques, which is encouraging for future applications to similar problems since it is computationally the least expensive solution technique. Moreover, M(sub 1) results are comparable to DOM S(sub 4).
Monte Carlo comparison of preliminary methods for ordering multiple genetic loci.
Olson, J M; Boehnke, M
1990-09-01
We carried out a simulation study to compare the power of eight methods for preliminary ordering of multiple genetic loci. Using linkage groups of six loci and a simple pedigree structure, we considered the effects on method performance of locus informativity, interlocus spacing, total distance along the chromosome, and sample size. Method performance was assessed using the mean rank of the true order, the proportion of replicates in which the true order was the best order, and the number of orders that needed to be considered for subsequent multipoint linkage analysis in order to include the true order with high probability. A new method which maximizes the sum of adjacent two-point maximum lod scores divided by the equivalent number of informative meioses and the previously described method which minimizes the sum of adjacent recombination fraction estimates were found to be the best overall locus-ordering methods for the situations considered, although several other methods also performed well.
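The "minimize the sum of adjacent recombination fraction estimates" criterion can be sketched as an exhaustive search over orders; the pairwise estimates below are illustrative, not data from the study:

```python
from itertools import permutations

# hypothetical pairwise recombination-fraction estimates for four loci
THETA = {("A", "B"): 0.09, ("B", "C"): 0.13, ("C", "D"): 0.12,
         ("A", "C"): 0.20, ("B", "D"): 0.23, ("A", "D"): 0.28}

def best_order(loci, theta):
    """Score every ordering by the sum of adjacent pairwise
    recombination-fraction estimates and return the smallest; an order
    and its reverse tie, so the first one encountered is kept."""
    def r(a, b):
        return theta.get((a, b), theta.get((b, a)))

    best, best_score = None, float("inf")
    for order in permutations(loci):
        score = sum(r(order[i], order[i + 1]) for i in range(len(order) - 1))
        if score < best_score:
            best, best_score = order, score
    return list(best), best_score
```

For the six-locus linkage groups of the study this is only 720 orders; the other scoring criteria compared (e.g. the sum of adjacent two-point lod scores per informative meiosis) plug into the same search by swapping the scoring function.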
Shi, C. Y.; Xu, X. George; Stabin, Michael G.
2008-07-15
Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.
NASA Astrophysics Data System (ADS)
Benhdech, Yassine; Beaumont, Stéphane; Guédon, Jean-Pierre; Torfeh, Tarraf
2010-04-01
In this paper, we extend the R&D program named DTO-DC (Digital Object Test and Dosimetric Console), whose goal is to develop an efficient, accurate and complete method to achieve dosimetric quality control (QC) of radiotherapy treatment planning systems (TPS). This method is mainly based on Digital Test Objects (DTOs) and on Monte Carlo (MC) simulation using the PENELOPE code [1]. These benchmark simulations can advantageously replace the experimental measurements typically used as references for comparison with TPS-calculated dose. Indeed, MC simulations rather than dosimetric measurements allow QC without tying up treatment devices and offer, in many situations (e.g. heterogeneous media, lack of scattering volume...), better accuracy than dose measurements with the classical dosimetry equipment of a radiation therapy department. Furthermore, using MC simulations and DTOs, i.e. fully numerical QC tools, will also simplify QC implementation and enable process automation, allowing radiotherapy centers to have a more complete and thorough QC. The DTO-DC program was established primarily on an ELEKTA accelerator (photon mode) using non-anatomical DTOs [2]. Today our aim is to complete and apply this program on a VARIAN accelerator (photon and electron modes) using anatomical DTOs. First, we developed, modeled and created three anatomical DTOs in DICOM format: 'Head and Neck', Thorax and Pelvis. We parallelized the PENELOPE code using MPI libraries to accelerate the calculations, and we modeled the treatment head of the Varian Clinac 2100CD (photon mode) in the PENELOPE geometry. Then, to implement this method, we calculated the dose distributions in the Pelvis DTO using PENELOPE and the ECLIPSE TPS. Finally we compared simulated and calculated dose distributions employing the relative difference proposed by Venselaar [3]. The results of this work demonstrate the feasibility of this method, which provides a more accurate and easily achievable QC. Nonetheless, this method, implemented
ERIC Educational Resources Information Center
Carsey, Thomas M.; Harden, Jeffrey J.
2015-01-01
Graduate students in political science come to the discipline interested in exploring important political questions, such as "What causes war?" or "What policies promote economic growth?" However, they typically do not arrive prepared to address those questions using quantitative methods. Graduate methods instructors must…
Monte Carlo Library Least Square (MCLLS) Method for Multiple Radioactive Particle Tracking in PBR
NASA Astrophysics Data System (ADS)
Wang, Zhijian; Lee, Kyoung; Gardner, Robin
2010-03-01
In this work, a new method of radioactive particle tracking is proposed. Accurate Detector Response Functions (DRFs) were developed from MCNP5 to generate a library for NaI detectors, with a significant speed-up factor of 200. This makes feasible the MCLLS method, which locates and tracks a radioactive particle in a modular Pebble Bed Reactor (PBR) by searching for minimum chi-square values. The method was tested and performed well under our laboratory conditions with an array of only six 2" X 2" NaI detectors. The method was introduced in both forward and inverse forms. A single radioactive particle tracking system with three collimated 2" X 2" NaI detectors was used for benchmark purposes.
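The library-plus-least-squares search can be sketched with a toy response model; the inverse-square responses below stand in for the MCNP5-generated DRF library, and all positions and parameters are hypothetical:

```python
# hypothetical detector positions (x, y) in cm around the vessel
DETECTORS = [(0.0, 30.0), (30.0, 0.0), (0.0, -30.0), (-30.0, 0.0),
             (21.2, 21.2), (-21.2, -21.2)]

def response(src, det, activity=1.0e4):
    """Library entry: expected count rate for a source at `src` seen by
    detector `det`, modeled here as simple inverse-square falloff (a
    stand-in for a Monte Carlo-generated detector response function)."""
    d2 = (src[0] - det[0]) ** 2 + (src[1] - det[1]) ** 2
    return activity / (1.0 + d2)          # +1 avoids the singularity

def locate(measured, step=1.0, half=25.0):
    """Grid search for the source position minimizing the chi-square
    between measured counts and the library responses."""
    best, best_chi2 = None, float("inf")
    n = int(2 * half / step) + 1
    for i in range(n):
        for j in range(n):
            pos = (-half + i * step, -half + j * step)
            chi2 = sum((m - response(pos, d)) ** 2 / max(m, 1.0)
                       for m, d in zip(measured, DETECTORS))
            if chi2 < best_chi2:
                best, best_chi2 = pos, chi2
    return best

true_pos = (5.0, -8.0)
measured = [response(true_pos, d) for d in DETECTORS]
```

Tracking a moving particle then amounts to repeating the minimization for each counting interval.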
Advanced computational methods for nodal diffusion, Monte Carlo, and S{sub n} problems. Final Report
1994-12-31
The work addresses basic computational difficulties that arise in the numerical simulation of neutral particle radiation transport: discretized radiation transport problems, iterative methods, selection of parameters, and extension of current algorithms.
Biotic indices have been used to assess biological condition by dividing index scores into condition categories. Historically the number of categories has been based on professional judgement. Alternatively, statistical methods such as power analysis can be used to determine the ...
Booth, George H; Cleland, Deidre; Thom, Alex J W; Alavi, Ali
2011-08-28
The full configuration interaction quantum Monte Carlo (FCIQMC) method, as well as its "initiator" extension (i-FCIQMC), is used to tackle the complex electronic structure of the carbon dimer across the entire dissociation reaction coordinate, as a prototypical example of a strongly correlated molecular system. Various basis sets of increasing size up to the large cc-pVQZ are used, spanning a fully accessible N-electron basis of over 10^12 Slater determinants, and the accuracy of the method is demonstrated in each basis set. Convergence to the FCI limit is achieved in the largest basis with only O(10^7) walkers, within random error bars of a few tenths of a millihartree across the binding curve, and extensive comparisons to FCI, CCSD(T), MRCI, and CEEIS results are made where possible. A detailed exposition of the convergence properties of the FCIQMC methods is provided, considering convergence with elapsed imaginary time, number of walkers and size of the basis. Various symmetries that can be incorporated into the stochastic dynamics, beyond the standard abelian point group symmetry and spin polarisation, are also described. These can significantly reduce the computational effort of the calculations, as well as enable convergence to various excited states. The results presented demonstrate a new benchmark accuracy in basis-set energies for systems of this size, significantly improving on previous state-of-the-art estimates.
Geng, Bo; Zhou, Xiaobo; Zhu, Jinmin; Hung, Y S; Wong, Stephen T C
2008-04-01
Computational identification of missing enzymes plays a significant role in the accurate and complete reconstruction of metabolic networks for both newly sequenced and well-studied organisms. For a metabolic reaction, given a set of candidate enzymes identified according to certain biological evidence, a powerful mathematical model is required to predict the actual enzyme(s) catalyzing the reaction. In this study, several plausible predictive methods are considered for the classification problem in missing enzyme identification, and comparisons are performed with an aim to identify a method with better performance than the Bayesian model used in previous work. In particular, a regression model consisting of a linear term and a nonlinear term is proposed for the problem, in which the reversible-jump Markov chain Monte Carlo (MCMC) learning technique (developed in [Andrieu C, de Freitas N, Doucet A. Robust full Bayesian learning for radial basis networks. 2001;13:2359-407]) is adopted to estimate the model order and the parameters. We evaluated the models using known reactions in the bacteria Escherichia coli, Mycobacterium tuberculosis, Vibrio cholerae and Caulobacter crescentus, as well as one eukaryotic organism, Saccharomyces cerevisiae. Although support vector regression also exhibits comparable performance in this application, it was demonstrated that the proposed model achieves favorable prediction performance, particularly sensitivity, compared with the Bayesian method.
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo H.; Good, Brian; Noebe, Ronald D.; Honecy, Frank; Abel, Phillip
1999-01-01
Large-scale simulations of dynamic processes at the atomic level have developed into one of the main areas of work in computational materials science. Until recently, severe computational restrictions, as well as the lack of accurate methods for calculating the energetics, resulted in slower growth in the area than that required by current alloy design programs. The Computational Materials Group at the NASA Lewis Research Center is devoted to the development of powerful, accurate, economical tools to aid in alloy design. These include the BFS (Bozzolo, Ferrante, and Smith) method for alloys (ref. 1) and the development of dedicated software for large-scale simulations based on Monte Carlo- Metropolis numerical techniques, as well as state-of-the-art visualization methods. Our previous effort linking theoretical and computational modeling resulted in the successful prediction of the microstructure of a five-element intermetallic alloy, in excellent agreement with experimental results (refs. 2 and 3). This effort also produced a complete description of the role of alloying additions in intermetallic binary, ternary, and higher order alloys (ref. 4).
NASA Astrophysics Data System (ADS)
Pan, J.; Durand, M. T.; Vanderjagt, B. J.
2015-12-01
The Markov chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule: it starts from an initial state of snow/soil parameters and updates it to a series of new states by comparing the posterior probability of the simulated snow microwave signals before and after each random-walk step. It yields an approximation to the probability of the snow/soil parameters conditioned on the microwave TB signals measured at different bands. Although this method can solve for all snow parameters, including depth, density, grain size and temperature, at the same time, it still needs prior information on these parameters for the posterior probability calculation, and how these priors influence the SWE retrieval is a major concern. Therefore, in this paper a sensitivity test will first be carried out to study how accurate the snow emission models, and how explicit the snow priors, need to be to keep the SWE error within a given bound. Synthetic TB simulated from measured snow properties, plus a 2-K observation error, will be used for this purpose; the aim is to provide guidance on applying MCMC under different circumstances. Later, the method will be applied to snowpits at different sites, including Sodankyla (Finland), Churchill (Canada) and Colorado (USA), using TB measured by ground-based radiometers at different bands. Building on the previous work, the error in these practical cases will be studied, and the error sources will be separated and quantified.
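The random-walk update and acceptance test described above can be sketched with a toy one-parameter retrieval. The linear forward model, the flat prior range, and the step size below are illustrative assumptions, not the paper's snow emission model:

```python
import numpy as np

# Minimal random-walk Metropolis sketch: a toy forward model maps snow depth
# to a brightness temperature TB, and the chain accepts or rejects proposals
# by comparing posterior probabilities before and after each step.

rng = np.random.default_rng(1)

def forward_tb(depth_cm):
    """Toy emission model (assumed): TB decreases linearly with snow depth."""
    return 260.0 - 0.4 * depth_cm

true_depth = 80.0
tb_obs = forward_tb(true_depth) + rng.normal(0.0, 2.0)  # 2-K observation error

def log_posterior(depth):
    if not 0.0 < depth < 300.0:          # flat prior on a physical range
        return -np.inf
    resid = tb_obs - forward_tb(depth)
    return -0.5 * (resid / 2.0) ** 2     # Gaussian likelihood, sigma = 2 K

depth, samples = 50.0, []
lp = log_posterior(depth)
for _ in range(20000):
    prop = depth + rng.normal(0.0, 5.0)  # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance
        depth, lp = prop, lp_prop
    samples.append(depth)

post = np.array(samples[5000:])          # discard burn-in
print(post.mean(), post.std())           # posterior mean and spread
```

The posterior spread directly reflects how strongly the 2-K observation error and the prior constrain the retrieved depth, which is exactly the sensitivity the paper's test probes.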
Monte Carlo simulations of Higgs-boson production at the LHC with the KrkNLO method
NASA Astrophysics Data System (ADS)
Jadach, S.; Nail, G.; Płaczek, W.; Sapeta, S.; Siódmok, A.; Skrzypek, M.
2017-03-01
We present numerical tests and predictions of the KrkNLO method for matching of NLO QCD corrections to hard processes with LO parton-shower Monte Carlo generators (NLO+PS). This method was described in detail in our previous publications, where it was also compared with other NLO+PS matching approaches ( MC@NLO and POWHEG) as well as fixed-order NLO and NNLO calculations. Here we concentrate on presenting some numerical results (cross sections and distributions) for Z/γ ^* (Drell-Yan) and Higgs-boson production processes at the LHC. The Drell-Yan process is used mainly to validate the KrkNLO implementation in the Herwig 7 program with respect to the previous implementation in Sherpa. We also show predictions for this process with the new, complete, MC-scheme parton distribution functions and compare them with our previously published results. Then we present the first results of the KrkNLO method for Higgs production in gluon-gluon fusion at the LHC and compare them with MC@NLO and POWHEG predictions from Herwig 7, fixed-order results from HNNLO and a resummed calculation from HqT, as well as with experimental data from the ATLAS collaboration.
Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming
2011-02-01
High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in the media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the MC simulation's advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-mismatched boundaries, the GPU-cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
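The load-balancing idea, splitting the independent Green's-function calculations across workers, can be sketched without any MPI machinery. The function name and the weighted-share scheme are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of static load balancing: N Green's-function source
# calculations are split across a cluster of workers (GPUs in the paper) so
# each worker gets a share proportional to its speed.

def partition_jobs(n_jobs, weights):
    """Assign job indices to workers proportionally to their weights."""
    total = sum(weights)
    shares = [w / total * n_jobs for w in weights]
    counts = [int(s) for s in shares]
    # hand the remainder to the workers with the largest fractional share
    remainder = n_jobs - sum(counts)
    order = sorted(range(len(weights)),
                   key=lambda i: shares[i] - counts[i], reverse=True)
    for i in order[:remainder]:
        counts[i] += 1
    assignments, start = [], 0
    for c in counts:
        assignments.append(list(range(start, start + c)))
        start += c
    return assignments

# 10 Green's-function jobs over 3 workers, one twice as fast as the others
parts = partition_jobs(10, [2.0, 1.0, 1.0])
print([len(p) for p in parts])  # → [5, 3, 2]
```

In an MPI setting each rank would then compute only its own index list, which is what makes the Green's-function phase embarrassingly parallel.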
Çatli, Serap
2015-09-01
The high atomic number and density of dental implants lead to major problems in providing an accurate dose distribution in radiotherapy, and to difficulties in contouring tumors and organs caused by artifacts in head and neck tumors. The limits and deficiencies of the algorithms used in treatment planning systems can lead to large errors in dose calculation, and this may adversely affect the patient's treatment. In the present study, four commercial dental implant materials were used: pure titanium, titanium alloy (Ti-6Al-4V), amalgam, and crown. The effects of dental implants on dose distribution were determined with two methods: the pencil beam convolution (PBC) algorithm and a Monte Carlo code, for a 6 MV photon beam. The central-axis depth doses were calculated on the phantom for a source-skin distance (SSD) of 100 cm and a 10×10 cm2 field using both algorithms. The results of the Monte Carlo method and the Eclipse TPS were compared to each other and to those previously reported. In the present study, dose increases were seen in tissue at a distance of 2 mm in front of the dental implants, due to the backscatter of electrons, at 6 MV using the Monte Carlo method. The Eclipse treatment planning system (TPS) could not precisely account for the backscatter radiation caused by the dental prostheses: the TPS underestimated the backscatter dose and overestimated the dose beyond the dental implants. The large errors found for the TPS in this study are due to the limits and deficiencies of its algorithms. The accuracy of the PBC algorithm of the Eclipse TPS was evaluated against Monte Carlo calculations in consideration of the recommendations of the American Association of Physicists in Medicine Radiation Therapy Committee Task Group 65. From the comparisons of the TPS and Monte Carlo calculations, it is verified that Monte Carlo simulation is a good approach to derive the dose distribution in heterogeneous media. PACS numbers: 87.55.K.
NASA Astrophysics Data System (ADS)
Cepeda, Jose; Luna, Byron Quan; Nadim, Farrokh
2013-04-01
An essential component of a quantitative landslide hazard assessment is establishing the extent of the endangered area. This task requires accurate prediction of the run-out behaviour of a landslide, which includes estimation of the run-out distance, run-out width, velocities, pressures, and depth of the moving mass, and the final configuration of the deposits. One approach to run-out modelling is to reproduce accurately the dynamics of the propagation processes. A number of dynamic numerical models are able to compute the movement of the flow over irregular topographic terrain (3-D), controlled by a complex interaction between mechanical properties that may vary in space and time. Given the number of unknown parameters, and the fact that most of the rheological parameters cannot be measured in the laboratory or field, the parametrization of run-out models is very difficult in practice. For this reason, run-out models are mostly applied to the back-analysis of past events, and very few studies have attempted forward predictions. Consequently, all models are based on simplified descriptions that attempt to reproduce the general features of the failed mass motion through the use of parameters (mostly controlling shear stresses at the base of the moving mass) that account for aspects not explicitly described or oversimplified. The uncertainties involved in the run-out process have to be approached in a stochastic manner. It is of significant importance to develop methods for quantifying and properly handling the uncertainties in dynamic run-out models, in order to allow a more comprehensive approach to quantitative risk assessment. A method was developed to compute the variation in run-out intensities by using a dynamic run-out model (MassMov2D) and a probabilistic framework based on a Monte Carlo simulation, in order to analyze the effect of the uncertainty of the input parameters. The probability density functions of the rheological parameters
Restricted Collision List method for faster Direct Simulation Monte-Carlo (DSMC) collisions
Macrossan, Michael N.
2016-08-15
The ‘Restricted Collision List’ (RCL) method for speeding up the calculation of DSMC Variable Soft Sphere collisions, with Borgnakke–Larsen (BL) energy exchange, is presented. The method cuts down considerably on the number of random collision parameters which must be calculated (deflection and azimuthal angles, and the BL energy exchange factors). A relatively short list of these parameters is generated and the parameters required in any cell are selected from this list. The list is regenerated at intervals approximately equal to the smallest mean collision time in the flow, and the chance of any particle re-using the same collision parameters in two successive collisions is negligible. The results using this method are indistinguishable from those obtained with standard DSMC. The CPU time saving depends on how much of a DSMC calculation is devoted to collisions and how much is devoted to other tasks, such as moving particles and calculating particle interactions with flow boundaries. For 1-dimensional calculations of flow in a tube, the new method saves 20% of the CPU time per collision for VSS scattering with no energy exchange. With RCL applied to rotational energy exchange, the CPU saving can be greater; for small values of the rotational collision number, for which most collisions involve some rotational energy exchange, the CPU may be reduced by 50% or more.
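The RCL bookkeeping, pre-computing a short list of collision parameters, drawing from it per collision, and regenerating it at intervals, can be sketched as follows. The list size, refresh interval, and function names are illustrative assumptions; only the VSS deflection sampling follows the standard formula:

```python
import math
import random

# Illustrative sketch of the Restricted Collision List idea: instead of
# drawing fresh deflection/azimuthal angles for every collision, a short
# pre-computed list of parameter tuples is sampled from, and the list is
# regenerated periodically (roughly every smallest mean collision time).

random.seed(42)
LIST_SIZE = 256

def make_collision_list(alpha=1.0):
    """Pre-compute VSS deflection cosines and azimuthal angles."""
    entries = []
    for _ in range(LIST_SIZE):
        cos_chi = 2.0 * random.random() ** (1.0 / alpha) - 1.0  # VSS deflection
        phi = 2.0 * math.pi * random.random()                   # azimuthal angle
        entries.append((cos_chi, phi))
    return entries

collision_list = make_collision_list()

def next_collision_params():
    """Pick a random pre-computed entry; cheaper than sampling per collision."""
    return collision_list[random.randrange(LIST_SIZE)]

# During the simulation the list is refreshed at fixed intervals:
steps_per_refresh = 100
for step in range(1000):
    if step % steps_per_refresh == 0:
        collision_list = make_collision_list()
    cos_chi, phi = next_collision_params()
    assert -1.0 <= cos_chi <= 1.0 and 0.0 <= phi < 2.0 * math.pi
```

Because the list is far shorter than the number of collisions per refresh interval, the per-collision cost reduces to one random index lookup, which is the source of the reported CPU savings.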
NASA Technical Reports Server (NTRS)
La Budde, R. A.
1972-01-01
Sampling techniques have been used previously to evaluate Jacobian determinants that occur in classical mechanical descriptions of molecular scattering. These determinants also occur in the quasiclassical approximation. A new technique is described which can be used to evaluate Jacobian determinants which occur in either description. This method is expected to be valuable in the study of reactive scattering using the quasiclassical approximation.
A New Monte Carlo Filtering Method for the Diagnosis of Mission-Critical Failures
NASA Technical Reports Server (NTRS)
Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen
2009-01-01
Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, it finds the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, identify the settings most likely to cause a mission-critical failure. This research benchmarks two treatment learning methods against standard optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. It is shown that these treatment learners are faster than traditional methods and produce demonstrably better results.
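The core treatment-learning idea, scoring single attribute constraints by how much they shift the class distribution toward failure, can be sketched on a toy data set. The rows, attribute names, and scoring function are invented for illustration and are not the benchmarked learners:

```python
from collections import Counter

# Toy sketch of the treatment-learning score: for each attribute=value
# "treatment", measure how strongly restricting the data to matching rows
# shifts the class distribution toward the failure class.

rows = [
    # (settings dict, simulation outcome)
    ({"valve": "open",   "mode": "auto"},   "fail"),
    ({"valve": "open",   "mode": "manual"}, "fail"),
    ({"valve": "closed", "mode": "auto"},   "pass"),
    ({"valve": "closed", "mode": "manual"}, "pass"),
    ({"valve": "open",   "mode": "auto"},   "fail"),
    ({"valve": "closed", "mode": "auto"},   "pass"),
]

def failure_rate(subset):
    counts = Counter(outcome for _, outcome in subset)
    return counts["fail"] / len(subset) if subset else 0.0

baseline = failure_rate(rows)
treatments = {(attr, val)
              for settings, _ in rows for attr, val in settings.items()}

def lift(treatment):
    """How much this single constraint raises the failure rate."""
    attr, val = treatment
    subset = [r for r in rows if r[0][attr] == val]
    return failure_rate(subset) - baseline

best = max(treatments, key=lift)
print(best)  # → ('valve', 'open'), the setting most associated with failure
```

Real treatment learners search conjunctions of constraints and trade support against lift, but the ranking principle is the same.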
An automated Monte-Carlo based method for the calculation of cascade summing factors
NASA Astrophysics Data System (ADS)
Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.
2016-10-01
A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.
NASA Astrophysics Data System (ADS)
Bodammer, N. C.; Kaufmann, J.; Kanowski, M.; Tempelmann, C.
2009-02-01
Diffusion tensor tractography (DTT) allows one to explore axonal connectivity patterns in neuronal tissue by linking local predominant diffusion directions determined by diffusion tensor imaging (DTI). The majority of existing tractography approaches use continuous coordinates for calculating single trajectories through the diffusion tensor field. The tractography algorithm we propose is characterized by (1) a trajectory propagation rule that uses voxel centres as vertices and (2) orientation probabilities for the calculated steps in a trajectory that are obtained from the diffusion tensors of either two or three voxels. These voxels include the last voxel of each previous step and one or two candidate successor voxels. The precision and the accuracy of the suggested method are explored with synthetic data. Results clearly favour probabilities based on two consecutive successor voxels. Evidence is also provided that in any voxel-centre-based tractography approach, there is a need for a probability correction that takes into account the geometry of the acquisition grid. Finally, we provide examples in which the proposed fibre-tracking method is applied to the human optical radiation, the cortico-spinal tracts and to connections between Broca's and Wernicke's area to demonstrate the performance of the proposed method on measured data.
NASA Astrophysics Data System (ADS)
Lysak, Y. V.; Klimanov, V. A.; Narkevich, B. Ya
2017-01-01
One of the most difficult problems of modern radionuclide therapy (RNT) is control of the absorbed dose in the pathological volume. This research presents a new approach based on estimating the accumulated activity of the radiopharmaceutical (RP) in the tumor volume from planar scintigraphic images of the patient, with the radiation transport calculated by the Monte Carlo method, including absorption and scattering in the biological tissues of the patient and in the elements of the gamma camera itself. To obtain the data, we modeled gamma-camera scintigraphy of a vial containing the RP activity administered to the patient, with the vial placed at a certain distance from the collimator, and performed a similar study in identical geometry with the same RP activity in the pathological target inside the patient's body. For correct calculation results, an adapted Fisher-Snyder human phantom was simulated in the MCNP program. Within this technique, calculations were performed for different sizes of pathological targets and various tumor depths inside the patient's body, using radiopharmaceuticals based on mixed β-γ-emitting (131I, 177Lu) and pure β-emitting (89Sr, 90Y) therapeutic radionuclides. The presented method can be used to estimate, with sufficient accuracy for clinical practice, the absorbed doses in the regions of interest on the basis of planar scintigraphy of the patient.
Lumme, Sonja; Sund, Reijo; Leyland, Alastair H; Keskimäki, Ilmo
In this paper, we introduce several statistical methods to evaluate the uncertainty in the concentration index (C) for measuring socioeconomic equality in health and health care using aggregated total population register data. The C is a widely used index when measuring socioeconomic inequality, but previous studies have mainly focused on developing statistical inference for sampled data from population surveys. While data from large population-based or national registers provide complete coverage, registration comprises several sources of error. We simulate confidence intervals for the C with different Monte Carlo approaches, which take into account the nature of the population data. As an empirical example, we have an extensive dataset from the Finnish cause-of-death register on mortality amenable to health care interventions between 1996 and 2008. Amenable mortality has been often used as a tool to capture the effectiveness of health care. Thus, inequality in amenable mortality provides evidence on weaknesses in health care performance between socioeconomic groups. Our study shows using several approaches with different parametric assumptions that previously introduced methods to estimate the uncertainty of the C for sampled data are too conservative for aggregated population register data. Consequently, we recommend that inequality indices based on the register data should be presented together with an approximation of the uncertainty and suggest using a simulation approach we propose. The approach can also be adapted to other measures of equality in health.
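The simulation idea above, perturbing register counts with the noise model appropriate to complete-coverage data and recomputing the index, can be sketched for the concentration index C. The five income-group figures are invented for illustration; only the grouped-data formula for C (twice the covariance of the rate with fractional rank, over the mean rate) is standard:

```python
import numpy as np

# Hedged sketch of the Monte Carlo approach: compute the concentration index
# C from grouped register data, then approximate its uncertainty by
# resampling the death counts with Poisson noise, which suits
# complete-coverage register data better than survey-based variance formulas.

pop = np.array([200000, 200000, 200000, 200000, 200000])  # group sizes, poorest first
deaths = np.array([900, 750, 620, 500, 400])              # amenable deaths (invented)

def concentration_index(pop, deaths):
    """C = 2*cov(rate, fractional rank) / mean rate, for grouped data."""
    rate = deaths / pop
    share = pop / pop.sum()
    rank = np.cumsum(share) - 0.5 * share    # midpoint fractional ranks
    mu = np.average(rate, weights=share)
    cov = (np.average(rate * rank, weights=share)
           - mu * np.average(rank, weights=share))
    return 2.0 * cov / mu

rng = np.random.default_rng(7)
sims = [concentration_index(pop, rng.poisson(deaths)) for _ in range(5000)]
ci_lo, ci_hi = np.percentile(sims, [2.5, 97.5])
print(concentration_index(pop, deaths), (ci_lo, ci_hi))
```

A negative C here indicates mortality concentrated among the poorer groups; the simulated percentile interval is the kind of uncertainty approximation the paper recommends reporting alongside the index.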
Use of the Monte Carlo Method for OECD Principles-Guided QSAR Modeling of SIRT1 Inhibitors.
Kumar, Ashwani; Chauhan, Shilpi
2017-01-01
SIRT1 inhibitors offer therapeutic potential for the treatment of a number of diseases including cancer and human immunodeficiency virus infection. A diverse series of 45 compounds with reported SIRT1 inhibitory activity has been employed for the development of quantitative structure-activity relationship (QSAR) models using the Monte Carlo optimization method. This method makes use of simplified molecular input line entry system notation of the molecular structure. The QSAR models were built up according to OECD principles. Three subsets of three splits were examined and validated by respective external sets. All the three described models have good statistical quality. The best model has the following statistical characteristics: R(2) = 0.8350, Q(2)test = 0.7491 for the test set and R(2) = 0.9655, Q(2)ext = 0.9261 for the validation set. In the mechanistic interpretation, structural attributes responsible for the endpoint increase and decrease are defined. Further, the design of some prospective SIRT1 inhibitors is also presented on the basis of these structural attributes.
Accelerated kinetics of amorphous silicon using an on-the-fly off-lattice kinetic Monte-Carlo method
NASA Astrophysics Data System (ADS)
Joly, Jean-Francois; El-Mellouhi, Fedwa; Beland, Laurent Karim; Mousseau, Normand
2011-03-01
The time evolution of a series of well-relaxed amorphous silicon models was simulated using the kinetic Activation-Relaxation Technique (kART), an on-the-fly off-lattice kinetic Monte Carlo method. This novel algorithm uses the ART nouveau algorithm to generate activated events and links them with local topologies. It was shown to work well for crystals with few defects, but this is the first time it has been used to study an amorphous material. A parallel implementation allows us to increase the speed of the event generation phase. After each KMC step, new searches are initiated for each new topology encountered. Well-relaxed amorphous silicon models of 1000 atoms, described by a modified version of the empirical Stillinger-Weber potential, were used as a starting point for the simulations. Initial results show that the method is faster by orders of magnitude compared to conventional MD simulations at temperatures up to 500 K. Vacancy-type defects were also introduced in this system, and their stability and lifetimes are calculated.
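The KMC step underlying methods like kART can be sketched generically (the kART specifics, such as ART nouveau event searches and topology catalogs, are beyond this illustration): pick an activated event with probability proportional to its rate and advance the clock by an exponentially distributed residence time. The barriers and prefactor below are hypothetical:

```python
import math
import random

# Generic kinetic Monte Carlo step: harmonic transition-state-theory rates,
# rate-proportional event selection, and exponential time advance.

random.seed(3)
KB_T = 0.025  # eV, roughly room temperature

def rate(barrier_ev, prefactor=1e13):
    """hTST rate for an activated event with the given energy barrier."""
    return prefactor * math.exp(-barrier_ev / KB_T)

def kmc_step(barriers, t):
    rates = [rate(b) for b in barriers]
    total = sum(rates)
    # choose event i with probability rates[i] / total
    r, acc, chosen = random.random() * total, 0.0, 0
    for i, k in enumerate(rates):
        acc += k
        if r <= acc:
            chosen = i
            break
    t += -math.log(random.random()) / total   # exponential residence time
    return chosen, t

barriers = [0.45, 0.50, 0.62]   # eV, hypothetical event catalog for one state
event, t = kmc_step(barriers, 0.0)
print(event, t)
```

The exponential sensitivity of the rates to the barriers is what lets KMC reach timescales far beyond molecular dynamics, which is the speed-up the abstract reports.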
Shin, Younghoon; Kwon, Hyuk-Sang
2016-03-21
We propose a Monte Carlo (MC) method based on a direct photon flux recording strategy using an inhomogeneous, meshed rodent brain atlas. This MC method was inspired by and dedicated to fibre-optics-based optogenetic neural stimulation, and thus provides an accurate and direct solution for light intensity distributions in brain regions with different optical properties. Our model was used to estimate the 3D light intensity attenuation for the close proximity between an implanted optical fibre source and a neural target area in typical optogenetics applications. Interestingly, there are discrepancies with studies using a diffusion-based light intensity prediction model, perhaps due to the use of improper light scattering models developed for far-field problems. Our solution was validated by comparison with the gold-standard MC model, and it enabled accurate calculations of internal intensity distributions in an inhomogeneous near-source domain. Thus our strategy can be applied to studying how illuminated light spreads through an inhomogeneous brain area, or to determining the amount of light required for optogenetic manipulation of a specific neural target area.
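The photon-recording strategy can be sketched for a homogeneous medium: launch weighted photon packets from a point source, propagate them with exponentially sampled steps, and deposit absorbed weight into radial shells. The optical coefficients are illustrative, not brain-tissue values, and isotropic scattering stands in for the anisotropic phase functions used in real tissue models:

```python
import math
import random

# Simplified direct-recording photon Monte Carlo in a homogeneous medium:
# packets start at the fibre tip, take exponential steps, scatter
# isotropically, and deposit absorbed weight into 0.1 mm radial shells.

random.seed(5)
MU_A, MU_S = 0.6, 10.0          # absorption/scattering coefficients (1/mm)
MU_T = MU_A + MU_S
SHELL, NSHELLS = 0.1, 50        # radial bins out to 5 mm
absorbed = [0.0] * NSHELLS

for _ in range(5000):
    x = y = z = 0.0
    ux, uy, uz = 0.0, 0.0, 1.0  # launched along the fibre axis
    weight = 1.0
    while weight > 1e-3:
        step = -math.log(random.random()) / MU_T   # free path length
        x += ux * step; y += uy * step; z += uz * step
        r = math.sqrt(x * x + y * y + z * z)
        shell = min(int(r / SHELL), NSHELLS - 1)
        absorbed[shell] += weight * MU_A / MU_T    # record absorbed fraction
        weight *= MU_S / MU_T
        # isotropic scattering (real tissue would use Henyey-Greenstein)
        uz = 2.0 * random.random() - 1.0
        phi = 2.0 * math.pi * random.random()
        s = math.sqrt(1.0 - uz * uz)
        ux, uy = s * math.cos(phi), s * math.sin(phi)

print(absorbed[0] > absorbed[20])  # intensity falls off away from the source
```

Recording the flux directly in each mesh element, rather than fitting a diffusion approximation, is what keeps the near-source intensities accurate in the paper's approach.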
Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin
2013-01-01
Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster-converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched for using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with the corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of the mixed model equations, once per parameter to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, the use of an MC algorithm with Newton-type methods proved feasible, and the results encourage testing of these methods with other kinds of large-scale problem settings.
NASA Astrophysics Data System (ADS)
LIU, B.; Liang, Y.
2015-12-01
Markov chain Monte Carlo (MCMC) simulation is a powerful statistical method for solving inverse problems that arise in a wide range of applications, such as nuclear physics, computational biology, and financial engineering, among others. In the Earth sciences, applications of MCMC are primarily in the field of geophysics [1]. The purpose of this study is to introduce MCMC to geochemical inverse problems related to trace element fractionation during concurrent melting, melt transport and melt-rock reaction in the mantle. The MCMC method has several advantages over linearized least squares methods in inverting trace element patterns in basalts and mantle rocks. First, MCMC can handle equations that have no explicit analytical solutions, which linearized least squares methods require for gradient calculation. Second, MCMC converges to the global minimum, while linearized least squares methods may be stuck at a local minimum or converge slowly due to nonlinearity. Furthermore, MCMC can provide insight into uncertainties of model parameters with non-normal trade-offs. We use MCMC to invert for the extent of melting, the amount of trapped melt, and the extent of chemical disequilibrium between the melt and residual solid from REE data in abyssal peridotites from the Central Indian Ridge and the Mid-Atlantic Ridge. In the first step, we conduct forward calculations of REE evolution with melting models in a reasonable model space. We then build up a chain of melting models according to the Metropolis-Hastings algorithm to represent the probability of a specific model. We show that chemical disequilibrium is likely to play an important role in fractionating LREE in residual peridotites. In the future, MCMC will be applied to more realistic but also more complicated melting models in which partition coefficients, diffusion coefficients, as well as melting and melt suction rates vary as functions of temperature, pressure and mineral compositions. [1] Sambridge & Mosegaard [2002] Rev. Geophys.
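The Metropolis-Hastings chain construction mentioned above can be sketched for a toy one-dimensional target (a standard normal log-posterior, not the melting model):

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps=20000, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step) and
    accept with probability min(1, post(x') / post(x))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_post(x_new)
        # 1 - u lies in (0, 1], which avoids log(0)
        if math.log(1.0 - rng.random()) < lp_new - lp:
            x, lp = x_new, lp_new
        chain.append(x)
    return chain

# Toy target: standard normal, log density -x^2/2 up to a constant.
# After burn-in the chain mean should be near 0.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=3.0)
mean = sum(chain[5000:]) / len(chain[5000:])
```

For the geochemical problem, `log_post` would wrap the forward REE calculation and the data misfit; the accept/reject logic is identical.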
Assaraf, Roland
2014-12-01
We show that the recently proposed correlated sampling without reweighting procedure extends the locality (asymptotic independence of the system size) of a physical property to the statistical fluctuations of its estimator. This makes the approach potentially vastly more efficient for computing space-localized properties in large systems compared with standard correlated methods. A proof is given for a large collection of noninteracting fragments. Calculations on hydrogen chains suggest that this behavior holds not only for systems displaying short-range correlations, but also for systems with long-range correlations.
Auxiliary-Field Quantum Monte Carlo Method for Strongly Paired Fermions
2011-12-07
effective range: E/E_FG = ξ + S k_F r_e + ··· . A method is introduced to allow the use of a BCS trial wave function in the auxiliary-field quantum Monte... down by 0.02 to enable comparison of the slopes. ...universal in continuum Hamiltonians: ξ(r_e) = ξ + S k_F r_e. Of course, a finite-range purely attractive... find results consistent with a universal dependence of the ground-state energy upon the effective range: E/E_FG = ξ + S k_F r_e + ···, with S = 0.12(0.03).
Finch, W. Holmes; Bolin, Jocelyn H.; Kelley, Ken
2014-01-01
Classification using standard statistical methods such as linear discriminant analysis (LDA) or logistic regression (LR) presumes knowledge of group membership prior to the development of an algorithm for prediction. However, in many real-world applications members of the same nominal group might in fact come from different subpopulations on the underlying construct. For example, individuals diagnosed with depression will not all have the same levels of this disorder, though for the purposes of LDA or LR they will be treated in the same manner. The goal of this simulation study was to examine the performance of several methods for group classification in the case where within-group membership was not homogeneous. For example, suppose there are 3 known groups but, within each group, two unknown classes. Several approaches were compared, including LDA, LR, classification and regression trees (CART), generalized additive models (GAM), and mixture discriminant analysis (MIXDA). Results of the study indicated that CART and mixture discriminant analysis were the most effective tools for situations in which known groups were not homogeneous, whereas LDA, LR, and GAM had the highest rates of misclassification. Implications of these results for theory and practice are discussed. PMID:24904445
NASA Astrophysics Data System (ADS)
Puranik, Bhalchandra; Watvisave, Deepak; Bhandarkar, Upendra
2016-11-01
The interaction of a shock with a density interface is observed in several technological applications such as supersonic combustion, inertial confinement fusion, and shock-induced fragmentation of kidney stones and gallstones. The central physical process in this interaction is the mechanism of the Richtmyer-Meshkov Instability (RMI). The specific situation where the density interface is initially an isolated spherical or cylindrical gas bubble presents a relatively simple geometry that exhibits all the essential RMI processes, such as reflected and refracted shocks, secondary instabilities, turbulence, and mixing of the species. If the incident shocks are strong, the calorically imperfect nature of the gas needs to be modelled. In the present work, we have carried out simulations of the shock-bubble interaction using the DSMC method for such situations. Specifically, an investigation of the shock-bubble interaction with diatomic gases involving rotational and vibrational excitations at high temperatures is performed, and the effects of such high-temperature phenomena will be presented.
Probing gas adsorption in MOFs using an efficient ab initio widom insertion Monte Carlo method.
Lee, Youhan; Poloni, Roberta; Kim, Jihan
2016-12-15
We propose a novel biased Widom insertion method that can efficiently compute the Henry coefficient, K_H, of gas molecules inside porous materials exhibiting strong adsorption sites by employing purely DFT calculations. This is achieved by partitioning the simulation volume into strongly and weakly adsorbing regions and selectively biasing the Widom insertion moves into the former region. We show that only a few thousand single-point energy calculations are necessary to achieve accurate statistics, compared to many hundreds of thousands or millions of such calculations with conventional random insertions. The methodology is used to compute the Henry coefficient for CO2, N2, CH4, and C2H2 in M-MOF-74 (M = Zn and Mg), yielding good agreement with published experimental data. Our results demonstrate that the DFT binding energy and the heat of adsorption are not accurate enough indicators to rank the guest adsorption properties in the Henry regime. © 2016 Wiley Periodicals, Inc.
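The biased-insertion idea, oversampling the strongly adsorbing sub-volume and reweighting so that the Boltzmann average behind K_H stays unbiased, can be sketched with a toy one-dimensional potential (a hypothetical square well on a unit box, not the paper's DFT energies):

```python
import math
import random

def widom_biased(energy, strong, p_strong=0.5, n=50000, beta=1.0, seed=2):
    """Biased Widom insertion on the unit interval: with probability
    p_strong insert into the strongly adsorbing sub-interval [a, b),
    otherwise into the rest, and weight each sample by
    (sub-volume fraction / selection probability) so the estimator of
    <exp(-beta * U)> (proportional to K_H) remains unbiased."""
    rng = random.Random(seed)
    a, b = strong
    v_s = b - a                      # strong-region volume fraction
    acc = 0.0
    for _ in range(n):
        if rng.random() < p_strong:
            x = a + v_s * rng.random()
            w = v_s / p_strong
        else:
            x = b + (1.0 - v_s) * rng.random()
            w = (1.0 - v_s) / (1.0 - p_strong)
        acc += w * math.exp(-beta * energy(x))
    return acc / n

# Toy potential: a deep well (-5 kT) on [0, 0.1), zero elsewhere.
# Exact Boltzmann average: 0.1 * e^5 + 0.9 ≈ 15.74.
u = lambda x: -5.0 if x < 0.1 else 0.0
boltzmann_avg = widom_biased(u, strong=(0.0, 0.1))
```

Half the insertions land in the well, so its large exp(-βU) contribution is sampled far more often than the 10% hit rate of unbiased insertions would give; the weights undo the bias exactly.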
Suitable Candidates for Monte Carlo Solutions.
ERIC Educational Resources Information Center
Lewis, Jerome L.
1998-01-01
Discusses Monte Carlo methods, powerful and useful techniques that rely on random numbers to solve deterministic problems whose solutions may be too difficult to obtain using conventional mathematics. Reviews two excellent candidates for the application of Monte Carlo methods. (ASK)
A Classroom Note on Monte Carlo Integration.
ERIC Educational Resources Information Center
Kolpas, Sid
1998-01-01
The Monte Carlo method provides approximate solutions to a variety of mathematical problems by performing random sampling simulations with a computer. Presents a program written in Quick BASIC simulating the steps of the Monte Carlo method. (ASK)
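A minimal sketch of the Monte Carlo integration the note describes, in Python rather than Quick BASIC (the integrand and sample count are illustrative):

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Estimate the integral of f over [a, b] by averaging f at
    uniformly sampled points: (b - a) * mean(f(x_i))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f(a + (b - a) * rng.random())
    return (b - a) * total / n

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)
```

The statistical error shrinks as 1/sqrt(n), which is what makes the method attractive when conventional quadrature is awkward.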
Hashemi, Bijan; Rahmani, Faezeh; Ebadi, Ahmad
2016-01-01
Purpose: In this work, gold nanoparticles (GNPs) were embedded in the MAGIC-f polymer gel irradiated with 192Ir brachytherapy sources. Material and methods: First, a plexiglas phantom was made to model the human pelvis. The GNPs were synthesized with a diameter of 15 nm and a concentration of 0.1 mM (0.0197 mg/ml) using a chemical reduction method. Then, the MAGIC-f gel was synthesized. The fabricated gel was poured into the tubes located at the prostate (with and without the GNPs) locations of the phantom. The phantom was irradiated with 192Ir brachytherapy sources for prostate cancer. After 24 hours, the irradiated gels were read using a Siemens 1.5 Tesla MRI scanner. Following the brachytherapy practices, the absolute doses at the reference points and the isodose curves were extracted and compared between experimental measurements and Monte Carlo (MC) simulations. Results: The mean absorbed doses in the presence of the GNPs in the prostate were 14% higher than the corresponding values without the GNPs in the brachytherapy. The gamma index analysis (between gel and MC) using a 7%/7 mm criterion was also applied to the data and a high pass rate was achieved (91.7% and 86.4% for the analysis with and without GNPs, respectively). Conclusions: The real three-dimensional analysis compares the dose-volume histograms measured for the planning volumes with those expected from the MC calculation. The results indicate that the polymer gel dosimetry method developed and used in this study can be recommended as a reliable method for investigating the dose enhancement factor of GNPs in brachytherapy. PMID:27895684
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B; Jia, Xun
2015-05-07
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 to 3 HU and from 78 to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 s including the
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun
2015-01-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 sec including the
NASA Astrophysics Data System (ADS)
Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun
2015-05-01
Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 to 3 HU and from 78 to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 s including the
Assessment of high-fidelity collision models in the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Weaver, Andrew B.
Advances in computer technology over the decades have allowed for more complex physics to be modeled in the DSMC method. Beginning with the first paper on DSMC in 1963, 30,000 collision events per hour were simulated using a simple hard sphere model. Today, more than 10 billion collision events can be simulated per hour for the same problem. Many new and more physically realistic collision models, such as the Lennard-Jones potential and the forced harmonic oscillator model, have been introduced into DSMC. However, the fact that computer resources are more readily available and higher-fidelity models have been developed does not necessitate their usage. It is important to understand how such high-fidelity models affect the output quantities of interest in engineering applications. The effects of elastic and inelastic collision models on compressible Couette flow, ground-state atomic oxygen transport properties, and normal shock waves have therefore been investigated. Recommendations for variable soft sphere and Lennard-Jones model parameters are made based on a critical review of recent ab initio calculations and experimental measurements of transport properties.
Study on formation of step bunching on 6H-SiC (0001) surface by kinetic Monte Carlo method
NASA Astrophysics Data System (ADS)
Li, Yuan; Chen, Xuejiang; Su, Juan
2016-05-01
The formation and evolution of step bunching during step-flow growth of 6H-SiC (0001) surfaces were studied by a three-dimensional kinetic Monte Carlo (KMC) method and compared with an analytic model based on the Burton-Cabrera-Frank (BCF) theory. In the KMC model the crystal lattice was represented by a structured mesh which fixed the positions of atoms and the interatomic bonding. The events considered in the model were adatom adsorption and diffusion on the terrace, and adatom attachment, detachment and interlayer transport at the step edges. In addition, the effects of Ehrlich-Schwoebel (ES) barriers at downward step edges and incorporation barriers at upward step edges were also considered. In order to obtain more detailed information on the behavior of atoms at the crystal surface, silicon and carbon atoms were treated as the minimal diffusing species. KMC simulation results showed that multiple-height steps were formed on the vicinal surface oriented toward the [11̄00] or [112̄0] directions, and the formation mechanism of the step bunching was then analyzed. Finally, to further analyze the formation processes of step bunching, a one-dimensional BCF analytic model with ES and incorporation barriers was used and solved numerically. In the BCF model, periodic boundary conditions (PBC) were applied, and the parameters corresponded to those used in the KMC model. The evolution character of the step bunching was consistent with the results obtained by the KMC simulation.
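The core of a rejection-free KMC step, selecting one event in proportion to its rate and advancing the clock by an exponential waiting time, can be sketched as follows (the event list and rates are hypothetical, not the 6H-SiC parameters above):

```python
import math
import random

def kmc_select(rates, rng):
    """One rejection-free KMC step: pick event i with probability
    rates[i] / sum(rates), and draw the time increment
    dt = -ln(u) / sum(rates) from an exponential distribution."""
    total = sum(rates)
    dt = -math.log(1.0 - rng.random()) / total   # 1 - u avoids log(0)
    r = rng.random() * total
    cum = 0.0
    for i, k in enumerate(rates):
        cum += k
        if r < cum:
            return i, dt
    return len(rates) - 1, dt                    # guard against round-off

# Example: three event classes (say adsorption, terrace diffusion,
# detachment) with made-up rates; the first should win about half the time.
rng = random.Random(3)
rates = [2.0, 1.0, 1.0]
counts = [0, 0, 0]
for _ in range(40000):
    i, dt = kmc_select(rates, rng)
    counts[i] += 1
```

In a full simulation each executed event would update the lattice and the rate list before the next selection; this inner loop is what the three-dimensional model iterates millions of times.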
Method for Fast CT/SPECT-Based 3D Monte Carlo Absorbed Dose Computations in Internal Emitter Therapy
Wilderman, S. J.; Dewaraja, Y. K.
2010-01-01
The DPM (Dose Planning Method) Monte Carlo electron and photon transport program, designed for fast computation of radiation absorbed dose in external beam radiotherapy, has been adapted to the calculation of absorbed dose in patient-specific internal emitter therapy. Because both its photon and electron transport mechanics algorithms have been optimized for fast computation in 3D voxelized geometries (in particular, those derived from CT scans), DPM is perfectly suited for performing patient-specific absorbed dose calculations in internal emitter therapy. In the updated version of DPM developed for the current work, the necessary inputs are a patient CT image, a registered SPECT image, and any number of registered masks defining regions of interest. DPM has been benchmarked for internal emitter therapy applications by comparing computed absorbed fractions for a variety of organs in a Zubal phantom with reference results from the Medical Internal Radiation Dose (MIRD) Committee standards. In addition, the β decay source algorithm and the photon tracking algorithm of DPM have been further benchmarked by comparison to experimental data. This paper presents a description of the program, the results of the benchmark studies, and some sample computations using patient data from radioimmunotherapy studies using 131I. PMID:20305792
NASA Astrophysics Data System (ADS)
Kohno, R.; Hotta, K.; Nishioka, S.; Matsubara, K.; Tansho, R.; Suzuki, T.
2011-11-01
We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the computer-unified device architecture platform developed by NVIDIA. The GPU-based SMC was clinically applied for four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to the computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30-16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged 9-67 s. The results demonstrate the successful application of the GPU-based SMC to a clinical proton treatment planning.
Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T
2011-11-21
We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the computer-unified device architecture platform developed by NVIDIA. The GPU-based SMC was clinically applied for four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to the computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30-16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged 9-67 s. The results demonstrate the successful application of the GPU-based SMC to a clinical proton treatment planning.
Cooper, Nicola J; Lambert, Paul C; Abrams, Keith R; Sutton, Alexander J
2007-01-01
This article focuses on the modelling and prediction of costs due to disease accrued over time, to inform the planning of future services and budgets. It is well documented that the modelling of cost data is often problematic due to the distribution of such data; for example, strongly right-skewed with a significant percentage of zero-cost observations. An additional problem associated with modelling costs over time is that cost observations measured on the same individual at different time points will usually be correlated. In this study we compare the performance of four different multilevel/hierarchical models (which allow for both the within-subject and between-subject variability) for analysing healthcare costs in a cohort of individuals with early inflammatory polyarthritis (IP) who were followed up annually over a 5-year period from 1990/1991. The hierarchical models fitted included linear regression models and two-part models with log-transformed costs, and a two-part model with gamma regression and a log link. The cohort was split into a learning sample, to fit the different models, and a test sample, to assess the predictive ability of these models. To obtain predicted costs on the original cost scale (rather than the log-cost scale) two different retransformation factors were applied. All analyses were carried out using Bayesian Markov chain Monte Carlo (MCMC) simulation methods.
Veselinović, Aleksandar M; Veselinović, Jovana B; Toropov, Andrey A; Toropova, Alla P; Nikolić, Goran M
2014-01-01
The Monte Carlo method has been used as a computational tool for building QSAR models for the reactivation of sarin-inhibited acetylcholinesterase (AChE) by quaternary pyridinium oximes. The simplified molecular input line entry system (SMILES), together with the hydrogen-suppressed graph (HSG), was used to represent molecular structure. The total number of oximes considered was 46, and activity was defined as the logarithm of the AChE reactivation percentage by oximes at a concentration of 0.001 M. One-variable models were calculated with the CORAL software for one data split into training, calibration and test sets. Computational experiments indicated that this approach can satisfactorily predict the desired endpoint. The best QSAR model had the following statistical parameters: training set: r2=0.7096, s=0.177, MAE=0.148; calibration set: r2=0.6759, s=0.330, MAE=0.271; test set: r2=0.8620, s=0.182, MAE=0.150. Structural indicators (SMILES-based molecular fragments) for the increase and the decrease of the stated activity are defined. Using the defined structural alerts, computer-aided design of new oxime derivatives with the desired activity is presented.
Ganesh, P.; Kim, Jeongnim; Park, Changwon; ...
2014-11-03
Highly accurate diffusion quantum Monte Carlo (QMC) studies of the adsorption and diffusion of atomic lithium in AA-stacked graphite are compared with van der Waals-including density functional theory (DFT) calculations. Predicted QMC lattice constants for pure AA graphite agree with experiment. Pure AA-stacked graphite is shown to challenge many van der Waals methods even when they are accurate for conventional AB graphite. Moreover, the highest overall DFT accuracy, considering pure AA-stacked graphite as well as lithium binding and diffusion, is obtained by the self-consistent van der Waals functional vdW-DF2, although errors in binding energies remain. Empirical approaches based on point charges such as DFT-D are inaccurate unless the local charge transfer is assessed. Our results demonstrate that the lithium-carbon system requires a simultaneous highly accurate description of both charge transfer and van der Waals interactions, favoring self-consistent approaches.
Hansson, Marie; Isaksson, Mats
2007-04-07
X-ray fluorescence analysis (XRF) is a non-invasive method that can be used for in vivo determination of thyroid iodine content. System calibrations with phantoms resembling the neck may give misleading results in cases where the measurement situation differs greatly from the calibration situation. In such cases, Monte Carlo (MC) simulations offer a possibility of improving the calibration by better accounting for individual features of the measured subjects. This study investigates the prospects of implementing MC simulations in a calibration procedure applicable to in vivo XRF measurements. Simulations were performed with Penelope 2005 to examine a procedure where a parameter, independent of the iodine concentration, was used to estimate the expected detector signal if the thyroid had been measured outside the neck. An attempt to increase the simulation speed and reduce the variance by excluding electrons and implementing interaction forcing was made. Special attention was given to the geometry features: analysed volume, source-sample-detector distances, thyroid lobe size, and position in the neck. Implementation of interaction forcing and exclusion of electrons had no obvious adverse effect on the quotients, while the simulation time involved in an individual calibration was low enough to be clinically feasible.
NASA Astrophysics Data System (ADS)
Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Kalyvas, N. I.; Valais, I. G.; Kandarakis, I. S.; Panayiotakis, G. S.
2014-03-01
The aim of the present study was to propose a comprehensive method for PET scanner image quality assessment, by simulating a thin layer chromatography (TLC) flood source with a previously validated Monte Carlo (MC) model. The model was developed using the GATE MC package, and reconstructed images were obtained using the STIR software with cluster computing. The PET scanner simulated was the GE Discovery-ST. The TLC source was immersed in an 18F-FDG bath solution (1 MBq) in order to assess image quality. The influence of different scintillating crystals on the PET scanner's image quality, in terms of the MTF, the NNPS and the DQE, was investigated. Images were reconstructed by the commonly used FBP2D, FBP3DRP and OSMAPOSL (15 subsets, 3 iterations) reconstruction algorithms. The PET scanner configuration incorporating LuAP crystals provided the optimum MTF values in both 2D and 3D FBP, whereas the corresponding configuration with BGO crystals was found to have the higher MTF values after OSMAPOSL. The scanner incorporating BGO crystals was also found to have the lowest noise levels and the highest DQE values after all image reconstruction algorithms. The plane source can also be useful for the experimental image quality assessment of PET and SPECT scanners in clinical practice.
Kuss, M.; Markel, T.; Kramer, W.
2011-01-01
Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis reviews the distribution transformer load models developed in the IEC 60076 standard and applies them to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy and thus increasing costs to utilities and consumers. A Monte Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate is developed, based on the energy needs of plug-in vehicles loading a residential transformer.
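A minimal sketch of such a Monte Carlo sweep, assuming the IEC 60076-7 relative ageing rate for non-thermally-upgraded paper (V = 2 at a 104 °C hot-spot, doubling every 6 K); the charging probability and the hot-spot rise per charging vehicle are hypothetical placeholders, not the report's thermal model:

```python
import random

def relative_ageing_rate(hotspot_c):
    """IEC 60076-7 relative ageing rate for non-upgraded paper:
    V = 2 ** ((theta_h - 98) / 6); V = 1 at the 98 C reference hot-spot."""
    return 2.0 ** ((hotspot_c - 98.0) / 6.0)

def simulate_day(n_vehicles, rng, base_hotspot=80.0, rise_per_ev=6.0):
    """One Monte Carlo day: each vehicle charges overnight with
    probability 0.8; the hot-spot rise per charging vehicle is a toy
    linear stand-in for a real transformer thermal model."""
    charging = sum(rng.random() < 0.8 for _ in range(n_vehicles))
    hotspot = base_hotspot + rise_per_ev * charging
    return relative_ageing_rate(hotspot)

# Expected ageing rate over 1000 simulated days, 5 vehicles per transformer.
rng = random.Random(4)
mean_v = sum(simulate_day(5, rng) for _ in range(1000)) / 1000
```

Sweeping `n_vehicles`, transformer size, and charging rate over many sampled days, as the abstract describes, turns this per-day rate into an expected loss-of-life distribution.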
NASA Astrophysics Data System (ADS)
Zou, Y. B.; Mao, S. F.; Da, B.; Ding, Z. J.
2016-12-01
A Monte Carlo simulation method for the study of electron-solid interactions, based on modeling of cascade secondary electron (SE) production and transport, has been used to determine the escape depth of emitted SE signals from amorphous solid specimens. The excitation depth distribution function and the emission depth distribution function for, respectively, excited and emitted SEs are obtained first, based on the continuous medium approximation; their product yields the secondary electron depth distribution function, from which the mean escape depth (MED) of SEs is calculated. In this work, we study systematically the dependence of the MED on the atomic number of the specimen, the primary energy, and the incident angle of the electron beam. The derived MEDs of SEs for C, Ni, Cu, Ag, Pt, and Au specimens are found, surprisingly, to fall into a shallow sub-nanometer region, i.e., 0.4-0.9 nm, while Al and Si present larger values due to elastic scattering effects. Furthermore, SE energy-depth distributions indicate that lower-energy SEs escape mainly from the deeper region under the surface, whereas higher-energy SEs come from the near-surface region. The results hence show that the SE emission is dominated by very thin top-surface layers in most cases, leading to the surface sensitivity of SEs.
MO-E-18C-02: Hands-On Monte Carlo Project Assignment as a Method to Teach Radiation Physics
Pater, P; Vallieres, M; Seuntjens, J
2014-06-15
Purpose: To present a hands-on project on Monte Carlo methods (MC) recently added to the curriculum and to discuss the students' appreciation. Methods: Since 2012, a 1.5 hour lecture dedicated to MC fundamentals follows the detailed presentation of photon and electron interactions. Students also program all sampling steps (interaction length and type, scattering angle, energy deposit) of an MC photon transport code. A handout structured in a step-by-step fashion guides students in conducting consistency checks. For extra points, students can code a fully working MC simulation that simulates a dose distribution for 50 keV photons. A kerma approximation to dose deposition is assumed. A survey was conducted to which 10 out of the 14 attending students responded. It compared MC knowledge prior to and after the project, questioned the usefulness of teaching radiation physics through MC, and surveyed possible project improvements. Results: According to the survey, 76% of students had no or only basic knowledge of MC methods before the class and 65% estimate that they have a good to very good understanding of MC methods after attending the class. 80% of students feel that the MC project helped them significantly to understand simulations of dose distributions. On average, students dedicated 12.5 hours to the project and appreciated the balance between hand-holding and questions/implications. Conclusion: A lecture on MC methods with a hands-on MC programming project requiring about 14 hours has been part of the graduate study curriculum since 2012. MC methods produce “gold standard” dose distributions and are slowly entering routine clinical work, so a fundamental understanding of MC methods should be a requirement for future students. Overall, the lecture and project helped students relate cross-sections to dose depositions and presented the numerical sampling methods behind the simulation of these dose distributions. Research funding from the governments of Canada and Quebec. PP acknowledges
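The sampling steps the students program (free path length, interaction type, local energy deposit) can be sketched as a toy forward-only photon transport loop; the attenuation coefficient and interaction probability below are illustrative placeholders, not real 50 keV cross-section data:

```python
import math
import random

# Hypothetical constants (stand-ins, not tabulated cross sections):
MU_TOTAL = 0.2          # total attenuation coefficient, 1/cm
P_PHOTOELECTRIC = 0.3   # probability an interaction is photoelectric

def transport_photon(rng, slab_cm=20.0):
    """Sample the steps named in the abstract: free path length from
    s = -ln(u) / mu, then interaction type; return the depth of the
    first photoelectric absorption (kerma-style local deposit), or
    None if the photon escapes the slab."""
    depth = 0.0
    while True:
        depth += -math.log(1.0 - rng.random()) / MU_TOTAL  # 1 - u avoids log(0)
        if depth > slab_cm:
            return None           # escaped without being absorbed
        if rng.random() < P_PHOTOELECTRIC:
            return depth          # absorbed: deposit energy here
        # otherwise: scattered; this toy keeps the photon moving forward

rng = random.Random(5)
depths = [d for d in (transport_photon(rng) for _ in range(20000)) if d is not None]
mean_depth = sum(depths) / len(depths)
```

Scoring the absorption depths into bins gives a crude depth-dose curve, which is essentially the consistency check the handout walks students through.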
Watté, Rodrigo; Aernouts, Ben; Van Beers, Robbe; Herremans, Els; Ho, Quang Tri; Verboven, Pieter; Nicolaï, Bart; Saeys, Wouter
2015-06-29
Monte Carlo methods commonly used in tissue optics are limited to a layered tissue geometry and thus provide only a very rough approximation for many complex media such as biological structures. To overcome these limitations, a Meshed Monte Carlo method with a flexible phase function choice (MMC-fpf) has been developed to operate on a mesh. This algorithm can model the light propagation in any complexly shaped structure by attributing optical properties to the different mesh elements. Furthermore, the code allows the use of a different discretized phase function for each tissue type, which can be simulated from the microstructural properties of the tissue, in combination with a tool for simulating the bulk optical properties of polydisperse suspensions. As a result, the scattering properties of tissues can be estimated from information on their microstructural properties. This is important for estimating the bulk optical properties used by the light propagation model, since many types of tissue have never been characterized in the literature. The combination of these contributions made it possible to use the MMC-fpf for modeling light propagation in plant tissue. The developed MMC-fpf code was successfully validated in simulation through comparison with the Monte Carlo code for Multi-Layered tissues (R2 > 0.9999) and experimentally by comparing measured and simulated reflectance (RMSE = 0.015%) and transmittance (RMSE = 0.0815%) values for tomato leaves.
Modeling kinetics of a large-scale fed-batch CHO cell culture by Markov chain Monte Carlo method.
Xing, Zizhuo; Bishop, Nikki; Leister, Kirk; Li, Zheng Jian
2010-01-01
The Markov chain Monte Carlo (MCMC) method was applied to model the kinetics of a fed-batch Chinese hamster ovary cell culture process in 5,000-L bioreactors. The kinetic model consists of six differential equations, which describe the dynamics of viable cell density and the concentrations of glucose, glutamine, ammonia, lactate, and the antibody fusion protein B1 (B1). The kinetic model has 18 parameters, six of which were calculated from the cell culture data, whereas the other 12 were estimated by the MCMC method from a training data set comprising seven cell culture runs. The model was confirmed on two validation data sets that represented a perturbation of the cell culture condition. The agreement between the predicted and measured values for both validation data sets may indicate high reliability of the model estimates. The kinetic model uniquely incorporates ammonia removal and an exponential function of the B1 protein concentration. The model indicated that ammonia and lactate play critical roles in cell growth and that low concentrations of glucose (0.17 mM) and glutamine (0.09 mM) in the cell culture medium may help reduce ammonia and lactate production. The model demonstrated that 83% of the glucose consumed was used for cell maintenance during the late phase of the cell cultures, whereas the maintenance coefficient for glutamine was negligible. Finally, the kinetic model suggests that sustaining a high number of viable cells is critical for B1 production. The MCMC methodology may be a useful tool for modeling the kinetics of a fed-batch mammalian cell culture process.
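The parameter-estimation step can be sketched with a toy random-walk Metropolis sampler. The one-parameter log-linear growth model, the noise level, and all numeric settings below are illustrative assumptions, not the paper's 18-parameter ODE system:

```python
import math
import random

# Toy stand-in for the kinetic model: estimate a single specific growth rate
# mu from noisy log-scale cell-density data with random-walk Metropolis.
rng = random.Random(0)
true_mu, sigma = 0.04, 0.05
times = list(range(0, 240, 24))                          # hours
data = [true_mu * t + rng.gauss(0, sigma) for t in times]  # log cell density

def log_likelihood(mu):
    # Gaussian measurement error around the model prediction mu * t
    return -sum((y - mu * t) ** 2 for y, t in zip(data, times)) / (2 * sigma ** 2)

mu = 0.02                                # deliberately poor starting value
ll = log_likelihood(mu)
samples = []
for _ in range(20_000):
    prop = mu + rng.gauss(0, 5e-4)       # random-walk proposal
    ll_prop = log_likelihood(prop)
    if math.log(rng.random()) < ll_prop - ll:  # Metropolis acceptance rule
        mu, ll = prop, ll_prop
    samples.append(mu)

# Discard burn-in, then summarize the posterior
posterior_mean = sum(samples[5_000:]) / len(samples[5_000:])
print(round(posterior_mean, 3))
```

The real application replaces the one-line model with a numerical ODE solve inside the likelihood, and the scalar `mu` with the 12-dimensional parameter vector.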
NASA Astrophysics Data System (ADS)
Wen, Xiulan; Xu, Youxiong; Li, Hongsheng; Wang, Fenglin; Sheng, Danghong
2012-09-01
Straightness error is an important parameter in measuring high-precision shafts. The new-generation geometrical product specification (GPS) requires that the measurement uncertainty characterizing the reliability of the result be reported together with the measurement result. Most current research on straightness focuses on error calculation, and only a few projects evaluate the measurement uncertainty based on the Guide to the Expression of Uncertainty in Measurement (GUM). In order to compute the spatial straightness error (SSE) accurately and rapidly and to overcome the limitations of the GUM, a quasi particle swarm optimization (QPSO) is proposed to solve the minimum zone SSE, and the Monte Carlo method (MCM) is developed to estimate the measurement uncertainty. The mathematical model of the minimum zone SSE is formulated. In QPSO, quasi-random sequences are applied to the generation of the initial positions and velocities of the particles, and the velocities are modified by the constriction factor approach. The flow of measurement uncertainty evaluation based on the MCM is proposed, the core of which is repeated sampling from the probability density function (PDF) of every input quantity and evaluation of the model in each case. The minimum zone SSE of a shaft measured on a Coordinate Measuring Machine (CMM) is calculated by QPSO, and the measurement uncertainty is evaluated by the MCM on the basis of an analysis of the uncertainty contributors. The results show that the uncertainty directly influences the product judgment result. It is therefore scientific and reasonable to consider the influence of the uncertainty when judging whether parts are accepted or rejected, especially for those located in the uncertainty zone. The proposed method is especially suitable when the PDF of the measurand cannot adequately be approximated by a Gaussian distribution or a scaled and shifted t-distribution and the measurement model is non-linear.
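The MCM evaluation loop described here (repeated sampling from each input PDF, then evaluation of the model in each case) can be sketched as follows; the measurement model and the input distributions are hypothetical placeholders, not the CMM model from the paper:

```python
import random
import statistics

# Monte Carlo uncertainty evaluation in the style of GUM Supplement 1 for a
# hypothetical measurement model Y = X1 + X2. The input distributions below
# are illustrative assumptions.
rng = random.Random(1)
M = 200_000                              # number of Monte Carlo trials

ys = []
for _ in range(M):
    x1 = rng.gauss(10.0, 0.02)           # normally distributed input quantity
    x2 = rng.uniform(-0.05, 0.05)        # rectangular (uniform) input quantity
    ys.append(x1 + x2)                   # evaluate the measurement model

ys.sort()
y_est = statistics.fmean(ys)             # estimate of the measurand
u = statistics.stdev(ys)                 # standard uncertainty
low, high = ys[int(0.025 * M)], ys[int(0.975 * M)]  # 95% coverage interval
print(round(y_est, 3), round(u, 3))
```

Because the coverage interval is read directly from the sorted sample, the approach needs no Gaussian assumption on the output PDF, which is exactly the case the abstract highlights.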
NASA Astrophysics Data System (ADS)
Chen, X.; Rubin, Y.; Baldocchi, D. D.
2005-12-01
Understanding the interactions between soil, plants, and the atmosphere under water-stressed conditions is important for ecosystems where water availability is limited. In such ecosystems, the amount of water transferred from the soil to the atmosphere is controlled not only by weather conditions and vegetation type but also by soil water availability. Although researchers have proposed different approaches to model the impact of soil moisture on plant activity, the parameters involved are difficult to measure. However, using measurements of observed latent heat and carbon fluxes, as well as soil moisture data, Bayesian inversion methods can be employed to estimate the various model parameters. In our study, the actual evapotranspiration (ET) of an ecosystem is approximated by the Priestley-Taylor relationship, with the Priestley-Taylor coefficient modeled as a function of soil moisture content. Soil moisture limitation on root uptake is characterized in a manner similar to Feddes' model. Bayesian inference is carried out within the framework of graphical models. Because exact inference is difficult to obtain, the Markov chain Monte Carlo (MCMC) method is implemented using a free software package, BUGS (Bayesian inference Using Gibbs Sampling). The proposed methodology is applied to a Mediterranean oak-savanna FLUXNET site in California, where continuous measurements of actual ET are obtained with the eddy-covariance technique and soil moisture content is monitored by several time domain reflectometry probes located within the footprint of the flux tower. After the Bayesian inversion, the posterior distributions of all the parameters show an information gain relative to the prior distributions. The samples generated from data in year 2003 are used to predict the actual ET in year 2004, and the prediction uncertainties are assessed in terms of confidence intervals. Our tests also reveal the usefulness of various
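The forward model at the heart of this inversion can be sketched as below. The linear stress function and every parameter value are illustrative assumptions (the paper infers the coefficient's moisture dependence rather than fixing it), with the saturation-vapour-pressure slope in the common FAO-56 form:

```python
import math

# Sketch of a Priestley-Taylor ET model with a soil-moisture-dependent
# coefficient, in the spirit of the Feddes-type limitation described above.
GAMMA = 0.066     # psychrometric constant [kPa/degC], typical sea-level value
LAMBDA_V = 2.45   # latent heat of vaporization [MJ/kg]

def slope_svp(temp_c):
    """Slope of the saturation vapour pressure curve [kPa/degC] (FAO-56 form)."""
    es = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))
    return 4098.0 * es / (temp_c + 237.3) ** 2

def alpha_pt(theta, theta_wilt=0.08, theta_crit=0.25, alpha_max=1.26):
    """Priestley-Taylor coefficient, reduced linearly below a critical moisture."""
    stress = min(1.0, max(0.0, (theta - theta_wilt) / (theta_crit - theta_wilt)))
    return alpha_max * stress

def actual_et(theta, temp_c, avail_energy):
    """Actual ET [mm/day]; avail_energy is Rn - G in [MJ/m2/day]."""
    d = slope_svp(temp_c)
    return alpha_pt(theta) * d / (d + GAMMA) * avail_energy / LAMBDA_V

# Wet soil transpires near the potential rate; soil at wilting point shuts down.
print(round(actual_et(0.30, 25.0, 15.0), 2), actual_et(0.05, 25.0, 15.0))
```

In the Bayesian inversion, `theta_wilt`, `theta_crit`, and `alpha_max` would be the unknowns, given priors and sampled conditional on the flux-tower ET observations.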
NASA Astrophysics Data System (ADS)
Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming
2016-07-01
Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output by measuring the specific variations of hydrological responses. A case study is conducted to address parameter uncertainties in the Kaidu watershed of northwest China. The effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results on the parameters' effects. Results show that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation impact the hydrological process in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model
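The structure of the three-level factorial experiment is simple to reproduce: four parameters at three coded levels give 3^4 = 81 runs, and main effects fall out as differences of level means. The response function below is a made-up stand-in for the hydrological model, used only to show the mechanics:

```python
from itertools import product

# Three-level factorial design over the four parameters named in the study.
levels = [-1, 0, 1]                      # coded low / mid / high levels
factors = ["CN2", "SNO50COV", "SOL_K", "ESCO"]
design = list(product(levels, repeat=len(factors)))   # 3**4 = 81 runs

def response(cn2, sno50cov, sol_k, esco):
    # Hypothetical response with a CN2 main effect and a CN2 x SNO50COV interaction
    return 5.0 + 2.0 * cn2 + 0.5 * sno50cov + 1.0 * cn2 * sno50cov

runs = [(x, response(*x)) for x in design]

# Main effect of CN2: mean response at the high level minus mean at the low level
hi = [y for x, y in runs if x[0] == 1]
lo = [y for x, y in runs if x[0] == -1]
cn2_effect = sum(hi) / len(hi) - sum(lo) / len(lo)
print(len(design), cn2_effect)
```

In the MCMC-MFA, each "run" is a model evaluation at a combination of parameter levels, and interaction effects are computed from the same design by contrasting joint level means.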
NASA Astrophysics Data System (ADS)
Okada, Eiji; Schweiger, Martin; Arridge, Simon R.; Firbank, Michael; Delpy, David T.
1996-07-01
To validate models of light propagation in biological tissue, experiments to measure the mean time of flight have been carried out on several solid cylindrical layered phantoms. The optical properties of the inner cylinders of the phantoms were close to those of adult brain white matter, whereas a range of scattering or absorption coefficients was chosen for the outer layer. Experimental results for the mean optical path length have been compared with the predictions of both an exact Monte Carlo (MC) model and a diffusion equation, with two differing boundary conditions implemented in a finite-element method (FEM). The MC and experimental results are in good agreement despite poor statistics for large fiber spacings, whereas good agreement with the FEM prediction requires a careful choice of proper boundary conditions.
Shahbazi-Gahrouei, Daryoush; Ayat, Saba
2012-07-01
Radioiodine therapy is an effective method for treating thyroid carcinoma, but it has some effects on normal tissues, hence dosimetry of vital organs is important to weigh the risks and benefits of this method. The aim of this study is to estimate the absorbed doses of important organs by Monte Carlo N-Particle (MCNP) simulation and to compare the results of different dosimetry methods using a paired t-test. To calculate the absorbed dose of the thyroid, sternum, and cervical vertebra with the MCNP code, the *F8 tally was used. The organs were simulated using a neck phantom and the Medical Internal Radiation Dosimetry (MIRD) method. Finally, the results of the MCNP, MIRD, and thermoluminescent dosimeter (TLD) measurements were compared with SPSS software. The absorbed dose obtained by Monte Carlo simulation for 100, 150, and 175 mCi of administered (131)I was found to be 388.0, 427.9, and 444.8 cGy for the thyroid, 208.7, 230.1, and 239.3 cGy for the sternum, and 272.1, 299.9, and 312.1 cGy for the cervical vertebra. The p-values of the paired t-tests were 0.24 for comparing TLD dosimetry and MIRD calculation, 0.80 for MCNP simulation and MIRD, and 0.19 for TLD and MCNP. The results showed no significant differences among the three methods: Monte Carlo simulation, MIRD calculation, and direct experimental dosimetry using TLDs.
NASA Astrophysics Data System (ADS)
Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George
2014-06-01
An electron-photon coupled Monte Carlo code ARCHER -
Li, Jun; Calo, Victor M.
2013-09-15
We present a single-particle Lennard-Jones (L-J) model for CO₂ and N₂. Simplified L-J models for other small polyatomic molecules can be obtained following the methodology described herein. The phase-coexistence diagrams of single-component systems computed using the proposed single-particle models for CO₂ and N₂ agree well with experimental data over a wide range of temperatures. These diagrams are computed with the Markov chain Monte Carlo method based on the Gibbs-NVT ensemble. This good agreement validates the proposed simplified models. That is, with properly selected parameters, the single-particle models have similar accuracy in predicting gas-phase properties as more complex, state-of-the-art molecular models. To further test these single-particle models, three binary mixtures of CH₄, CO₂, and N₂ are studied using a Gibbs-NPT ensemble. These results are compared against experimental data over a wide range of pressures. The single-particle model has similar accuracy in the gas phase as traditional models, although its deviation in the liquid phase is greater. Since the single-particle model reduces the particle number and avoids the time-consuming Ewald summation used to evaluate Coulomb interactions, the proposed model improves the computational efficiency significantly, particularly in the case of high liquid density, where the acceptance rate of the particle-swap trial move increases. We compare, at constant temperature and pressure, the Gibbs-NPT and Gibbs-NVT ensembles to analyze their performance differences and the consistency of their results. As theoretically predicted, the agreement between the simulations implies that Gibbs-NVT can be used to validate Gibbs-NPT predictions when experimental data are not available.
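The basic displacement move underlying such simulations can be sketched in a few lines. This is a minimal NVT Metropolis loop for single-site L-J particles in reduced units; the parameters are illustrative, not the fitted CO₂/N₂ values, and a production Gibbs-ensemble code would add volume-exchange and particle-swap moves between the two boxes:

```python
import math
import random

# Minimal NVT Metropolis sketch for single-site Lennard-Jones particles
# in a cubic periodic box (reduced units; all parameters illustrative).
rng = random.Random(3)
N, L, T = 20, 6.0, 1.5          # particle count, box length, temperature
beta = 1.0 / T

def lj(r2):
    """Lennard-Jones pair energy as a function of squared distance."""
    inv6 = (1.0 / r2) ** 3
    return 4.0 * (inv6 * inv6 - inv6)

def particle_energy(pos, i):
    """Energy of particle i with all others, minimum-image convention."""
    e = 0.0
    xi, yi, zi = pos[i]
    for j, (x, y, z) in enumerate(pos):
        if j == i:
            continue
        dx = (xi - x) - L * round((xi - x) / L)
        dy = (yi - y) - L * round((yi - y) / L)
        dz = (zi - z) - L * round((zi - z) / L)
        e += lj(dx * dx + dy * dy + dz * dz)
    return e

pos = [[rng.uniform(0, L) for _ in range(3)] for _ in range(N)]
accepted = 0
for step in range(20_000):
    i = rng.randrange(N)
    old = pos[i][:]
    e_old = particle_energy(pos, i)
    pos[i] = [(c + rng.uniform(-0.2, 0.2)) % L for c in old]  # trial displacement
    de = particle_energy(pos, i) - e_old
    if de < 0 or rng.random() < math.exp(-beta * de):
        accepted += 1               # Metropolis: keep the trial move
    else:
        pos[i] = old                # reject: restore the old position
print(accepted / 20_000)
```

The point made in the abstract about efficiency shows up directly here: with one site per molecule, each move costs N - 1 cheap pair evaluations and no Ewald sum.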
Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan
2015-01-01
The voxel-based Monte Carlo method (VMC) is now a gold standard for simulating light propagation in turbid media. For complex tissue structures, however, the computational cost becomes high when small voxels are used to improve the smoothness of tissue interfaces and a large number of photons is used to obtain accurate results. To reduce the computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties, and the photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon-ray-splitting scheme with a local grid refinement technique to reduce the computational cost for a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, in which fine grids cover the tissue with high absorption and complex geometry and coarse grids cover the remainder. The total photon number is selected based on the voxel size of the coarse grid, and the photon-splitting scheme is developed to satisfy the statistical accuracy requirement in the fine-grid area. Results show that the local grid refinement technique with the photon-ray-splitting scheme accelerates the computation by a factor of 7.6 (reducing the time consumption from 17.5 to 2.3 h) in the simulation of laser light energy deposition in skin tissue containing port wine stain lesions.
NASA Astrophysics Data System (ADS)
Schiettekatte, François; Chicoine, Martin
2016-03-01
Corteo is a program that implements the Monte Carlo (MC) method to simulate ion beam analysis (IBA) spectra of several techniques by following the ions' trajectories until a sufficiently large fraction of them reaches the detector to generate a spectrum. Hence, it fully accounts for effects such as multiple scattering (MS). Here, a version of Corteo is presented in which the target can be a 2D or 3D image. This image can be derived from micrographs in which the different compounds are identified, thereby bringing extra information into the solution of an IBA spectrum and potentially constraining the solution significantly. The image intrinsically includes many details, such as the actual surface or interfacial roughness, or the actual shape and distribution of nanostructures. This can, for example, lead to the unambiguous identification of the stoichiometry of structures in a layer, or at least to better constraints on their composition. Because MC computes the trajectory of the ions in detail, it accurately simulates many aspects of the problem, such as ions coming back into the target after leaving it (re-entry) and ions passing through a variety of nanostructure shapes and orientations. We show how, for example, as the ions' angle of incidence becomes shallower than the inclination distribution of a rough surface, this process tends to make the effective roughness smaller in a comparable 1D simulation (i.e., a narrower thickness distribution in a comparable slab simulation). Also, in ordered nanostructures, target re-entry can lead to replications of a peak in a spectrum. In addition, the bitmap description of the target can be used to simulate depth profiles such as those resulting from ion implantation, diffusion, and intermixing. Other improvements to Corteo include the possibility of interpolating the cross-section in angle-energy tables and the generation of energy-depth maps.
NASA Astrophysics Data System (ADS)
Wang, Zhi-Gang; Lü, Jun-Guang; He, Kang-Lin; An, Zheng-Hua; Cai, Xiao; Dong, Ming-Yi; Fang, Jian; Hu, Tao; Liu, Wan-Jin; Lu, Qi-Wen; Ning, Fei-Peng; Sun, Li-Jun; Sun, Xi-Lei; Wang, Xiao-Dong; Xue, Zhen; Yu, Bo-Xiang; Zhang, Ai-Wu; Zhou, Li
2009-10-01
The BESIII detector has a high-resolution electromagnetic calorimeter which can be used for low-momentum μ-π identification. μ-π separation was studied on the basis of Monte Carlo simulations. A multilayer perceptron neural network using the defined variables was applied to perform the identification, and good μ-π separation was obtained.
A Monte Carlo Study of a Method for Detecting a Change in the Slope of a Single Subject's Responses.
ERIC Educational Resources Information Center
Levy, Kenneth J.
1979-01-01
Hawkins' procedure for testing a sequence of observations for a shift in location could have applicability for assessing change within a single subject. Monte Carlo results suggest that Hawkins' procedure is robust with respect to moderate violations of its underlying assumptions of homogeneity of variance and normality. (Author/GDC)
NASA Astrophysics Data System (ADS)
Radaev, A. I.; Schurovskaya, M. V.
2015-12-01
The choice of the spatial nodalization for the calculation of the power density and burnup distribution in a research reactor core with fuel assemblies of the IRT-3M and VVR-KN type using the program based on the Monte Carlo code is described. The influence of the spatial nodalization on the results of calculating basic neutronic characteristics and calculation time is investigated.
NASA Astrophysics Data System (ADS)
Gheorghe, Munteanu Bogdan; Alexei, Leahu; Sergiu, Cataranciuc
2013-09-01
We prove a limit theorem for the lifetime distribution of reliability systems whose lifetime is a Pascal convolution of independent and identically distributed random variables. We show that, under some conditions, such distributions may be approximated by Erlang distributions. As a consequence, the survival functions of such systems may, respectively, be approximated by Erlang survival functions. Using the Monte Carlo method, we experimentally confirm the theoretical results of our theorem.
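The flavor of such a Monte Carlo confirmation can be sketched as follows. This toy check compares the empirical survival function of a sum of i.i.d. lifetimes against an Erlang distribution matched by mean and variance; the Uniform(0, 2) summands and a fixed number of terms are illustrative simplifications of the Pascal-convolution setting:

```python
import math
import random

# Toy check of the Erlang-approximation idea: compare the survival function of
# a sum of n i.i.d. lifetimes with a moment-matched Erlang distribution.
rng = random.Random(7)
n, runs = 30, 100_000
samples = [sum(rng.uniform(0.0, 2.0) for _ in range(n)) for _ in range(runs)]

# Moment matching: for Erlang(k, lam), mean = k / lam and var = k / lam**2.
mean, var = n * 1.0, n * (4.0 / 12.0)   # mean and variance of the sum
lam = mean / var                         # rate parameter
k = round(lam * mean)                    # integer shape parameter

def erlang_survival(t):
    """P(T > t) for Erlang(k, lam): sum of the first k Poisson(lam*t) terms."""
    x = lam * t
    term, total = math.exp(-x), 0.0
    for j in range(k):
        total += term
        term *= x / (j + 1)
    return total

t = mean                                 # compare at the mean lifetime
empirical = sum(s > t for s in samples) / runs
print(round(empirical, 3), round(erlang_survival(t), 3))
```

The two survival probabilities agree closely here because, with many summands, both distributions are near the same normal limit; the paper's theorem makes this kind of approximation precise for Pascal convolutions.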