DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, J
Purpose: This study evaluated the efficiency of 4D lung radiation treatment planning using Monte Carlo simulation on the cloud. The EGSnrc Monte Carlo code was used for dose calculation on the 4D-CT image set. Methods: The 4D lung radiation treatment plan was created with DOSCTP linked to the cloud, based on the Amazon Elastic Compute Cloud platform. Dose calculation was carried out by Monte Carlo simulation on the 4D-CT image set on the cloud, and the results were sent to the FFD4D image deformation program for dose reconstruction. The dependence of the treatment-plan computing time on the number of compute nodes was optimized while varying the number of CT image sets in the breathing cycle and the dose reconstruction time of the FFD4D. Results: The dependence of computing time on the number of compute nodes was affected by the diminishing return of the number of nodes used in the Monte Carlo simulation. Moreover, the performance of the 4D treatment planning could be optimized by using fewer than 10 compute nodes on the cloud. The effects of the number of image sets and of the dose reconstruction time on this dependence were not significant when more than 15 compute nodes were used in the Monte Carlo simulations. Conclusion: The long computing time of a 4D treatment plan, which requires Monte Carlo dose calculations on all CT image sets in the breathing cycle, can be addressed with cloud computing technology. It is concluded that the optimal number of compute nodes selected for simulation should be between 5 and 15, the range in which the dependence of computing time on the number of nodes is significant.
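The diminishing-return behavior described above can be illustrated with a toy timing model. The sketch below is not the paper's model: the split into a parallelizable Monte Carlo part, a serial reconstruction part, and a per-node overhead, and every constant in it, are assumptions chosen only to reproduce the qualitative trade-off.

```python
# Illustrative sketch (not the paper's model): an Amdahl-style timing model
# showing why adding cloud compute nodes to a Monte Carlo dose calculation
# gives diminishing returns. All function names and constants are hypothetical.
import numpy as np

def plan_time(n_nodes, n_image_sets=10, t_mc=600.0, t_recon=60.0, t_node_overhead=60.0):
    """Total 4D-plan time (s): the MC dose calculation parallelizes across
    nodes, while per-image dose reconstruction and per-node startup do not."""
    return n_image_sets * (t_mc / n_nodes + t_recon) + t_node_overhead * n_nodes

nodes = np.arange(1, 31)
times = plan_time(nodes)
print("optimal node count under these assumptions:", nodes[np.argmin(times)])
```

Under these made-up constants the minimum falls near 10 nodes, mirroring the paper's conclusion that the useful range lies between roughly 5 and 15 nodes.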
The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, J.M., E-mail: jeanmichel.sellier@parallel.bas.bg; Dimov, I.
2014-09-15
The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation, which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows a scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We then proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.
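The first validation case above (free Gaussian wave packets) has a particularly simple Monte Carlo analogue: for a free particle the Wigner equation reduces to classical Liouville transport, so streaming samples of the initial (positive) Gaussian Wigner function reproduces the exact quantum spreading. The sketch below illustrates only this phase-space picture; it is not the authors' signed-particle method, and all numerical values are arbitrary.

```python
# Minimal sketch: Monte Carlo transport of the Wigner quasi-distribution for
# a free Gaussian wave packet. For a free particle the Wigner equation is the
# classical Liouville equation, so free streaming of phase-space samples gives
# the exact quantum density. This is an illustration of the formalism only.
import numpy as np

hbar, m = 1.0, 1.0
sigma_x = 1.0                      # initial packet width
sigma_p = hbar / (2.0 * sigma_x)   # minimum-uncertainty momentum width
p0 = 2.0                           # mean momentum

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(0.0, sigma_x, n)    # sample the (positive) Gaussian Wigner function
p = rng.normal(p0, sigma_p, n)

t = 3.0
x_t = x + (p / m) * t              # free streaming in phase space

# Exact quantum spread of a free Gaussian packet for comparison:
spread_exact = np.sqrt(sigma_x**2 + (sigma_p * t / m) ** 2)
print(f"MC spread {x_t.std():.4f} vs exact {spread_exact:.4f}")
```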
Comparison of deterministic and stochastic methods for time-dependent Wigner simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Sihong, E-mail: sihong@math.pku.edu.cn; Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg
2015-11-01
Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents the first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell-average spectral element method, which is a highly accurate deterministic method utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave packet are performed and discussed in detail. In particular, this allows us to identify a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve a satisfactory accuracy.
NASA Astrophysics Data System (ADS)
Bernede, Adrien; Poëtte, Gaël
2018-02-01
In this paper, we are interested in the resolution of the time-dependent problem of particle transport in a medium whose composition evolves with time due to interactions. As a constraint, we want to use a Monte Carlo (MC) scheme for the transport phase. A common resolution strategy consists of a splitting between the MC/transport phase and the time discretization scheme/medium evolution phase. After going over and illustrating the main drawbacks of split solvers in a simplified configuration (monokinetic, scalar Bateman problem), we build a new Unsplit MC (UMC) solver that improves the accuracy of the solutions, avoids numerical instabilities, and is less sensitive to the time discretization. The new solver is essentially based on a Monte Carlo scheme with time-dependent cross sections, implying the on-the-fly resolution of a reduced model for each MC particle describing the time evolution of the matter along its flight path.
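A standard building block for Monte Carlo transport with time-dependent cross sections is null-collision (delta-tracking) sampling against a constant majorant. The sketch below shows that generic technique only; it is not necessarily the UMC scheme of the paper, and the cross-section model and rates are invented.

```python
# Hedged sketch of null-collision ("delta-tracking") sampling of the next
# real collision time when the macroscopic cross section sigma(t) varies as
# the medium evolves. Tentative collisions are drawn at a constant majorant
# rate and accepted with probability sigma(t)/SIGMA_MAJ.
import numpy as np

rng = np.random.default_rng(1)

def sigma(t):                         # made-up time-dependent collision rate (1/s)
    return 2.0 + 1.5 * np.sin(0.5 * t) ** 2

SIGMA_MAJ = 3.5                       # constant majorant >= sigma(t) for all t

def sample_collision_time(t0):
    """Sample the next *real* collision time after t0."""
    t = t0
    while True:
        t += rng.exponential(1.0 / SIGMA_MAJ)     # tentative event
        if rng.random() < sigma(t) / SIGMA_MAJ:   # real vs. null collision
            return t

times = np.array([sample_collision_time(0.0) for _ in range(50_000)])
print(f"mean time to first real collision: {times.mean():.3f} s")
```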
Clay-catalyzed reactions of coagulant polymers during water chlorination
Lee, J.-F.; Liao, P.-M.; Lee, C.-K.; Chao, H.-P.; Peng, C.-L.; Chiou, C.T.
2004-01-01
The influence of suspended clay/solid particles on organic-coagulant reactions during water chlorination was investigated by analyses of total product formation potential (TPFP) and disinfection by-product (DBP) distribution as a function of exchanged clay cation, coagulant organic polymer, and reaction time. Montmorillonite clays appeared to act as a catalytic center where the reaction between adsorbed polymer and disinfectant (chlorine) was mediated closely by the exchanged clay cation. The transition-metal cations in clays catalyzed more effectively than other cations the reactions between a coagulant polymer and chlorine, forming a large number of volatile DBPs. The relative catalytic effects of clays/solids followed the order Ti-Mont > Fe-Mont > Cu-Mont > Mn-Mont > Ca-Mont > Na-Mont > quartz > talc. The effects of coagulant polymers on TPFP follow the order nonionic polymer > anionic polymer > cationic polymer. The catalytic role of the clay cation was further confirmed by the observed inhibition in DBP formation when strong chelating agents (o-phenanthroline and ethylenediamine) were added to the clay suspension. Moreover, in the presence of clays, total DBPs increased appreciably when either the reaction time or the amount of the added clay or coagulant polymer increased. For volatile DBPs, the formation of halogenated methanes was usually time-dependent, with chloroform and dichloromethane showing the greatest dependence.
NASA Astrophysics Data System (ADS)
Cramer, S. N.; Roussin, R. W.
1981-11-01
A Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield is presented. The energy range covered in the analysis is 2-15 MeV for neutron source energies. The multigroup MORSE code was used with the VITAMIN C 171-36 neutron-gamma-ray cross-section data set. Both neutron and gamma-ray count rates and unfolded energy spectra are presented and compared with experimental results, with good general agreement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidenko, V. D., E-mail: Davidenko-VD@nrcki.ru; Zinchenko, A. S., E-mail: zin-sn@mail.ru; Harchenko, I. K.
2016-12-15
Integral equations for the shape functions in the adiabatic, quasi-static, and improved quasi-static approximations are presented. The approach to solving these equations by the Monte Carlo method is described.
Derenzo, Stephen E
2017-01-01
This paper demonstrates through Monte Carlo simulations that a practical positron emission tomograph with (1) deep scintillators for efficient detection, (2) double-ended readout for depth-of-interaction information, (3) fixed-level analog triggering, and (4) accurate calibration and timing data corrections can achieve a coincidence resolving time (CRT) that is not far above the statistical lower bound. One Monte Carlo algorithm simulates a calibration procedure that uses data from a positron point source. Annihilation events with an interaction near the entrance surface of one scintillator are selected, and data from the two photodetectors on the other scintillator provide depth-dependent timing corrections. Another Monte Carlo algorithm simulates normal operation using these corrections and determines the CRT. A third Monte Carlo algorithm determines the CRT statistical lower bound by generating a series of random interaction depths, and for each interaction a set of random photoelectron times for each of the two photodetectors. The most likely interaction times are determined by shifting the depth-dependent probability density function to maximize the joint likelihood for all the photoelectron times in each set. Example calculations are tabulated for different numbers of photoelectrons and photodetector time jitters for three 3 × 3 × 30 mm³ scintillators: Lu2SiO5:Ce,Ca (LSO), LaBr3:Ce, and a hypothetical ultra-fast scintillator. To isolate the factors that depend on the scintillator length and the ability to estimate the DOI, CRT values are tabulated for perfect scintillator-photodetectors. For LSO with 4000 photoelectrons and single photoelectron time jitter of the photodetector J = 0.2 ns (FWHM), the CRT value using the statistically weighted average of corrected trigger times is 0.098 ns FWHM and the statistical lower bound is 0.091 ns FWHM. For LaBr3:Ce with 8000 photoelectrons and J = 0.2 ns FWHM, the CRT values are 0.070 and 0.063 ns FWHM, respectively. For the ultra-fast scintillator with 1 ns decay time, 4000 photoelectrons, and J = 0.2 ns FWHM, the CRT values are 0.021 and 0.017 ns FWHM, respectively. The examples also show that calibration and correction for depth-dependent variations in pulse height and in annihilation and optical photon transit times are necessary to achieve these CRT values. PMID:28327464
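The maximum-likelihood timing idea in this abstract can be sketched in a few lines: model each photoelectron time as event time plus exponential scintillation decay plus Gaussian photodetector jitter, estimate the event time per detector by maximum likelihood, and take the FWHM of the difference between the two detectors' estimates. The toy below ignores depth of interaction and optical transport, which the paper treats carefully; the decay time, photoelectron count, and jitter are merely LSO-like round numbers.

```python
# Simplified sketch of the CRT statistical-lower-bound idea: each detector
# records n_pe photoelectron times = event time + exponential decay (tau)
# + Gaussian jitter (sigma); the event time is fit by maximum likelihood.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import exponnorm

rng = np.random.default_rng(2)
tau = 40.0                      # ns, LSO-like scintillation decay time
sigma = 0.2 / 2.355             # ns, jitter sigma for 0.2 ns FWHM
n_pe = 4000                     # photoelectrons per detector
K = tau / sigma                 # scipy exponnorm shape parameter

def estimate_t0(times):
    """Maximum-likelihood event time from one detector's photoelectron times."""
    nll = lambda t0: -exponnorm.logpdf(times, K, loc=t0, scale=sigma).sum()
    return minimize_scalar(nll, bounds=(times.min() - 1.0, times.min() + 1.0),
                           method="bounded").x

diffs = []
for _ in range(200):            # 200 simulated coincidences
    a = rng.exponential(tau, n_pe) + rng.normal(0.0, sigma, n_pe)
    b = rng.exponential(tau, n_pe) + rng.normal(0.0, sigma, n_pe)
    diffs.append(estimate_t0(a) - estimate_t0(b))

print(f"toy CRT ~ {2.355 * np.std(diffs) * 1000:.0f} ps FWHM")
```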
Quantum dynamics at finite temperature: Time-dependent quantum Monte Carlo study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christov, Ivan P., E-mail: ivan.christov@phys.uni-sofia.bg
2016-08-15
In this work we investigate the ground state and the dissipative quantum dynamics of interacting charged particles in an external potential at finite temperature. The recently devised time-dependent quantum Monte Carlo (TDQMC) method allows a self-consistent treatment of the system of particles together with bath oscillators: first for imaginary-time propagation of Schrödinger-type equations, where both the system and the bath converge to their finite-temperature ground state, and next for real-time calculation, where the dissipative dynamics is demonstrated. In that context the application of TDQMC appears as a promising alternative to path-integral-related techniques, where the real-time propagation can be a challenge.
Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.
Guo, Lian; Radisic, Aleksandar; Searson, Peter C
2005-12-22
Nucleation and growth during bulk electrodeposition are studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth depend on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.
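The two-probability picture can be caricatured in one dimension. The sketch below is a hedged toy version of this class of model, not the authors' simulation: ions arrive at random lattice sites and stick with different probabilities on bare substrate versus next to deposited metal, and islands are simply counted as runs of occupied sites. All rates and probabilities are invented.

```python
# Toy 1-D lattice KMC in the spirit of the abstract: metal-on-substrate vs.
# metal-on-metal sticking probabilities control nucleation and growth; no
# nucleation rate or island density is assumed a priori.
import numpy as np

rng = np.random.default_rng(3)
L = 2000                      # substrate sites (periodic)
p_sub, p_metal = 0.01, 0.8    # sticking probs: bare substrate vs. next to metal
occupied = np.zeros(L, dtype=bool)

def island_count(occ):
    return int(np.sum(occ & ~np.roll(occ, 1)))   # starts of occupied runs

for step in range(1, 200_001):
    site = rng.integers(L)
    near_metal = occupied[site] or occupied[(site - 1) % L] or occupied[(site + 1) % L]
    if rng.random() < (p_metal if near_metal else p_sub):
        occupied[site] = True
    if step % 50_000 == 0:
        print(step, "islands:", island_count(occupied), "coverage:", occupied.sum())
```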
A Wigner Monte Carlo approach to density functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, J.M., E-mail: jeanmichel.sellier@gmail.com; Dimov, I.
2014-08-01
In order to simulate quantum N-body systems, stationary and time-dependent density functional theories rely on the capacity of calculating the single-electron wave-functions of a system from which one obtains the total electron density (Kohn–Sham systems). In this paper, we introduce the use of the Wigner Monte Carlo method in ab-initio calculations. This approach allows time-dependent simulations of chemical systems in the presence of reflective and absorbing boundary conditions. It also enables an intuitive comprehension of chemical systems in terms of the Wigner formalism based on the concept of phase space. Finally, being based on a Monte Carlo method, it scales very well on parallel machines, paving the way towards the time-dependent simulation of very complex molecules. A validation is performed by studying the electron distribution of three different systems: a lithium atom, a boron atom, and a hydrogenic molecule. For the sake of simplicity, we start from initial conditions not too far from equilibrium and show that the systems reach a stationary regime, as expected (even though no restriction is imposed on the choice of the initial conditions). We also show a good agreement with the standard density functional theory for the hydrogenic molecule. These results demonstrate that the combination of the Wigner Monte Carlo method and Kohn–Sham systems provides a reliable computational tool which could, eventually, be applied to more sophisticated problems.
Four decades of implicit Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan B.
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential for maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
Monte Carlo Simulation of THz Multipliers
NASA Technical Reports Server (NTRS)
East, J.; Blakey, P.
1997-01-01
Schottky barrier diode frequency multipliers are critical components in submillimeter and THz space-based Earth observation systems. As the operating frequency of these multipliers has increased, the agreement between design predictions and experimental results has become poorer. The multiplier design is usually based on a nonlinear model using a form of harmonic balance and a model for the Schottky barrier diode. Conventional voltage-dependent lumped-element models do a poor job of predicting THz-frequency performance. This paper will describe a large-signal Monte Carlo simulation of Schottky barrier multipliers. The simulation is a time-dependent particle-field Monte Carlo simulation, with ohmic and Schottky barrier boundary conditions included, that has been combined with a fixed-point solution for the nonlinear circuit interaction. The results in the paper will point out some important time constants in varactor operation and will describe the effects of current saturation and nonlinear resistances on multiplier operation.
First-Order or Second-Order Kinetics? A Monte Carlo Answer
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2005-01-01
Monte Carlo computational experiments reveal that the ability to discriminate between first- and second-order kinetics from least-squares analysis of time-dependent concentration data is better than implied in earlier discussions of the problem. The problem is rendered as simple as possible by assuming that the order must be either 1 or 2 and that…
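The computational experiment described can be reproduced in miniature: simulate noisy first-order decay data, fit both integrated rate laws by least squares, and count how often the correct order gives the smaller residual. The sketch below is such a stand-in; the rate constant, noise level, and sampling grid are arbitrary choices, not those of the paper.

```python
# Minimal Monte Carlo experiment in the spirit of the abstract: data are
# generated from first-order decay with Gaussian noise, and both integrated
# rate laws are fit; the winner is the model with the lower residual sum.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 20)
c0, k = 1.0, 0.3

first  = lambda t, c0, k: c0 * np.exp(-k * t)          # 1st-order integrated law
second = lambda t, c0, k: c0 / (1.0 + c0 * k * t)      # 2nd-order integrated law

wins, trials = 0, 500
for _ in range(trials):
    y = first(t, c0, k) + rng.normal(0, 0.02, t.size)  # truth: first order
    ss = []
    for model in (first, second):
        popt, _ = curve_fit(model, t, y, p0=(1.0, 0.3), maxfev=10_000)
        ss.append(np.sum((y - model(t, *popt)) ** 2))
    wins += ss[0] < ss[1]
print(f"first order correctly preferred in {wins}/{trials} trials")
```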
New Quantum Diffusion Monte Carlo Method for strong field time dependent problems
NASA Astrophysics Data System (ADS)
Kalinski, Matt
2017-04-01
We have recently formulated the Quantum Diffusion Monte Carlo (QDMC) method for the solution of the time-dependent Schrödinger equation when it is equivalent to a reaction-diffusion system coupled by highly nonlinear potentials of the type of Shay. Here we formulate a new time-dependent QDMC method, free of those nonlinearities, described by a constant stochastic process of coupled diffusion with transmutation. As before, two kinds of diffusing particles (color walkers) are considered, which can further transmute one into the other. Each of the species undergoes the hypothetical Einstein random-walk progression with transmutation. The progressed particles transmute into particles of the other kind before contributing to or annihilating the other particles' density. This fully emulates the time-dependent Schrödinger equation for any number of quantum particles. The negative sign of the real and imaginary parts of the wave function is handled by "spinor" densities carrying the sign as a degree of freedom. We apply the method to the exact time-dependent observation of our discovered two-electron Langmuir configurations in magnetic and circularly polarized fields.
Yüksel, Yusuf; Akıncı, Ümit
2016-12-07
Using Monte Carlo simulations, we have investigated the dynamic phase transition properties of magnetic nanoparticles with a ferromagnetic core coated by an antiferromagnetic shell. Effects of field amplitude and frequency on the thermal dependence of the magnetizations, on magnetization reversal mechanisms during hysteresis cycles, and on the exchange bias and coercive fields have been examined, and the feasibility of applying dynamic magnetic fields to the particle has been discussed for technological and biomedical purposes.
1984-07-01
piecewise constant energy dependence. This is a seven-dimensional problem with time dependence: three spatial and two angular or directional variables and ... in extending the computer implementation of the method to time- and energy-dependent problems, and to solving and validating this technique on ... problems they have severe limitations. The Monte Carlo method usually requires the use of many hours of expensive computer time, and for deep
2016-04-01
noise, and energy relaxation for doped zinc-oxide and structured ZnO transistor materials with a 2-D electron gas (2DEG) channel subjected to a strong ... function on the time delay. [Figure residue: closed symbols represent Monte Carlo data with the hot-phonon effect at electron gas densities of 1×10^17 to 1×10^18 cm^-3; Figure 18 shows a Monte Carlo simulation of density-dependent hot-electron energy.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
2016-10-21
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
NASA Astrophysics Data System (ADS)
Cohen, Guy; Gull, Emanuel; Reichman, David R.; Millis, Andrew J.
2014-04-01
The nonequilibrium spectral properties of the Anderson impurity model with a chemical potential bias are investigated within a numerically exact real-time quantum Monte Carlo formalism. The two-time correlation function is computed in a form suitable for nonequilibrium dynamical mean field calculations. Additionally, the evolution of the model's spectral properties is simulated in an alternative representation, defined by a hypothetical but experimentally realizable weakly coupled auxiliary lead. The voltage splitting of the Kondo peak is confirmed, and the dynamics of its formation after a coupling or gate quench are studied. This representation is shown to contain additional information about the dot's population dynamics. Further, we show that the voltage-dependent differential conductance gives a reasonable qualitative estimate of the equilibrium spectral function, but significant qualitative differences are found, including incorrect trends and spurious temperature-dependent effects.
APL-UW Deep Water Propagation 2015-2017: Philippine Sea Data Analysis
independent Monte Carlo parabolic equation simulations. The autospectrum of normalized intensity had an excellent match to that of a time-dependent Monte ... ambient noise; systems along the Aleutian chain have either no significant trend or a slight increasing trend; systems in the central Pacific Ocean ... At the end of the grant, it was determined that the Kauai cable had suffered a break in the shallow near-shore region. Additional contractual
Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams
NASA Astrophysics Data System (ADS)
Willow, Soohaeng Yoo; Hirata, So
2014-01-01
A new, alternative set of interpretation rules of Feynman-Goldstone diagrams for many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integrations. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has low-rank polynomial size dependence of the operation cost, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules within a few mEh after 10^6 Monte Carlo steps.
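The core estimator here, a Metropolis-sampled importance-sampling integral, is generic and easy to sketch. The example below uses a toy 6-D integrand and a Gaussian weight with known normalization; neither is the MP3 integrand, and no redundant-walker acceleration is attempted.

```python
# Generic sketch of Metropolis importance sampling for a high-dimensional
# integral: I = \int f(x) dx = E_w[f(X)/w(X)] with X ~ w drawn by a
# random-walk Metropolis chain. f and w are toys, not the MP3 quantities.
import numpy as np

rng = np.random.default_rng(5)
d = 6
f = lambda x: np.exp(-np.sum(x**2)) * (1.0 + np.sum(x)**2)    # toy integrand
w = lambda x: np.exp(-0.5 * np.sum(x**2)) / (2*np.pi)**(d/2)  # N(0, I) pdf

x, wx = np.zeros(d), None
wx = w(x)
est, n_steps = 0.0, 200_000
for _ in range(n_steps):
    prop = x + rng.normal(0, 0.7, d)              # random-walk proposal
    wp = w(prop)
    if rng.random() < wp / wx:                    # Metropolis accept/reject
        x, wx = prop, wp
    est += f(x) / wx
print("MC estimate:", est / n_steps)
# Exact value for this toy: pi^(d/2) * (1 + d/2) = 4 * pi^3 ~ 124.03
```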
Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions
Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...
2015-11-01
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, the accumulation of fissions in time, and the accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time-evolving chain populations. The equations for random-time-gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time-dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time-tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
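The "remarkably simple Monte Carlo realization" of a fission chain can be sketched as a branching process. The toy below is not the authors' code: each neutron either leaks or induces a fission after an exponential waiting time, fissions emit a random number of neutrons, and leaked neutrons are recorded as countable quanta. The rates and multiplicity distribution are invented and chosen subcritical so chains terminate.

```python
# Toy Monte Carlo realization of a time-evolving fission chain: a branching
# process with exponential event times, a leak/fission competition, and a
# random neutron multiplicity per fission.
import numpy as np

rng = np.random.default_rng(6)
lam_fis, lam_leak = 0.6, 0.5        # per-neutron event rates (1/ns), invented
nu_pmf = [0.1, 0.3, 0.4, 0.2]       # P(0..3 neutrons per fission), invented

def run_chain(t_max=200.0):
    """Return leak times for one chain started by a single neutron at t = 0."""
    active, leaks = [0.0], []
    while active:
        t = active.pop() + rng.exponential(1.0 / (lam_fis + lam_leak))
        if t > t_max:
            continue
        if rng.random() < lam_leak / (lam_fis + lam_leak):
            leaks.append(t)                       # neutron leaks (countable)
        else:
            nu = rng.choice(4, p=nu_pmf)          # fission: emit nu neutrons
            active.extend([t] * nu)
    return leaks

c = np.array([len(run_chain()) for _ in range(20_000)])
print(f"mean leaked per chain {c.mean():.3f}, "
      f"variance-to-mean excess {(c.var() - c.mean()) / c.mean():.3f}")
```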
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dustin Popp; Zander Mausolff; Sedat Goluoglu
We are proposing to use the code TDKENO to model TREAT. TDKENO solves the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons. Instead of directly integrating this equation, the neutron flux is factored into two components – a rapidly varying amplitude equation and a slowly varying shape equation – and each is solved separately on a different time scale. The shape equation is solved using the 3D Monte Carlo transport code KENO, from Oak Ridge National Laboratory's SCALE code package. Using the Monte Carlo method to solve the shape equation is still computationally intensive, but the operation is only performed when needed. The amplitude equation is solved deterministically and frequently, so an accurate time-dependent solution is obtained without having to repeatedly recompute the shape. We have modified TDKENO to incorporate KENO-VI so that we may accurately represent the geometries within TREAT. This paper explains the motivation behind using generalized geometry and provides the results of our modifications. TDKENO uses the Improved Quasi-Static method to accomplish this. In this method, the neutron flux is factored into two components. One component is a purely time-dependent and rapidly varying amplitude function, which is solved deterministically and very frequently (small time steps). The other is a slowly varying flux shape function that weakly depends on time and is only solved when needed (significantly larger time steps).
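The deterministic amplitude solve in the quasi-static factorization is essentially a point-kinetics integration between shape updates. The sketch below shows that amplitude half only, with a single delayed-neutron group; the reactivity step, delayed fraction, and generation time are illustrative numbers, not TREAT parameters or TDKENO internals.

```python
# Hedged sketch of the amplitude equation in an improved quasi-static scheme:
# point-kinetics-like ODEs with one delayed-neutron group, solved cheaply and
# frequently between (expensive, infrequent) Monte Carlo shape solves.
import numpy as np
from scipy.integrate import solve_ivp

beta, lam_d, Lam = 0.007, 0.08, 5e-5      # delayed fraction, decay const (1/s), generation time (s)

def rho(t):                                # step reactivity insertion at t = 0
    return 0.5 * beta

def pke(t, y):
    n, c = y                               # amplitude and precursor concentration
    dn = ((rho(t) - beta) / Lam) * n + lam_d * c
    dc = (beta / Lam) * n - lam_d * c
    return [dn, dc]

y0 = [1.0, beta / (Lam * lam_d)]           # steady-state initial condition
sol = solve_ivp(pke, (0.0, 5.0), y0, rtol=1e-8, atol=1e-10)
print("amplitude at t = 5 s:", sol.y[0, -1])
```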
Exact Dynamics via Poisson Process: a unifying Monte Carlo paradigm
NASA Astrophysics Data System (ADS)
Gubernatis, James
2014-03-01
A common computational task is solving a set of ordinary differential equations (o.d.e.'s). A little-known theorem says that the solution of any set of o.d.e.'s is exactly given by the expectation value, over a set of arbitrary Poisson processes, of a particular function of the elements of the matrix that defines the o.d.e.'s. The theorem thus provides a new starting point to develop real- and imaginary-time continuous-time solvers for quantum Monte Carlo algorithms, and several simple observations enable various quantum Monte Carlo techniques and variance reduction methods to transfer to a new context. I will state the theorem, note a transformation to a very simple computational scheme, and illustrate the use of some techniques from the directed-loop algorithm in the context of the wavefunction Monte Carlo method that is used to solve the Lindblad master equation for the dynamics of open quantum systems. I will end by noting that as the theorem does not depend on the source of the o.d.e.'s coming from quantum mechanics, it also enables the transfer of continuous-time methods from quantum Monte Carlo to the simulation of various classical equations of motion heretofore only solved deterministically.
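One concrete way to realize "an expectation over Poisson processes" for a linear system is uniformization: for dx/dt = Ax one has exp(At) = e^{-λt} E[M^N] with M = I + A/λ and N ~ Poisson(λt), so a Poisson draw plus a random walk over matrix indices gives an unbiased estimator of the solution. The sketch below demonstrates that identity on an arbitrary 3×3 matrix; it is an illustration of the flavor of the theorem, not necessarily the exact construction of the talk.

```python
# Unbiased Poisson-process estimator of x(t) for dx/dt = A x, via
# exp(At) = e^{-lam t} * E[ M^N ], M = I + A/lam, N ~ Poisson(lam*t).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
A = np.array([[-1.0, 0.3, 0.0],
              [ 0.5, -0.8, 0.2],
              [ 0.1, 0.4, -1.2]])
x0 = np.array([1.0, 0.0, 0.0])
t, lam, d = 2.0, 3.0, 3              # any lam > 0 works
M = np.eye(d) + A / lam

def one_sample(i):
    """One unbiased MC sample of x_i(t): a Poisson number of index hops."""
    weight, idx = 1.0, i
    for _ in range(rng.poisson(lam * t)):
        j = rng.integers(d)          # uniform proposal over columns
        weight *= d * M[idx, j]      # importance weight corrects the proposal
        idx = j
    return np.exp(-lam * t) * weight * x0[idx]

est = np.mean([one_sample(0) for _ in range(200_000)])
print("MC:", est, " exact:", (expm(A * t) @ x0)[0])
```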
Advanced Monte Carlo methods for thermal radiation transport
NASA Astrophysics Data System (ADS)
Wollaber, Allan B.
During the past 35 years, the Implicit Monte Carlo (IMC) method proposed by Fleck and Cummings has been the standard Monte Carlo approach to solving the thermal radiative transfer (TRT) equations. However, the IMC equations are known to have accuracy limitations that can produce unphysical solutions. In this thesis, we explicitly provide the IMC equations with a Monte Carlo interpretation by including particle weight as one of its arguments. We also develop and test a stability theory for the 1-D, gray IMC equations applied to a nonlinear problem. We demonstrate that the worst case occurs for 0-D problems, and we extend the results to a stability algorithm that may be used for general linearizations of the TRT equations. We derive gray, Quasidiffusion equations that may be deterministically solved in conjunction with IMC to obtain an inexpensive, accurate estimate of the temperature at the end of the time step. We then define an average temperature T* to evaluate the temperature-dependent problem data in IMC, and we demonstrate that using T* is more accurate than using the (traditional) beginning-of-time-step temperature. We also propose an accuracy enhancement to the IMC equations: the use of a time-dependent "Fleck factor". This Fleck factor can be considered an automatic tuning of the traditionally defined user parameter alpha, which generally provides more accurate solutions at an increased cost relative to traditional IMC. We also introduce a global weight window that is proportional to the forward scalar intensity calculated by the Quasidiffusion method. This weight window improves the efficiency of the IMC calculation while conserving energy. All of the proposed enhancements are tested in 1-D gray and frequency-dependent problems. These enhancements do not unconditionally eliminate the unphysical behavior that can be seen in the IMC calculations. However, for fixed spatial and temporal grids, they suppress them and clearly work to make the solution more accurate. Overall, the work presented represents first steps along several paths that can be taken to improve the Monte Carlo simulations of TRT problems.
Fast and unbiased estimator of the time-dependent Hurst exponent.
Pianese, Augusto; Bianchi, Sergio; Palazzo, Anna Maria
2018-03-01
We combine two existing estimators of the local Hurst exponent to improve both the goodness of fit and the computational speed of the algorithm. An application with simulated time series is implemented, and a Monte Carlo simulation is performed to provide evidence of the improvement.
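A minimal stand-in for the kind of estimator being benchmarked is a windowed second-moment scaling estimate, validated by Monte Carlo on ordinary Brownian motion, whose true Hurst exponent is 0.5. The simple lag-ratio estimator below is an assumption for illustration, not the authors' combined estimator.

```python
# Windowed Hurst estimate from the scaling of increment variances:
# Var(X_{t+k} - X_t) ~ k^{2H}, so H = 0.5 * log2(Var at lag 2 / Var at lag 1).
import numpy as np

rng = np.random.default_rng(8)

def hurst_window(x):
    v1 = np.var(np.diff(x, 1))
    v2 = np.var(x[2:] - x[:-2])
    return 0.5 * np.log2(v2 / v1)

n_trials, win = 500, 256
ests = np.array([hurst_window(np.cumsum(rng.normal(size=win)))
                 for _ in range(n_trials)])        # Brownian windows, H = 0.5
print(f"mean H = {ests.mean():.3f} +/- {ests.std(ddof=1)/np.sqrt(n_trials):.3f}")
```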
Monte Carlo Study of Cosmic-Ray Propagation in the Galaxy and Diffuse Gamma-Ray Production
NASA Astrophysics Data System (ADS)
Huang, C.-Y.; Pohl, M.
This talk presents preliminary results on time-dependent cosmic-ray propagation in the Galaxy from a fully 3-dimensional Monte Carlo simulation. The distribution of cosmic rays (both protons and helium nuclei) in the Galaxy is studied on various spatial scales for both constant and variable cosmic-ray sources. The continuous diffuse gamma-ray emission produced by cosmic rays during the propagation is evaluated. The results will be compared with calculations made with other propagation models.
Collision of Physics and Software in the Monte Carlo Application Toolkit (MCATK)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sweezy, Jeremy Ed
2016-01-21
The topic is presented in a series of slides organized as follows: MCATK overview, development strategy, available algorithms, problem modeling (sources, geometry, data, tallies), parallelism, miscellaneous tools/features, example MCATK application, recent areas of research, and summary and future work. MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library with continuous energy neutron and photon transport. Designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP, it reads ACE formatted nuclear data generated by NJOY. The motivation behind MCATK was to reduce costs. MCATK physics involves continuous energy neutron & gamma transport with multi-temperature treatment, static eigenvalue (k_eff and α) algorithms, a time-dependent algorithm, and fission chain algorithms. MCATK geometry includes mesh geometries and solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross section plotters.
A Monte Carlo Simulation of Vesicle Exocytosis in the Buffered Diffusion of Calcium Channel Currents
NASA Astrophysics Data System (ADS)
Dimcovic, Z.; Eagan, T. P.; Brown, R. W.; Petschek, R. G.; Eppell, S. J.; Yunker, A. M. R.; Sharp, A. H.; McEnery, M. W.
2001-04-01
The voltage-dependent opening of calcium channels results in an influx of calcium ions that leads to the fusion of synaptic vesicles with the cell membrane, resulting in the release of neurotransmitters. This allows nerve impulses to be transmitted from one neuron to another. A Monte Carlo model of the three-dimensional diffusion of calcium following a channel opening is employed to estimate the space and time dependence of the calcium density. The effects of fixed and mobile calcium buffers are included, and a tethered nearby vesicle is considered. The importance of the size and location of the vesicle is studied. When the vesicle is ignored, these results are compared with the analytical calculations of Naraghi and Neher and the Monte Carlo calculations of Bennett et al. The finite-vesicle-size analysis offers new insights into the process of neurosecretion. Support: NIH MH55747, AHA 96001250, NSF 0086643, and CWRU Presidential Research Initiative grants.
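The model described lends itself to a compact random-walk caricature: ions released at a channel mouth diffuse in three dimensions, may be captured by a buffer, and are counted when they reach a vesicle-sized sphere a fixed distance away. The sketch below is such a toy, not the study's simulation; the geometry, diffusion constant, and pseudo-first-order buffer rate are order-of-magnitude guesses for illustration.

```python
# Toy 3-D Monte Carlo of buffered calcium diffusion from a channel to a
# tethered vesicle: Gaussian displacement steps, Poisson-like buffer capture,
# absorption on reaching the vesicle sphere.
import numpy as np

rng = np.random.default_rng(9)
D = 220.0            # um^2/s, free Ca2+ diffusion coefficient (typical order)
dt = 1e-7            # s, time step
k_buf = 1e4          # 1/s, pseudo-first-order buffer capture rate (guess)
vesicle_center = np.array([0.05, 0.0, 0.0])   # um, 50 nm from the channel
vesicle_radius = 0.02                         # um

n_ions, n_steps = 20_000, 2000
pos = np.zeros((n_ions, 3))
alive = np.ones(n_ions, dtype=bool)
hits = 0
step_sigma = np.sqrt(2 * D * dt)              # per-axis displacement sigma

for _ in range(n_steps):
    pos[alive] += rng.normal(0.0, step_sigma, (alive.sum(), 3))
    captured = alive & (rng.random(n_ions) < k_buf * dt)      # buffer binding
    reached = alive & (np.linalg.norm(pos - vesicle_center, axis=1) < vesicle_radius)
    hits += reached.sum()
    alive &= ~(captured | reached)

print(f"{hits}/{n_ions} ions reached the vesicle within {n_steps*dt*1e6:.0f} us")
```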
NASA Astrophysics Data System (ADS)
Ghosh, Karabi
2017-02-01
We briefly comment on a paper by N.A. Gentile [J. Comput. Phys. 230 (2011) 5100-5114] in which the Fleck factor has been modified to include the effects of temperature-dependent opacities in the implicit Monte Carlo algorithm developed by Fleck and Cummings [1,2]. Instead of the Fleck factor $f = 1/(1 + \beta c \Delta t \sigma_P)$, the author derived the modified Fleck factor $g = 1/\left(1 + \beta c \Delta t \sigma_P - \min\left[\sigma_P' \,(a T_r^4 - a T^4)\, c \Delta t / (\rho C_V),\ 0\right]\right)$ to be used in the Implicit Monte Carlo (IMC) algorithm in order to obtain more accurate solutions with much larger time steps. Here $\beta = 4 a T^3 / (\rho C_V)$, $\sigma_P$ is the Planck opacity, and $\sigma_P' = d\sigma_P/dT$ is its derivative with respect to the material temperature.
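The two factors quoted above are straightforward to transcribe. The snippet below does exactly that, in nondimensional toy units (a = c = 1) and with an invented σ_P ∝ T⁻³ opacity model, purely to show how g falls below f when the opacity decreases with temperature and the radiation field is hotter than the material.

```python
# Direct transcription of the standard Fleck factor f and Gentile's modified
# factor g from the comment above. Units are nondimensional (a = c = 1) and
# the opacity model sigma_P = sig0 * T**-3 is illustrative only.
a = c = 1.0

def fleck_factors(T, Tr, dt, rho, Cv, sig0=1.0):
    sigma_P = sig0 * T**-3              # toy temperature-dependent Planck opacity
    dsigma_dT = -3.0 * sig0 * T**-4     # its derivative w.r.t. material temperature
    beta = 4.0 * a * T**3 / (rho * Cv)
    f = 1.0 / (1.0 + beta * c * dt * sigma_P)
    g = 1.0 / (1.0 + beta * c * dt * sigma_P
               - min(dsigma_dT * (a * Tr**4 - a * T**4) * c * dt / (rho * Cv), 0.0))
    return f, g

f, g = fleck_factors(T=1.0, Tr=1.5, dt=0.5, rho=1.0, Cv=1.0)
print(f"f = {f:.4f}, modified g = {g:.4f}")   # g < f here, since Tr > T and dsigma/dT < 0
```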
Anderson, David F; Yuan, Chaojie
2018-04-18
A number of coupling strategies are presented for stochastically modeled biochemical processes with time-dependent parameters. In particular, the stacked coupling is introduced and is shown via a number of examples to provide an exceptionally low variance between the generated paths. This coupling will be useful in the numerical computation of parametric sensitivities and the fast estimation of expectations via multilevel Monte Carlo methods. We provide the requisite estimators in both cases.
Organ doses from radionuclides on the ground. Part I. Simple time dependences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob, P.; Paretzke, H.G.; Rosenbaum, H.
1988-06-01
Organ dose equivalents of the mathematical anthropomorphic phantoms ADAM and EVA for photon exposures from plane sources on the ground have been calculated by Monte Carlo photon transport codes and are tabulated in this article. The calculation takes into account the air-ground interface and a typical surface roughness, the energy and angular dependence of the photon fluence impinging on the phantom, and the time dependence of the contributions from daughter nuclides. Results are up to 35% higher than data reported in the literature for important radionuclides. This manuscript deals with radionuclides for which the time dependence of dose-equivalent rates and dose equivalents may be approximated by a simple exponential. A companion manuscript treats radionuclides with non-trivial time dependences.
Path integral pricing of Wasabi option in the Black-Scholes model
NASA Astrophysics Data System (ADS)
Cassagnes, Aurelien; Chen, Yu; Ohashi, Hirotada
2014-11-01
In this paper, using path integral techniques, we derive a formula for a propagator arising in the study of occupation-time derivatives. Using this result we derive a fair price for the cumulative Parisian option. After confirming the validity of the derived result using Monte Carlo simulation, a new type of heavily path-dependent derivative product is investigated. We derive an approximation for the fair price of our so-called Wasabi option and check the accuracy of our result with a Monte Carlo simulation.
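The Monte Carlo cross-check used for the cumulative Parisian case is easy to reproduce in outline: simulate Black-Scholes paths, accumulate the time spent above a barrier, and pay the call only if that occupation time exceeds a threshold. All contract and market parameters below are made up; this is the generic pricing check, not the paper's specific numbers.

```python
# Monte Carlo pricing sketch for a cumulative Parisian (occupation-time) call
# under Black-Scholes: payoff (S_T - K)^+ only if the total time spent above
# the barrier B exceeds the threshold theta.
import numpy as np

rng = np.random.default_rng(10)
S0, K, B = 100.0, 100.0, 105.0      # spot, strike, barrier
r, vol, T = 0.03, 0.2, 1.0          # rate, volatility, maturity (years)
theta = 0.25                        # required occupation time above B (years)
n_paths, n_steps = 20_000, 252
dt = T / n_steps

z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * vol**2) * dt + vol * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(log_paths)

occupation = (S > B).sum(axis=1) * dt           # time spent above the barrier
payoff = np.where(occupation > theta, np.maximum(S[:, -1] - K, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"cumulative Parisian call ~ {price:.3f} +/- {stderr:.3f}")
```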
Tao, Guohua; Miller, William H
2011-07-14
An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be generally applied to sampling rare events efficiently while avoiding being trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
Monte Carlo algorithms for Brownian phylogenetic models.
Horvilleur, Benjamin; Lartillot, Nicolas
2014-11-01
Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org.
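The core sampling step, drawing the fine-grained trajectory along a branch given the process values at its two ends, is a Brownian bridge, which can be sampled exactly point by point. The sketch below shows that generic bridge construction only, not the full phylogenetic sampler; the variance parameter and branch length are arbitrary.

```python
# Exact sequential sampling of a Brownian bridge along a branch: each interior
# point is Gaussian given the previous point and the fixed branch endpoint.
import numpy as np

rng = np.random.default_rng(11)

def brownian_bridge(x_start, x_end, t_len, n_points, sigma2=1.0):
    """Sample the process at n_points equally spaced interior times,
    conditioned on the two branch endpoints."""
    ts = np.linspace(0.0, t_len, n_points + 2)
    x = np.empty(n_points + 2)
    x[0], x[-1] = x_start, x_end
    for k in range(1, n_points + 1):
        dt_step = ts[k] - ts[k - 1]
        dt_rest = ts[-1] - ts[k]              # time remaining to the fixed end
        mean = x[k - 1] + dt_step / (dt_step + dt_rest) * (x[-1] - x[k - 1])
        var = sigma2 * dt_step * dt_rest / (dt_step + dt_rest)
        x[k] = rng.normal(mean, np.sqrt(var))
    return ts, x

ts, x = brownian_bridge(x_start=0.0, x_end=1.0, t_len=2.0, n_points=50)
print(x[:5])
```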
NMR diffusion simulation based on conditional random walk.
Gudbjartsson, H; Patz, S
1995-01-01
The authors introduce here a new, very fast simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR-diffusion simulation methods, such as the finite difference (FD) method, the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step, the computation time can therefore be reduced. Finally, the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
Mission Analysis, Operations, and Navigation Toolkit Environment (Monte) Version 040
NASA Technical Reports Server (NTRS)
Sunseri, Richard F.; Wu, Hsi-Cheng; Evans, Scott E.; Evans, James R.; Drain, Theodore R.; Guevara, Michelle M.
2012-01-01
Monte is a software set designed for use in mission design and spacecraft navigation operations. The system can process measurement data, design optimal trajectories and maneuvers, and do orbit determination, all in one application. For the first time, a single software set can be used for mission design and navigation operations. This eliminates problems due to different models and fidelities used in legacy mission design and navigation software. The unique features of Monte 040 include a blowdown thruster model for GRAIL (Gravity Recovery and Interior Laboratory) with associated pressure models, as well as an updated optimal-search capability (COSMIC) that facilitated mission design for ARTEMIS. Existing legacy software lacked the capabilities necessary for these two missions. There is also a mean orbital element propagator and an osculating-to-mean element converter that allows long-term orbital stability analysis for the first time in compiled code. The optimized trajectory search tool COSMIC allows users to place constraints and controls on their searches without any restrictions. Constraints may be user-defined and depend on trajectory information either forward or backwards in time. In addition, a long-term orbit stability analysis tool (morbiter) existed previously as a set of scripts on top of Monte. Monte is becoming the primary tool for navigation operations, a core competency at JPL. The mission design capabilities in Monte are becoming mature enough for use in project proposals as well as post-phase A mission design. Monte has three distinct advantages over existing software. First, it is being developed in a modern paradigm: object-oriented C++ and Python. Second, the software has been developed as a toolkit, which allows users to customize their own applications and allows the development team to implement requirements quickly, efficiently, and with minimal bugs. Finally, the software is managed in accordance with the CMMI (Capability Maturity Model Integration), where it has been appraised at maturity level 3.
NASA Astrophysics Data System (ADS)
Huang, B. Y.; Lu, Z. X.; Zhang, Y.; Xie, Y. L.; Zeng, M.; Yan, Z. B.; Liu, J.-M.
2016-05-01
The polarization-electric field hysteresis loops and the dynamics of polarization switching in a two-dimensional antiferroelectric (AFE) lattice subjected to a time-oscillating electric field E(t) of frequency f and amplitude E0 are investigated using Monte Carlo simulation based on the Landau-Devonshire phenomenological theory of antiferroelectrics. It is revealed that the AFE double-loop hysteresis area A, i.e., the energy loss in one cycle of polarization switching, exhibits a single-peak frequency dispersion A(f), suggesting a unique characteristic time for polarization switching, which is independent of E0 as long as E0 is larger than the quasi-static coercive field for the antiferroelectric-ferroelectric transitions. However, the dependence of the recoverable stored energy W on amplitude E0 is more complicated, depending on temperature T and frequency f. A dynamic scaling behavior of the energy loss dispersion A(f) over a wide range of E0 is obtained, confirming the unique characteristic time for polarization switching of an AFE lattice. The present simulation may shed light on the dynamics of energy storage and release in AFE thin films.
NASA Astrophysics Data System (ADS)
Schröder, Markus; Meyer, Hans-Dieter
2017-08-01
We propose a Monte Carlo method, "Monte Carlo Potfit," for transforming high-dimensional potential energy surfaces evaluated on discrete grid points into a sum-of-products form, more precisely into a Tucker form. To this end we use a variational ansatz in which we replace numerically exact integrals with Monte Carlo integrals. This largely reduces the numerical cost by avoiding the evaluation of the potential on all grid points and allows a treatment of surfaces up to 15-18 degrees of freedom. We furthermore show that the error made with this ansatz can be controlled and vanishes in certain limits. We present calculations on the potential of HFCO to demonstrate the features of the algorithm. To demonstrate the power of the method, we transformed a 15D potential of the protonated water dimer (Zundel cation) in a sum-of-products form and calculated the ground and lowest 26 vibrationally excited states of the Zundel cation with the multi-configuration time-dependent Hartree method.
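The core trick, replacing sums over the full direct-product grid by Monte Carlo estimates over randomly sampled grid points, can be sketched as follows. The toy 3D potential and trial basis function are assumptions for illustration; this is not the MCTDH/Potfit implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64                                   # points per degree of freedom
grid = np.linspace(-1.0, 1.0, n)

def V(q1, q2, q3):                       # toy 3D potential on the grid
    return q1**2 + q2**2 + q3**2 + 0.1 * q1 * q2 * q3

phi = np.cos(np.pi * grid / 2)           # one trial single-particle function

# exact overlap sum over all n**3 product-grid points
Q1, Q2, Q3 = np.meshgrid(grid, grid, grid, indexing="ij")
exact = np.einsum("ijk,i,j,k->", V(Q1, Q2, Q3), phi, phi, phi)

# Monte Carlo estimate from a small random subset of grid points
n_mc = 20_000
idx = rng.integers(0, n, size=(n_mc, 3))
vals = V(grid[idx[:, 0]], grid[idx[:, 1]], grid[idx[:, 2]])
mc = n**3 * np.mean(vals * phi[idx[:, 0]] * phi[idx[:, 1]] * phi[idx[:, 2]])
print(exact, mc)   # agree within Monte Carlo error at a fraction of the cost
```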
NASA Astrophysics Data System (ADS)
Preston, M. F.; Myers, L. S.; Annand, J. R. M.; Fissum, K. G.; Hansen, K.; Isaksson, L.; Jebali, R.; Lundin, M.
2014-04-01
Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrates a thorough understanding of the behavior of the detector system.
Monte Carlo Sampling in Fractal Landscapes
NASA Astrophysics Data System (ADS)
Leitão, Jorge C.; Lopes, J. M. Viana Parente; Altmann, Eduardo G.
2013-05-01
We design a random walk to explore fractal landscapes such as those describing chaotic transients in dynamical systems. We show that the random walk moves efficiently only when its step length depends on the height of the landscape via the largest Lyapunov exponent of the chaotic system. We propose a generalization of the Wang-Landau algorithm which constructs not only the density of states (transient time distribution) but also the correct step length. As a result, we obtain a flat-histogram Monte Carlo method which samples fractal landscapes in polynomial time, a dramatic improvement over the exponential scaling of traditional uniform-sampling methods. Our results are not limited by the dimensionality of the landscape and are confirmed numerically in chaotic systems with up to 30 dimensions.
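For reference, a minimal standard Wang-Landau skeleton of the kind the authors generalize is sketched below on a toy one-dimensional landscape; the height-dependent step length learned by their algorithm is omitted, and all system details are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_states = 200
height = (10 * np.abs(np.sin(0.1 * np.arange(n_states)))).astype(int)  # toy landscape
n_bins = height.max() + 1

log_g = np.zeros(n_bins)        # running density-of-states estimate
hist = np.zeros(n_bins)
f, state = 1.0, 0               # ln of the modification factor, current state

while f > 1e-4:
    for _ in range(20_000):
        new = (state + rng.integers(-5, 6)) % n_states       # symmetric proposal
        # accept with probability g(old)/g(new): flattens visits across bins
        if rng.random() < np.exp(log_g[height[state]] - log_g[height[new]]):
            state = new
        log_g[height[state]] += f
        hist[height[state]] += 1
    if hist.min() > 0.8 * hist.mean():  # flat-histogram criterion
        f /= 2.0
        hist[:] = 0
```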
Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin, E-mail: nzcho@kaist.ac.kr
2015-12-31
The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used and a linear approximation of fission source distributions during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.
Eruption history of the Tharsis shield volcanoes, Mars
NASA Technical Reports Server (NTRS)
Plescia, J. B.
1993-01-01
The Tharsis Montes volcanoes and Olympus Mons are giant shield volcanoes. Although estimates of their average surface age have been made using crater counts, the length of time required to build the shields has not been considered. Crater counts for the volcanoes indicate the constructs are young; average ages are Amazonian to Hesperian. In relative terms, Arsia Mons is the oldest, Pavonis Mons intermediate, and Ascraeus Mons the youngest of the Tharsis Montes shields; Olympus Mons is the youngest of the group. Depending upon the calibration, absolute ages range from 730 Ma to 3100 Ma for Arsia Mons and 25 Ma to 100 Ma for Olympus Mons. These absolute chronologies are highly model dependent, and indicate only the time surficial volcanism ceased, not the time over which the volcano was built. The problem of estimating the time necessary to build the volcanoes can be attacked in two ways. First, eruption rates from terrestrial and extraterrestrial examples can be used to calculate the period of time required to build the shields. Second, some relation of eruptive activity among the volcanoes can be assumed, such as that they all began at a specific time or were active sequentially, and the eruptive rate calculated. Volumes of the shield volcanoes were derived from topographic/volume data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cullen, Dermott E.
2017-01-30
Here I attempt to explain what physically happens when we pulse an object with neutrons, specifically what we expect the time dependent behavior of the neutron population to look like. Emphasis is on the time dependent emission of both prompt and delayed neutrons. I also describe how the TART Monte Carlo transport code models this situation; see the appendix for a complete description of the model used by TART. I will also show that, as we expect, MCNP and MERCURY, produce similar results using the same delayed neutron model (again, see the appendix).
Time Dependent Tomography of the Solar Corona in Three Spatial Dimensions
NASA Astrophysics Data System (ADS)
Butala, M. D.; Frazin, R. A.; Kamalabadi, F.
2006-12-01
The combination of the soon to be launched STEREO mission with SOHO will provide scientists with three simultaneous space-borne views of the Sun. The increase in available measurements will reduce the data acquisition time necessary to obtain 3D coronal electron density (N_e) estimates from coronagraph images using a technique called solar rotational tomography (SRT). However, the data acquisition period will still be long enough for the corona to dynamically evolve, requiring time dependent solar tomography. The Kalman filter (KF) would seem to be an ideal computational method for time dependent SRT. Unfortunately, the KF scales poorly with problem size and is, as a result, inapplicable. A Monte Carlo approximation to the KF called the localized ensemble Kalman filter was developed for massive applications and has the promise of making the time dependent estimation of the 3D coronal N_e possible. We present simulations showing that this method will make time dependent tomography in three spatial dimensions computationally feasible.
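A stripped-down, perturbed-observation ensemble Kalman filter update, the building block of the localized ensemble method mentioned above, can be sketched as follows. The dimensions and observation operator are toy stand-ins, and localization is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n_state, n_obs, n_ens = 500, 40, 64
H = rng.normal(size=(n_obs, n_state)) / n_state   # toy observation operator
R = 0.01 * np.eye(n_obs)                          # observation error covariance

def enkf_update(X, y, H, R, rng):
    """Perturbed-observation EnKF analysis step for ensemble matrix X."""
    anomalies = X - X.mean(axis=1, keepdims=True)
    Pf = anomalies @ anomalies.T / (X.shape[1] - 1)   # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
    # each member assimilates an independently perturbed observation vector
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, X.shape[1]).T
    return X + K @ (Y - H @ X)

ensemble = rng.normal(size=(n_state, n_ens))      # forecast ensemble
y = rng.normal(size=n_obs)                        # observations
analysis = enkf_update(ensemble, y, H, R, rng)
```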
Monte Carlo method for photon heating using temperature-dependent optical properties.
Slade, Adam Broadbent; Aguilar, Guillermo
2015-02-01
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature will greatly vary, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system that allows temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. Validation of the simulation was done using comparisons to established Monte Carlo simulations using constant properties, and a comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties can vary with temperature. The difference in results between variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation will return more accurate results of optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy in simulated results leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes.
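The feedback loop described above can be caricatured in a few lines: each batch of photon histories is tracked with the absorption coefficient evaluated at the current local temperature, the deposited energy drives a heat-conduction update, and the next batch sees the new properties. The 1D forward-only transport, the step-function mu_a(T) for frozen versus thawed material, and all numerical values are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)
nz, dz, dt = 100, 1e-4, 0.01          # cells, cell size (m), time step (s)
T = np.full(nz, 250.0)                # material starts frozen (K)
rho_c, k, src = 3.5e6, 0.5, 7e9       # heat capacity (J/m^3/K), conductivity, assumed source scale

def mu_a(temp):                       # assumed T-dependent absorption (1/m)
    return 300.0 if temp < 273.0 else 100.0

for step in range(100):
    heat = np.zeros(nz)
    for _ in range(500):              # photon histories for this time step
        z = 0
        while z < nz:
            # absorb in this cell with the probability set by the CURRENT T
            if rng.random() < 1.0 - np.exp(-mu_a(T[z]) * dz):
                heat[z] += 1.0
                break
            z += 1
    heat /= 500                       # mean absorbed fraction per cell
    T += dt * src * heat / rho_c      # deposit heat scored with current properties
    T[1:-1] += dt * k * np.diff(T, 2) / (rho_c * dz**2)  # explicit conduction
```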
Peter, Emanuel K; Pivkin, Igor V; Shea, Joan-Emma
2015-04-14
In Monte-Carlo simulations of protein folding, pathways and folding times depend on the appropriate choice of the Monte-Carlo move or process path. We developed a generalized set of process paths for a hybrid kinetic Monte Carlo-Molecular dynamics algorithm, which makes use of a novel constant time-update and allows formation of α-helical and β-stranded secondary structures. We apply our new algorithm to the folding of 3 different proteins: TrpCage, GB1, and TrpZip4. All three systems are seen to fold within the range of the experimental folding times. For the β-hairpins, we observe that loop formation is the rate-determining process followed by collapse and formation of the native core. Cluster analysis of both peptides reveals that GB1 folds with equal likelihood along a zipper or a hydrophobic collapse mechanism, while TrpZip4 follows primarily a zipper pathway. The difference observed in the folding behavior of the two proteins can be attributed to the different arrangements of their hydrophobic core, strongly packed, and dry in case of TrpZip4, and partially hydrated in the case of GB1.
Dynamic response analysis of structure under time-variant interval process model
NASA Astrophysics Data System (ADS)
Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao
2016-10-01
Due to the aggressiveness of the environmental factor, the variation of the dynamic load, the degeneration of the material property and the wear of the machine surface, parameters related with the structure are distinctly time-variant. A typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model which can be effectively used to deal with time-variant uncertainties with limited information. Two methods are then presented for the dynamic response analysis of the structure under the time-variant interval process model. The first one is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second one is the Monte Carlo method based on the Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by the Chebyshev polynomials which can be efficiently calculated, and then the variational range of the dynamic response is estimated according to the samples yielded by the Monte Carlo method. To solve the dependency phenomenon of the interval operation, the affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified by two numerical examples, including a spring-mass-damper system and a shell structure.
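A toy version of the MCM-CPE idea, building a Chebyshev surrogate of an expensive response once and then Monte Carlo sampling the interval variable through it, might look like this. The response function and interval bounds are assumptions; the affine-arithmetic treatment of dependency is omitted.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def response(p):                          # stand-in for an expensive dynamic solve
    return np.sin(3 * p) / (1 + p**2)

lo, hi = 0.8, 1.2                         # interval bounds of the uncertain parameter
nodes = np.cos((2 * np.arange(9) + 1) * np.pi / 18)   # Chebyshev nodes on [-1, 1]
p_nodes = 0.5 * (hi + lo) + 0.5 * (hi - lo) * nodes
coeffs = C.chebfit(nodes, response(p_nodes), deg=8)    # build the surrogate once

rng = np.random.default_rng(6)
samples = rng.uniform(-1.0, 1.0, 100_000)              # cheap MC over the interval
y = C.chebval(samples, coeffs)
print("response bounds ~", y.min(), y.max())
```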
Numerically exact full counting statistics of the nonequilibrium Anderson impurity model
NASA Astrophysics Data System (ADS)
Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; Cohen, Guy
2018-03-01
The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n -electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.
Monte Carlo simulation of electrothermal atomization on a desktop personal computer
NASA Astrophysics Data System (ADS)
Histen, Timothy E.; Güell, Oscar A.; Chavez, Iris A.; Holcombe, James A.
1996-07-01
Monte Carlo simulations have been applied to electrothermal atomization (ETA) using a tubular atomizer (e.g. graphite furnace) because of the complexity in the geometry, heating, molecular interactions, etc. The intense computational time needed to accurately model ETA often limited its effective implementation to the use of supercomputers. However, with the advent of more powerful desktop processors, this is no longer the case. A C-based program has been developed and can be used under Windows or DOS. With this program, basic parameters such as furnace dimensions, sample placement, furnace heating and kinetic parameters such as activation energies for desorption and adsorption can be varied to show the absorbance profile dependence on these parameters. Even data such as the time-dependent spatial distribution of analyte inside the furnace can be collected. The DOS version also permits input of external temperature-time data to permit comparison of simulated profiles with experimentally obtained absorbance data. The run-time versions are provided along with the source code. This article is an electronic publication in Spectrochimica Acta Electronica (SAE), the electronic section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by a diskette with a program (PC format), data files and text files.
NASA Astrophysics Data System (ADS)
Vatansever, Erol
2017-05-01
By means of the Monte Carlo simulation method with the Metropolis algorithm, we elucidate the thermal and magnetic phase transition behaviors of a ferrimagnetic core/shell nanocubic system driven by a time-dependent magnetic field. The particle core is composed of ferromagnetic spins, and it is surrounded by an antiferromagnetic shell. At the interface of the core/shell particle, we use antiferromagnetic spin-spin coupling. We simulate the nanoparticle using classical Heisenberg spins. After a detailed analysis, our Monte Carlo simulation results suggest that the present system exhibits unusual and interesting magnetic behaviors. For example, in the relatively lower temperature regions, an increment in the amplitude of the external field destroys the antiferromagnetism in the shell part of the nanoparticle, leading to a ground state with ferromagnetic character. Moreover, particular attention has been dedicated to the hysteresis behaviors of the system. For the first time, we show that frequency dispersions can be categorized into three groups for a fixed temperature for finite core/shell systems, as in the case of conventional bulk systems under the influence of an oscillating magnetic field.
Monte Carlo simulations within avalanche rescue
NASA Astrophysics Data System (ADS)
Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg
2016-04-01
Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
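The flavor of such a calculation can be conveyed with a toy sketch: given an assumed burial-depth distribution (a hypothetical lognormal, not the paper's data), Monte Carlo samples directly give the fraction of victims located for each candidate probing depth.

```python
import numpy as np

rng = np.random.default_rng(7)
# hypothetical burial-depth distribution in metres (illustrative assumption)
burial_depth = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

for probe_depth in (1.0, 1.5, 2.0, 2.5):
    p_found = np.mean(burial_depth <= probe_depth)  # fraction reachable by the probe
    print(f"probe to {probe_depth:.1f} m -> fraction located: {p_found:.3f}")
```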
Monte Carlo Studies of Phase Separation in Compressible 2-dim Ising Models
NASA Astrophysics Data System (ADS)
Mitchell, S. J.; Landau, D. P.
2006-03-01
Using high resolution Monte Carlo simulations, we study time-dependent domain growth in compressible 2-dim ferromagnetic (s=1/2) Ising models with continuous spin positions and spin-exchange moves [1]. Spins interact with slightly modified Lennard-Jones potentials, and we consider a model with no lattice mismatch and one with 4% mismatch. For comparison, we repeat calculations for the rigid Ising model [2]. For all models, large systems (512^2) and long times (10^6 MCS) are examined over multiple runs, and the growth exponent is measured in the asymptotic scaling regime. For the rigid model and the compressible model with no lattice mismatch, the growth exponent is consistent with the theoretically expected value of 1/3 [1] for Model B type growth. However, we find that non-zero lattice mismatch has a significant and unexpected effect on the growth behavior. Supported by the NSF. [1] D.P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, second ed. (Cambridge University Press, New York, 2005). [2] J. Amar, F. Sullivan, and R.D. Mountain, Phys. Rev. B 37, 196 (1988).
Anomalous Growth of Aging Populations
NASA Astrophysics Data System (ADS)
Grebenkov, Denis S.
2016-04-01
We consider a discrete-time population dynamics with age-dependent structure. At every time step, one of the alive individuals from the population is chosen randomly and removed with probability q_k depending on its age, whereas a new individual of age 1 is born with probability r. The model can also describe a single queue in which the service order is random while the service efficiency depends on a customer's "age" in the queue. We propose a mean field approximation to investigate the long-time asymptotic behavior of the mean population size. The age dependence is shown to lead to anomalous power-law growth of the population at the critical regime. The scaling exponent is determined by the asymptotic behavior of the probabilities q_k at large k. The mean field approximation is validated by Monte Carlo simulations.
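The model as stated is straightforward to simulate directly; a minimal sketch follows, with an assumed power-law form for the removal probabilities q_k.

```python
import numpy as np

rng = np.random.default_rng(8)

def q(age, alpha=1.0):            # assumed age-dependent removal probability
    return 1.0 / (1.0 + age)**alpha

def simulate(steps, r=0.5):
    ages = [1]
    sizes = np.empty(steps)
    for t in range(steps):
        if ages:
            i = rng.integers(len(ages))          # pick one individual at random
            if rng.random() < q(ages[i]):        # remove it with probability q_k
                ages.pop(i)
        if rng.random() < r:                     # birth of an age-1 individual
            ages.append(0)
        ages = [a + 1 for a in ages]             # everyone ages by one step
        sizes[t] = len(ages)
    return sizes

# mean population size after 5000 steps, averaged over independent runs
mean_size = np.mean([simulate(5000)[-1] for _ in range(20)])
```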
NASA Astrophysics Data System (ADS)
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
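The essence of the construction, pairs of trajectories driven by negatively correlated Poisson draws obtained by pushing antithetic uniforms u and 1-u through the inverse CDF, can be sketched for a pure-birth chain with an affine rate. This is illustrative only, not the authors' general reaction-network code.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(9)

def tau_leap_pair(x0, rate, tau, steps):
    """Advance two negatively correlated tau-leaping paths and average them."""
    xa, xb = x0, x0
    for _ in range(steps):
        u = rng.random()
        ka = poisson.ppf(u, rate(xa) * tau)        # antithetic pair of
        kb = poisson.ppf(1.0 - u, rate(xb) * tau)  # negatively correlated draws
        xa, xb = xa + ka, xb + kb
    return 0.5 * (xa + xb)                         # reduced-variance estimator

rate = lambda x: 1.0 + 0.01 * x                    # affine state-dependent rate
est = np.mean([tau_leap_pair(0.0, rate, 0.1, 100) for _ in range(1000)])
```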
Subdiffusion kinetics of nanoprecipitate growth and destruction in solid solutions
NASA Astrophysics Data System (ADS)
Sibatov, R. T.; Svetukhin, V. V.
2015-06-01
Based on fractional differential generalizations of the Ham and Aaron-Kotler precipitation models, we study the kinetics of subdiffusion-limited growth and dissolution of new-phase precipitates. We obtain the time dependence of the number of impurities and dimensions of new-phase precipitates. The solutions agree with the Monte Carlo simulation results.
Parsons, Tom
2008-01-01
Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
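A hedged sketch of this kind of brute-force parameter ranking follows, assuming a lognormal recurrence distribution and a hypothetical five-interval record; the prior ranges and the record are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)
intervals = np.array([132.0, 95.0, 210.0, 150.0, 87.0])   # hypothetical record, years

n_draws = 200_000
mu = rng.uniform(np.log(50), np.log(500), n_draws)        # candidate log-median recurrence
cov = rng.uniform(0.2, 1.5, n_draws)                      # candidate coefficient of variation
sigma = np.sqrt(np.log(1 + cov**2))                       # lognormal shape from the COV

x = np.log(intervals)[:, None]
# lognormal log-likelihood of the short record, up to a parameter-free constant
loglike = -0.5 * np.sum(((x - mu) / sigma)**2, axis=0) - len(intervals) * np.log(sigma)

best = np.argsort(loglike)[::-1][:1000]                   # ranked parameter sets
print("typical median recurrence ~", np.exp(mu[best]).mean(), "yr")
```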
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
Pratx, Guillem; Xing, Lei
2011-01-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation speeds on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represent a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging.
This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: (1) describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments; (2) explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging; (3) list and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations; (4) describe experimental methods used for validation of Monte Carlo simulations in medical imaging; (5) describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.
Serebrinsky, Santiago A
2011-03-01
We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
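For the rejection-free (BKL) case, the physical clock advance is the standard exponential increment set by the total rate; a minimal sketch (event rates are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(11)
rates = np.array([0.5, 2.0, 0.1])             # event rates (1/s), assumed

t = 0.0
counts = np.zeros_like(rates)
for _ in range(100_000):
    R = rates.sum()
    i = rng.choice(len(rates), p=rates / R)   # rejection-free event selection
    counts[i] += 1
    t += rng.exponential(1.0 / R)             # physical time increment
print(counts / t)                             # realized rates recover the inputs
```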
Statistical time-dependent model for the interstellar gas
NASA Technical Reports Server (NTRS)
Gerola, H.; Kafatos, M.; Mccray, R.
1974-01-01
We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.
Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media
NASA Astrophysics Data System (ADS)
Fichtner, A.; Simutė, S.
2017-12-01
We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small number of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a data base of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain data base can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude fewer samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior.) We apply our method to the Japanese Islands region where we previously constrained 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
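For orientation, a compact standard HMC step on a toy two-dimensional Gaussian posterior is sketched below; the authors' variant, error model, and strain data base are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(12)
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.5]]))  # toy posterior precision

def grad_neglogp(q):
    return cov_inv @ q

def hmc_step(q, eps=0.1, n_leap=20):
    """One HMC proposal: leapfrog integration plus Metropolis accept/reject."""
    p = rng.normal(size=q.shape)                 # resample momentum
    q_new, p_new = q.copy(), p.copy()
    for _ in range(n_leap):
        p_new -= 0.5 * eps * grad_neglogp(q_new)
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_neglogp(q_new)
    h_old = 0.5 * q @ cov_inv @ q + 0.5 * p @ p
    h_new = 0.5 * q_new @ cov_inv @ q_new + 0.5 * p_new @ p_new
    return q_new if rng.random() < np.exp(h_old - h_new) else q

q = np.zeros(2)
samples = np.empty((5000, 2))
for i in range(5000):
    q = hmc_step(q)
    samples[i] = q
```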
Modeling human tracking error in several different anti-tank systems
NASA Technical Reports Server (NTRS)
Kleinman, D. L.
1981-01-01
An optimal control model for generating time histories of human tracking errors in antitank systems is outlined. Monte Carlo simulations of human operator responses for three Army antitank systems are compared. System/manipulator dependent data comparisons reflecting human operator limitations in perceiving displayed quantities and executing intended control motions are presented. Motor noise parameters are also discussed.
Hedged Monte-Carlo: low variance derivative pricing with objective probabilities
NASA Astrophysics Data System (ADS)
Potters, Marc; Bouchaud, Jean-Philippe; Sestovic, Dragan
2001-01-01
We propose a new ‘hedged’ Monte-Carlo (HMC) method to price financial derivatives, which allows one to determine the optimal hedge simultaneously. The inclusion of the optimal hedging strategy allows one to reduce the financial risk associated with option trading, and for the very same reason reduces considerably the variance of our HMC scheme as compared to previous methods. The explicit accounting of the hedging cost naturally converts the objective probability into the ‘risk-neutral’ one. This allows a consistent use of purely historical time series to price derivatives and obtain their residual risk. The method can be used to price a large class of exotic options, including those with path dependent and early exercise features.
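A one-period toy version of the scheme conveys the idea: price and hedge are obtained jointly by least-squares minimization of the quadratic hedging residual over simulated 'historical' paths, so the fitted intercept plays the role of the option price. The dynamics and parameters below are assumptions; the published method works period by period with basis functions.

```python
import numpy as np

rng = np.random.default_rng(13)
x0, K, n_paths = 100.0, 100.0, 200_000
ret = rng.normal(0.0, 0.2, n_paths)          # objective (historical-like) returns
xT = x0 * np.exp(ret - 0.5 * 0.2**2)
payoff = np.maximum(xT - K, 0.0)             # call option payoff

# least squares: payoff ~ c0 + phi0 * (xT - x0)
# c0 is the hedged Monte Carlo price, phi0 the variance-minimizing hedge
A = np.column_stack([np.ones(n_paths), xT - x0])
(c0, phi0), *_ = np.linalg.lstsq(A, payoff, rcond=None)
print(f"price ~ {c0:.2f}, hedge ratio ~ {phi0:.2f}")
```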
Accuracy of Reaction Cross Section for Exotic Nuclei in Glauber Model Based on MCMC Diagnostics
NASA Astrophysics Data System (ADS)
Rueter, Keiti; Novikov, Ivan
2017-01-01
Parameters of a nuclear density distribution for exotic nuclei with halo or skin structures can be determined from the experimentally measured reaction cross section. In the presented work, to extract parameters such as nuclear size information for the halo and core, we compare experimental data on reaction cross sections with values obtained using expressions of the Glauber Model. These calculations are performed using a Markov Chain Monte Carlo algorithm. We discuss the accuracy of the Monte Carlo approach and its dependence on k*, the power-law turnover point in the discrete power spectrum of the random number sequence, and on the lag-1 autocorrelation time of the random number sequence.
Time dependent worldwide distribution of atmospheric neutrons and of their products. I, II, III.
NASA Technical Reports Server (NTRS)
Merker, M.; Light, E. S.; Verschell, H. J.; Mendell, R. B.; Korff, S. A.
1973-01-01
Review of the experimental results obtained in a series of measurements of the fast neutron cosmic ray spectrum by means of high-altitude balloons and aircraft. These results serve as a basis for checking a Monte Carlo calculation of the entire neutron distribution and its products. A calculation of neutron production and transport in the earth's atmosphere is then discussed for the purpose of providing a detailed description of the morphology of secondary neutron components. Finally, an analysis of neutron observations during solar particle events is presented. The Monte Carlo output is used to estimate the contribution of flare particles to fluctuations in the steady state neutron distributions.
SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output
Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.
2011-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297
Neutrino oscillation parameter sampling with MonteCUBES
NASA Astrophysics Data System (ADS)
Blennow, Mattias; Fernandez-Martinez, Enrique
2010-01-01
We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling.
Program summary:
Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator)
Catalogue identifier: AEFJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public Licence
No. of lines in distributed program, including test data, etc.: 69 634
No. of bytes in distributed program, including test data, etc.: 3 980 776
Distribution format: tar.gz
Programming language: C
Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed
Operating system: 32 bit and 64 bit Linux
RAM: typically a few MBs
Classification: 11.1
External routines: GLoBES [1,2] and routines/libraries used by GLoBES
Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439
Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, these new physics imply high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, such as those used in GLoBES [1,2].
Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space and has a complexity which does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software makes sure that the experimental definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES.
Additional comments: A Matlab GUI for interpretation of results is included in the distribution.
Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours.
References:
[1] P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333.
[2] P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187.
[3] S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
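The kind of sampling MonteCUBES performs can be illustrated with a generic random-walk Metropolis loop over a toy chi-squared surface. No GLoBES experiment definition is used here; the 15-dimensional target is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(14)

def chi2(theta):                     # stand-in for an experiment's chi-squared
    return np.sum((theta - 1.0)**2 / 0.05)

def mcmc(n_samples, dim=15, step=0.05):
    theta = np.ones(dim)
    chain = np.empty((n_samples, dim))
    c = chi2(theta)
    for i in range(n_samples):
        prop = theta + step * rng.normal(size=dim)   # random-walk proposal
        c_prop = chi2(prop)
        # accept with probability exp(-(chi2_new - chi2_old)/2)
        if c_prop <= c or rng.random() < np.exp(0.5 * (c - c_prop)):
            theta, c = prop, c_prop
        chain[i] = theta
    return chain

chain = mcmc(50_000)
```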
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Y; Singh, H; Islam, M
2014-06-01
Purpose: Output dependence on field size for uniform scanning beams, and the accuracy of treatment planning system (TPS) calculation are not well studied. The purpose of this work is to investigate the dependence of output on field size for uniform scanning beams and compare it among TPS calculation, measurements and Monte Carlo simulations. Methods: Field size dependence was studied using various field sizes between 2.5 cm diameter to 10 cm diameter. The field size factor was studied for a number of proton range and modulation combinations based on output at the center of the spread-out Bragg peak normalized to a 10 cm diameter field. Three methods were used and compared in this study: 1) TPS calculation, 2) ionization chamber measurement, and 3) Monte Carlo simulation. The XiO TPS (Elekta, St. Louis) was used to calculate the output factor using a pencil beam algorithm; a pinpoint ionization chamber was used for measurements; and the Fluka code was used for Monte Carlo simulations. Results: The field size factor varied with proton beam parameters, such as range, modulation, and calibration depth, and could decrease over 10% from a 10 cm to 3 cm diameter field for a large range proton beam. The XiO TPS predicted the field size factor relatively well at large field size, but could differ from measurements by 5% or more for small field and large range beams. Monte Carlo simulations predicted the field size factor within 1.5% of measurements. Conclusion: Output factor can vary largely with field size, and needs to be accounted for accurate proton beam delivery. This is especially important for small field beams such as in stereotactic proton therapy, where the field size dependence is large and TPS calculation is inaccurate. Measurements or Monte Carlo simulations are recommended for output determination for such cases.
NASA Astrophysics Data System (ADS)
Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2017-10-01
Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
NASA Astrophysics Data System (ADS)
Xie, Hui; Li, Min; Luo, Siqiang; Li, Yang; Zhou, Yueming; Cao, Wei; Lu, Peixiang
2017-12-01
We measure the photoelectron momentum distributions from atoms ionized by strong elliptically polarized laser fields at the wavelengths of 400 and 800 nm, respectively. The momentum distributions show distinct angular shifts, which sensitively depend on the electron energy. We find that the deflection angle with respect to the major axis of the laser ellipse decreases with the increase of the electron energy for large ellipticities. This energy-dependent angular shift is well reproduced by both numerical solutions of the time-dependent Schrödinger equation and the classical-trajectory Monte Carlo model. We show that the ionization time delays among the electrons with different energies are responsible for the energy-dependent angular shifts. On the other hand, for small ellipticities, we find the deflection angle increases with increasing electron energy, which might be caused by electron rescattering in the elliptically polarized fields.
Automated variance reduction for MCNP using deterministic methods.
Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B
2005-01-01
In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
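The weight-window mechanics that such an adjoint-derived importance map feeds can be sketched schematically. The window bounds and importance values below are illustrative, not MCNP's internal logic.

```python
import numpy as np

rng = np.random.default_rng(15)

def apply_weight_window(weight, w_low, w_high):
    """Return the (possibly split or rouletted) weights for one particle."""
    if weight < w_low:                        # Russian roulette: kill or promote
        survival = weight / w_low
        return [w_low] if rng.random() < survival else []
    if weight > w_high:                       # split into m daughter particles
        m = int(np.ceil(weight / w_high))
        return [weight / m] * m
    return [weight]                           # inside the window: unchanged

# windows set from an adjoint (importance) estimate: w_low ~ 1/importance,
# so deep, important regions keep more, lighter particles alive
importance = np.array([1.0, 3.0, 10.0, 30.0])
w_low = 0.5 / importance
print([apply_weight_window(1.0, wl, 5 * wl) for wl in w_low])
```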
Kinetic Monte Carlo modeling of chemical reactions coupled with heat transfer.
Castonguay, Thomas C; Wang, Feng
2008-03-28
In this paper, we describe two types of effective events for describing heat transfer in a kinetic Monte Carlo (KMC) simulation that may involve stochastic chemical reactions. Simulations employing these events are referred to as KMC-TBT and KMC-PHE. In KMC-TBT, heat transfer is modeled as the stochastic transfer of "thermal bits" between adjacent grid points. In KMC-PHE, heat transfer is modeled by integrating the Poisson heat equation for a short time. Either approach is capable of capturing the time dependent system behavior exactly. Both KMC-PHE and KMC-TBT are validated by simulating pure heat transfer in a rod and a square and modeling a heated desorption problem where exact numerical results are available. KMC-PHE is much faster than KMC-TBT and is used to study the endothermic desorption of a lattice gas. Interesting findings from this study are reported.
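A toy version of the KMC-TBT picture, heat on an insulated 1D rod carried by discrete "thermal bits" hopping between adjacent grid points, is sketched below; the bit counts and hop rule are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(16)
n_cells, n_bits = 50, 20_000
cells = np.zeros(n_cells, dtype=int)
cells[0] = n_bits                          # all heat initially at the left end

for step in range(200_000):
    src = rng.integers(n_cells)            # pick a random cell
    if cells[src] == 0:
        continue
    dst = src + (1 if rng.random() < 0.5 else -1)  # move one bit left or right
    if 0 <= dst < n_cells:                 # insulated (reflecting) boundaries
        cells[src] -= 1
        cells[dst] += 1

temperature = cells / cells.sum()          # relaxes toward the diffusive profile
```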
A 3D particle Monte Carlo approach to studying nucleation
NASA Astrophysics Data System (ADS)
Köhn, Christoph; Enghoff, Martin Bødker; Svensmark, Henrik
2018-06-01
The nucleation of sulphuric acid molecules plays a key role in the formation of aerosols. We here present a three-dimensional particle Monte Carlo model to study the growth of sulphuric acid clusters as well as its dependence on the ambient temperature and the initial particle density. We initiate a swarm of sulphuric acid-water clusters with a size of 0.329 nm with densities between 10^7 and 10^8 cm^-3 at temperatures between 200 and 300 K and a relative humidity of 50%. After every time step, we update the position of particles as a function of size-dependent diffusion coefficients. If two particles collide, we merge them, adding their volumes and masses. Conversely, we check after every time step whether a polymer evaporates, liberating a molecule. We present the spatial distribution as well as the size distribution calculated from individual clusters. We also calculate the nucleation rate of clusters with a radius of 0.85 nm as a function of time, initial particle density and temperature. The nucleation rates obtained from the presented model agree well with experimentally obtained values and those of a numerical model which serves as a benchmark of our code. In contrast to previous nucleation models, we here present for the first time a code capable of tracing individual particles and thus of capturing the physics related to the discrete nature of particles.
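A stripped-down version of such a cluster-tracking loop could look like the following; the diffusivity scaling, toy units, and crude evaporation step are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(17)
n0, box, v0, dt = 1000, 1.0, (0.329e-3)**3, 1.0   # toy units; v0 = monomer volume
pos = rng.uniform(0.0, box, size=(n0, 3))
vol = np.full(n0, v0)

for step in range(50):
    # size-dependent diffusion (assumed D ~ 1/r scaling, arbitrary prefactor)
    D = 1e-5 * (v0 / vol) ** (1.0 / 3.0)
    pos = (pos + rng.normal(0.0, 1.0, pos.shape) * np.sqrt(2 * D * dt)[:, None]) % box

    # merge clusters that touch (O(N^2) toy search; radii fixed during the pass)
    r = vol ** (1.0 / 3.0)
    alive = np.ones(len(vol), bool)
    for i in range(len(vol)):
        if not alive[i]:
            continue
        d = np.linalg.norm(pos - pos[i], axis=1)
        for j in np.where(alive & (d < r + r[i]) & (np.arange(len(vol)) > i))[0]:
            vol[i] += vol[j]               # conserve volume and mass on coalescence
            alive[j] = False
    pos, vol = pos[alive], vol[alive]

    # crude evaporation: clusters may lose one monomer volume per step
    evap = (rng.random(len(vol)) < 0.01) & (vol > v0)
    vol[evap] -= v0
```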
MCNP capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.
Harriss-Phillips, W M; Bezak, E; Yeoh, E K
2011-01-01
Objective A temporal Monte Carlo tumour growth and radiotherapy effect model (HYP-RT) simulating hypoxia in head and neck cancer has been developed and used to analyse parameters influencing cell kill during conventionally fractionated radiotherapy. The model was designed to simulate individual cell division up to 10⁸ cells, while incorporating radiobiological effects, including accelerated repopulation and reoxygenation during treatment. Method Reoxygenation of hypoxic tumours has been modelled using randomised increments of oxygen to tumour cells after each treatment fraction. The process of accelerated repopulation has been modelled by increasing the symmetrical stem cell division probability. Onset of both phenomena was modelled either immediately or after a number of weeks of simulated treatment. Results The extra dose required to control (total cell kill) hypoxic vs oxic tumours was 15–25% (8–20 Gy for 5×2 Gy per week), depending on the timing of accelerated repopulation onset. Reoxygenation of hypoxic tumours resulted in resensitisation and a reduction in the total dose required of approximately 10%, depending on the time of onset. When modelled simultaneously, accelerated repopulation and reoxygenation affected cell kill in hypoxic tumours in a similar manner to when the phenomena were modelled individually; however, the degree was altered, with non-additive results. Simulation results were in good agreement with standard linear quadratic theory; however, they differed for more complex comparisons where hypoxia, reoxygenation as well as accelerated repopulation effects were considered. Conclusion Simulations have quantitatively confirmed the need for patient individualisation in radiotherapy for hypoxic head and neck tumours, and have shown the benefits of modelling complex and dynamic processes using Monte Carlo methods. PMID:21933980
NASA Astrophysics Data System (ADS)
Schiavon, Nick; de Palmas, Anna; Bulla, Claudio; Piga, Giampaolo; Brunetti, Antonio
2016-09-01
A spectrometric protocol combining Energy Dispersive X-Ray Fluorescence Spectrometry with Monte Carlo simulations of experimental spectra using the XRMC code package has been applied for the first time to characterize the elemental composition of a series of famous Iron Age small-scale archaeological bronze replicas of ships (known as the “Navicelle”) from the Nuragic civilization in Sardinia, Italy. The proposed protocol is a useful, nondestructive and fast analytical tool for Cultural Heritage samples. In the Monte Carlo simulations, each sample was modeled as a multilayered object composed of two or three layers depending on the sample: where all are present, the three layers are the original bronze substrate, the surface corrosion patina and an outermost protective layer (Paraloid) applied during past restorations. The Monte Carlo simulations were able to account for the presence of the patina/corrosion layer as well as the presence of the Paraloid protective layer. They also accounted for the roughness effect commonly found at the surface of corroded metal archaeological artifacts. In this respect, the Monte Carlo simulation approach adopted here was, to the best of our knowledge, unique and made it possible to determine the bronze alloy composition together with the thickness of the surface layers without previously removing the surface patinas, a process that could threaten the preservation of precious archaeological/artistic artifacts for future generations.
The Erebus Montes Debris-Apron Population: Investigation of Amazonian Landscape Evolution
NASA Astrophysics Data System (ADS)
van Gasselt, S.; Orgel, C.; Schulz, J.
2014-04-01
Lobate debris aprons are considered to be indicators for the presence of ice and water reservoirs on Mars and are therefore sensitive to climate variability. The northern hemisphere of Mars is characterized by three major populations of debris aprons (see, e.g. [12]): (1) the Tempe Terra/Mareotis Fossae region [2, 5], (2) the Deuteronilus/Protonilus Mensae [1, 4, 8], and (3) the Phlegra Montes (PM) [3]. The broader PM area can be subdivided into a number of smaller populations dispersed across parts of Arcadia Planitia (see figure 1), of which the Erebus Montes, located at 180–195°E, 25–41°N, form a well-confined set of features. We here focus on the age and erosional characteristics of the northern Erebus Montes (see inset in figure 1). Our study makes use of panchromatic image data obtained by the High Resolution Stereo Camera (HRSC) [9, 6] onboard Mars Express and the Context Camera (CTX) [7] onboard Mars Reconnaissance Orbiter. Image data analyses are supported by digital terrain-model data derived from HRSC-based stereo imaging [10] and from the Mars Orbiter Laser Altimeter (MOLA) [11]. We performed detailed geologic mapping at a scale of 1:10,000 and analysed age relationships and erosion rates following a similar approach as outlined in [5] for the northern part of the Erebus Montes. The aim of this study is to compare feature characteristics to other populations in order to assess the timing and overarching control of landform evolution in the Martian northern hemisphere. The Erebus Montes compare geologically well with the Phlegra Montes in terms of individual feature morphologies. The concentration based on cluster analysis (figure 1) shows an up to 10 times higher concentration of remnants per 25 km² area, peaking at 3.4×10⁻³ features for the Erebus Montes. Debris aprons show well-defined age signals ranging from 15 Myr up to 145 Myr. Some units even show continuous degradation, implying active denudation of the Noachian- to Hesperian-aged remnant massifs. Based on the current status of investigations, latitudinally dependent age trends cannot be observed, which is likely related to the small extent of the northern region. Erosion rates determined at selected remnants are comparable to the Tempe Terra region with 0.1–0.3 mm·a⁻¹ (100–300 B) [5], depending on the model used for our calculations. An explanation for such high Amazonian rates could be that much of the apron material has not been accumulated through denudation processes but by atmospheric deposition and removal of material from high-relief areas.
Morse Monte Carlo Radiation Transport Code System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emmett, M.B.
1975-02-01
The report contains sections describing the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine if the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)
Simulation of atomic diffusion in the Fcc NiAl system: A kinetic Monte Carlo study
Alfonso, Dominic R.; Tafen, De Nyago
2015-04-28
The atomic diffusion in fcc NiAl binary alloys was studied by kinetic Monte Carlo simulation. The environment-dependent hopping barriers were computed using a pair interaction model whose parameters were fitted to relevant data derived from electronic structure calculations. Long-time diffusivities were calculated and the effect of composition change on the tracer diffusion coefficients was analyzed. These results indicate that this variation has a noticeable impact on the atomic diffusivities. A reduction in the mobility of both Ni and Al is demonstrated with increasing Al content. As a result, an examination of the pair interaction between atoms was carried out for the purpose of understanding the predicted trends.
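A generic sketch of the rate construction such a study relies on is given below: Arrhenius hop rates with barriers built from pair interactions with the local environment, and BKL/Gillespie-style event selection with the usual stochastic clock. The barrier model, pair energies, and constants are placeholders, not the fitted NiAl parameters.

    import math
    import random

    KB, T, NU = 8.617e-5, 1000.0, 1e13   # eV/K, K, attempt frequency (assumed)
    # toy pair energies in eV, keyed by alphabetically sorted species pair
    EPS = {("Al", "Al"): -0.05, ("Al", "Ni"): -0.15, ("Ni", "Ni"): -0.10}

    def barrier(species, neighbors, e0=0.8):
        """Hop barrier = base barrier + pair-interaction penalty from the local
        environment (toy model; real parameters come from electronic-structure fits)."""
        bond = sum(EPS[tuple(sorted((species, n)))] for n in neighbors)
        return max(0.05, e0 + 0.5 * bond)

    def pick_event(events, rng):
        """BKL-style selection from events = [(rate, payload), ...].
        Returns (payload, dt) with the Gillespie clock increment."""
        total = sum(r for r, _ in events)
        x, acc = rng.random() * total, 0.0
        for r, payload in events:
            acc += r
            if x < acc:
                return payload, -math.log(rng.random()) / total
        return events[-1][1], -math.log(rng.random()) / total

    # usage: choose between two hops of a Ni atom with given neighbor shells
    rng = random.Random(0)
    ev = [(NU * math.exp(-barrier("Ni", ["Ni", "Al", "Al", "Ni"]) / (KB * T)), "hop-east"),
          (NU * math.exp(-barrier("Ni", ["Al", "Al", "Al", "Ni"]) / (KB * T)), "hop-west")]
    print(pick_event(ev, rng))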
NASA Astrophysics Data System (ADS)
Yao, Yao; Si, Wei; Hou, Xiaoyuan; Wu, Chang-Qin
2012-06-01
The dynamic disorder model for charge carrier transport in organic semiconductors has been extensively studied in recent years. Although it successfully determines the value of the bandlike mobility in organic crystalline materials, it cannot describe incoherent hopping, the typical transport characteristic of amorphous molecular semiconductors. In this work, the decoherence process is taken into account via a phenomenological parameter, the decoherence time, and projective and Monte Carlo methods are applied to this model to determine the waiting time and thus the diffusion coefficient. We find that the transport changes from coherent to incoherent for a sufficiently short decoherence time, which indicates the essential role of the decoherence time in determining the type of transport in organics. We have also discussed the spatial extent of carriers for different decoherence times, and the transition from delocalization (the carrier resides on about 10 molecules) to localization is observed. Based on the experimental results on spatial extent, we estimate that the decoherence time in pentacene is of the order of 1 ps. Furthermore, the dependence of the diffusion coefficient on the decoherence time is also investigated, and corresponding experiments are discussed.
Zarzycki, Piotr; Rosso, Kevin M
2009-06-16
Replica kinetic Monte Carlo simulations were used to study the characteristic time scales of potentiometric titration of metal oxides and (oxy)hydroxides. The effects of surface heterogeneity and surface transformation on the titration kinetics were also examined. Two characteristic relaxation times are often observed experimentally, with the trailing slower part attributed to surface nonuniformity, porosity, polymerization, amorphization, and other dynamic surface processes induced by unbalanced surface charge. However, our simulations show that these two characteristic relaxation times are intrinsic to the proton-binding reaction for energetically homogeneous surfaces, and therefore surface heterogeneity or transformation need not be invoked. All such second-order surface processes are, however, found to intensify the separation and distinction of the two kinetic regimes. The effects of surface energetic-topographic nonuniformity, as well as dynamic surface transformation and interface roughening/smoothing, were described in a statistical fashion. Furthermore, our simulations show that a shift in the point-of-zero charge is expected from increased titration speed, and the pH-dependence of the titration measurement error is in excellent agreement with experimental studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Eun Young; Lee, Choonsik; Mcguire, Lynn
Purpose: To calculate organ S values (mGy/Bq-s) and effective doses per time-integrated activity (mSv/Bq-s) for pediatric and adult family members exposed to an adult male or female patient treated with I-131, using a series of hybrid computational phantoms coupled with a Monte Carlo radiation transport technique. Methods: A series of pediatric and adult hybrid computational phantoms were employed in the study. Three different exposure scenarios were considered: (1) standing face-to-face exposures between an adult patient and pediatric or adult family phantoms at five different separation distances; (2) an adult female patient holding her newborn child; and (3) a 1-yr-old child standing on the lap of an adult female patient. For the adult patient model, two different thyroid-related diseases were considered: hyperthyroidism and differentiated thyroid cancer (DTC), with corresponding internal distributions of ¹³¹I. A general-purpose Monte Carlo code, MCNPX v2.7, was used to perform the Monte Carlo radiation transport. Results: The S values show a strong dependency on age and organ location within the family phantoms at short distances. The S values and effective dose per time-integrated activity from the adult female patient phantom are relatively high at shorter distances and to younger family phantoms. At a distance of 1 m, effective doses per time-integrated activity are lower than the values based on the NRC (Nuclear Regulatory Commission) by a factor of 2 for both adult male and female patient phantoms. The S values to target organs from the hyperthyroid-patient source distribution strongly depend on the height of the exposed family phantom, so that their values rapidly decrease with decreasing height of the family phantom. Active marrow of the 10-yr-old phantom shows the highest S values among family phantoms for the DTC-patient source distribution. In the exposure scenario of mother and baby, S values and effective doses per time-integrated activity to the newborn and 1-yr-old phantoms for a hyperthyroid-patient source are higher than the values for a DTC-patient source. Conclusions: The authors performed realistic assessments of ¹³¹I organ S values and effective dose per time-integrated activity from adult patients treated for hyperthyroidism and DTC to family members. In addition, the authors’ studies consider Monte Carlo simulated “mother and baby/child” exposure scenarios for the first time. Based on these results, the authors reconfirm the strong conservatism underlying the point source method recommended by the US NRC. The authors recommend that various factors such as the type of the patient's disease, the age of family members, and the distance/posture between the patient and family members must be carefully considered to provide realistic dose estimates for patient-to-family exposures.
The effect of a hot, spherical scattering cloud on quasi-periodic oscillation behavior
NASA Astrophysics Data System (ADS)
Bussard, R. W.; Weisskopf, M. C.; Elsner, R. F.; Shibazaki, N.
1988-04-01
A Monte Carlo technique is used to investigate the effects of a hot electron scattering cloud surrounding a time-dependent X-ray source. Results are presented for the time-averaged emergent energy spectra and the mean residence time in the cloud as a function of energy. Moreover, after Fourier transforming the scattering Green's function, it is shown how the cloud affects both the observed power spectrum of a time-dependent source and the cross spectrum (Fourier transform of a cross correlation between energy bands). It is found that the power spectra intrinsic to the source are related to those observed by a relatively simple frequency-dependent multiplicative factor (a transmission function). The cloud can severely attenuate high frequencies in the power spectra, depending on optical depth, and, at lower frequencies, the transmission function has roughly a Lorentzian shape. It is also found that if the intrinsic energy spectrum is constant in time, the phase of the cross spectrum is determined entirely by scattering. Finally, the implications of the results for studies of the X-ray quasi-periodic oscillators are discussed.
Estimation of gloss from rough surface parameters
NASA Astrophysics Data System (ADS)
Simonsen, Ingve; Larsen, Åge G.; Andreassen, Erik; Ommundsen, Espen; Nord-Varhaug, Katrin
2005-12-01
Gloss is a quantity used in the optical industry to quantify and categorize materials according to how well they scatter light specularly. With the aid of phase perturbation theory, we derive an approximate expression for this quantity for a one-dimensional randomly rough surface. It is demonstrated that gloss depends in an exponential way on two dimensionless quantities associated with the surface randomness: the root-mean-square roughness times the perpendicular momentum transfer for the specular direction, and a correlation-function-dependent factor times a lateral momentum variable associated with the collection angle. Rigorous Monte Carlo simulations are used to assess the quality of this approximation, and good agreement is observed over large regions of parameter space.
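In schematic form, the stated exponential dependence can be written as follows (our paraphrase of the abstract, not the paper's exact expression; σ is the rms roughness, q⊥ the perpendicular momentum transfer for the specular direction, q∥ the lateral momentum variable set by the collection angle, and γ the correlation-function-dependent factor):

    G ∝ exp[ −(σ q⊥)² − γ q∥ ],    with q⊥ = (4π/λ) cos θ₀

for light of wavelength λ incident at polar angle θ₀. The first exponent is the familiar Debye-Waller-like attenuation of the coherent (specular) component; the second encodes how much diffusely scattered light the finite collection angle admits.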
Population extinction under bursty reproduction in a time-modulated environment
NASA Astrophysics Data System (ADS)
Vilk, Ohad; Assaf, Michael
2018-06-01
In recent years nondemographic variability has been shown to greatly affect the dynamics of stochastic populations. For example, nondemographic noise in the form of a bursty reproduction process with an a priori unknown burst size, or environmental variability in the form of time-varying reaction rates, have separately been found to dramatically impact the extinction risk of isolated populations. In this work we investigate the extinction risk of an isolated population under the combined influence of these two types of nondemographic variation. Using the so-called momentum-space Wentzel-Kramers-Brillouin (WKB) approach and accounting for the explicit time dependence in the reaction rates, we arrive at a set of time-dependent Hamilton equations. We then evaluate the population's extinction risk by finding the instanton of the time-perturbed Hamiltonian numerically, whereas analytical expressions are presented in particular limits using various perturbation techniques. We focus on two classes of time-varying environments: periodically varying rates corresponding to seasonal effects, and a sudden decrease in the birth rate corresponding to a catastrophe. All our theoretical results are tested against numerical Monte Carlo simulations with time-dependent rates and also against a numerical solution of the corresponding time-dependent Hamilton equations.
RENEW v3.2 user's manual, maintenance estimation simulation for Space Station Freedom Program
NASA Technical Reports Server (NTRS)
Bream, Bruce L.
1993-01-01
RENEW is a maintenance event estimation simulation program developed in support of the Space Station Freedom Program (SSFP). This simulation uses reliability and maintainability (R&M) and logistics data to estimate both average and time-dependent maintenance demands. The simulation uses Monte Carlo techniques to generate failure and repair times as a function of the R&M and logistics parameters. The estimates are generated for a single type of orbital replacement unit (ORU). The simulation has been in use by the SSFP Work Package 4 prime contractor, Rocketdyne, since January 1991. The RENEW simulation gives closer estimates of performance since it uses a time-dependent approach and captures more of the factors affecting ORU failure and repair than steady-state average calculations. RENEW gives both average and time-dependent demand values. Graphs of failures over the mission period and yearly failure occurrences are generated. The average demand rate for the ORU over the mission period is also calculated. While RENEW displays the results in graphs, the results are also available in a data file for further use by spreadsheets or other programs. The process of using RENEW starts with keyboard entry of the R&M and operational data. Once entered, the data may be saved in a data file for later retrieval. The parameters may be viewed and changed after entry using RENEW. The simulation program runs the number of Monte Carlo simulations requested by the operator. Plots and tables of the results can be viewed on the screen or sent to a printer. The results of the simulation are saved along with the input data. Help screens are provided with each menu and data entry screen.
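RENEW itself is a closed program, but the core Monte Carlo idea it describes, drawing failure and repair times from R&M distributions and tallying demands over the mission, can be sketched in a few lines. The distributions and parameter values below are illustrative assumptions, not Work Package 4 data.

    import random

    def simulate_oru_demand(mtbf_h, repair_h, mission_h, n_runs, rng=random.Random(0)):
        """Toy maintenance-demand estimator: exponential times-to-failure,
        fixed repair/turnaround delay, a single ORU slot.
        Returns the per-run failure counts over the mission period."""
        counts = []
        for _ in range(n_runs):
            t, failures = 0.0, 0
            while True:
                t += rng.expovariate(1.0 / mtbf_h)  # time to next failure
                if t > mission_h:
                    break
                failures += 1
                t += repair_h                        # unit down during repair
            counts.append(failures)
        return counts

    # usage: 1-year MTBF, 10-day turnaround, 10-year mission (all assumed)
    runs = simulate_oru_demand(mtbf_h=8760.0, repair_h=240.0,
                               mission_h=10 * 8760.0, n_runs=1000)
    print("average demand over mission:", sum(runs) / len(runs))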
A Monte Carlo model for the internal dosimetry of choroid plexuses in nuclear medicine procedures.
Amato, Ernesto; Cicone, Francesco; Auditore, Lucrezia; Baldari, Sergio; Prior, John O; Gnesin, Silvano
2018-05-01
Choroid plexuses are vascular structures located in the brain ventricles, showing specific uptake of some diagnostic and therapeutic radiopharmaceuticals currently under clinical investigation, such as integrin-binding arginine-glycine-aspartic acid (RGD) peptides. No specific geometry for the choroid plexuses has been implemented in commercially available software for internal dosimetry. The aims of the present study were to assess the dependence of the absorbed dose to the choroid plexuses on the organ geometry implemented in Monte Carlo simulations, and to propose an analytical model for the internal dosimetry of these structures for the ¹⁸F, ⁶⁴Cu, ⁶⁷Cu, ⁶⁸Ga, ⁹⁰Y, ¹³¹I and ¹⁷⁷Lu nuclides. A GAMOS Monte Carlo simulation based on direct organ segmentation was taken as the gold standard to validate a second simulation based on a simplified geometrical model of the choroid plexuses. Both simulations were compared with the OLINDA/EXM sphere model. The gold standard and the simplified geometrical model gave similar dosimetry results (dose difference < 3.5%), indicating that the latter can be considered a satisfactory approximation of the real geometry. In contrast, the sphere model systematically overestimated the absorbed dose compared to both Monte Carlo models (range: 4-50% dose difference), depending on the isotope energy and organ mass. Therefore, the simplified geometric model was adopted to introduce an analytical approach for choroid plexuses dosimetry in the mass range 2-16 g. The proposed model enables the estimation of the choroid plexuses dose by a simple bi-parametric function, once the organ mass and the residence time of the radiopharmaceutical under investigation are provided.
Angular momentum evolution in dark matter haloes: a study of the Bolshoi and Millennium simulations
NASA Astrophysics Data System (ADS)
Contreras, S.; Padilla, N.; Lagos, C. D. P.
2017-12-01
We use three different cosmological dark matter simulations to study how the orientation of the angular momentum (AM) vector in dark matter haloes evolves with time. We find that haloes in this kind of simulation are constantly affected by spurious changes of mass, which translate into an artificial change in the orientation of the AM. After removing the haloes affected by artificial mass change, we find that the change in the orientation of the AM vector is correlated with time. The change in direction of the AM vector (i.e. the angle subtended by the AM vector in two consecutive time-steps) depends on the change of mass experienced by a halo, the time over which that change of mass occurs, and the halo mass. We create a Monte Carlo simulation that reproduces the change of angle and direction of the AM vector. We reproduce the angular separation of the AM vector from a lookback time of 8.5 Gyr to today (α) with an accuracy of approximately 0.05 in cos(α). We are releasing this Monte Carlo simulation together with this publication. We also create a Monte Carlo simulation that reproduces the change of the AM modulus. We find that haloes in denser environments display the most dramatic evolution in their AM direction, as do haloes with a lower specific AM modulus. These relations could be used to improve the way we follow the AM vector in low-resolution simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehmann, J; University of Sydney, Sydney; RMIT University, Melbourne
2014-06-01
Purpose: Assess the angular dependence of the nanoDot OSLD system in MV X-ray beams at depth and mitigate this dependence for measurements in phantoms. Methods: Measurements for 6 MV photons at 3 cm and 10 cm depth and Monte Carlo simulations were performed. Two special holders were designed which allow a nanoDot dosimeter to be rotated around the center of its sensitive volume (5 mm diameter disk). The first holder positions the dosimeter disk perpendicular to the beam (en-face). It then rotates until the disk is parallel with the beam (edge on). This is referred to as Setup 1. The second holder positions the disk parallel to the beam (edge on) for all angles (Setup 2). Monte Carlo simulations using GEANT4 considered the detector and housing in detail based on microCT data. Results: An average drop in response of 1.4±0.7% (measurement) and 2.1±0.3% (Monte Carlo) for the 90° orientation compared to 0° was found for Setup 1. Monte Carlo simulations also showed a strong dependence of the effect on the composition of the sensitive layer. Assuming 100% active material (Al₂O₃) results in a 7% drop in response for 90° compared to 0°. Assuming the layer to be completely water results in a flat response (within the simulation uncertainty of about 1%). For Setup 2, measurements and Monte Carlo simulations found the angular dependence of the dosimeter to be below 1% and within the measurement uncertainty. Conclusion: The nanoDot dosimeter system exhibits a small angular dependence of approximately 2%. Changing the orientation of the dosimeter so that a coplanar beam arrangement always hits the detector material edge on reduces the angular dependence to within the measurement uncertainty of about 1%. This makes the dosimeter more attractive for phantom-based clinical measurements and audits with multiple coplanar beams. The Australian Clinical Dosimetry Service is a joint initiative between the Australian Department of Health and the Australian Radiation Protection and Nuclear Safety Agency.
NASA Astrophysics Data System (ADS)
Shankaraiah, N.; Murthy, K. P. N.; Lookman, T.; Shenoy, S. R.
2015-06-01
Entropy barriers and aging states appear in martensitic structural-transition models, slowly re-equilibrating after temperature quenches, under Monte Carlo dynamics. Concepts from protein folding and aging harmonic oscillators turn out to be useful in understanding these nonequilibrium evolutions. We show how the athermal, nonactivated delay time for seeded parent-phase austenite to convert to product-phase martensite arises from an identified entropy barrier in Fourier space. In an aging state of low Monte Carlo acceptances, the strain structure factor makes constant-energy searches for rare pathways to enter a Brillouin zone "golf hole" enclosing negative-energy states, and to suddenly release entropically trapped stresses. In this context, a stress-dependent effective temperature can be defined, that re-equilibrates to the quenched bath temperature.
Smooth time-dependent receiver operating characteristic curve estimators.
Martínez-Camblor, Pablo; Pardo-Fernández, Juan Carlos
2018-03-01
The receiver operating characteristic curve is a popular graphical method often used to study the diagnostic capacity of continuous (bio)markers. When the considered outcome is a time-dependent variable, two main extensions have been proposed: the cumulative/dynamic receiver operating characteristic curve and the incident/dynamic receiver operating characteristic curve. In both cases, the main problem in developing appropriate estimators is the estimation of the joint distribution of the time-to-event and marker variables. As usual, different approximations lead to different estimators. In this article, the authors explore the use of a bivariate kernel density estimator which accounts for censored observations in the sample and produces smooth estimators of the time-dependent receiver operating characteristic curves. The performance of the resulting cumulative/dynamic and incident/dynamic receiver operating characteristic curves is studied by means of Monte Carlo simulations. Additionally, the influence of the choice of the required smoothing parameters is explored. Finally, two real applications are considered. An R package is also provided as a complement to this article.
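For orientation, a bare-bones cumulative/dynamic ROC estimator is sketched below, deliberately ignoring censoring so that cases (T ≤ t) and controls (T > t) can be formed directly; the article's contribution is precisely the smooth, censoring-aware kernel version of this construction. Function names and data are illustrative.

    import numpy as np

    def cumulative_dynamic_roc(marker, time, t, thresholds=None):
        """Empirical cumulative/dynamic ROC at horizon t, censoring ignored:
        cases have event time <= t, controls have event time > t."""
        marker, time = np.asarray(marker), np.asarray(time)
        cases, controls = marker[time <= t], marker[time > t]
        if thresholds is None:
            thresholds = np.unique(marker)
        tpr = np.array([(cases > c).mean() for c in thresholds])
        fpr = np.array([(controls > c).mean() for c in thresholds])
        return fpr, tpr

    # usage with synthetic data: higher marker values shorten survival
    rng = np.random.default_rng(0)
    m = rng.normal(size=500)
    T = rng.exponential(np.exp(-m))          # event times, no censoring here
    fpr, tpr = cumulative_dynamic_roc(m, T, t=1.0)
    order = np.argsort(fpr)                  # trapezoidal AUC(t)
    auc = np.sum(np.diff(fpr[order]) * 0.5 * (tpr[order][1:] + tpr[order][:-1]))
    print("AUC(t=1) ~", auc)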
NASA Astrophysics Data System (ADS)
Schneider, Kai; Kadoch, Benjamin; Bos, Wouter
2017-11-01
The angle between two subsequent particle displacement increments is evaluated as a function of the time lag. The directional change of particles can thus be quantified at different scales and multiscale statistics can be performed. Flow-dependent and geometry-dependent features can be distinguished. The mean angle satisfies scaling behaviors for short time lags based on the smoothness of the trajectories. For intermediate time lags a power-law behavior can be observed for some turbulent flows, which can be related to Kolmogorov scaling. The long-time behavior depends on the confinement geometry of the flow. We show that the shape of the probability distribution function of the directional change can be well described by a Fisher distribution. Results for two-dimensional (direct and inverse cascade) and three-dimensional turbulence with and without confinement illustrate the properties of the proposed multiscale statistics. The presented Monte Carlo simulations allow the geometry-dependent and flow-dependent features to be disentangled. Finally, we also analyze trajectories of football players, which are, in general, not randomly spaced on a field.
Fundamental limits of scintillation detector timing precision
NASA Astrophysics Data System (ADS)
Derenzo, Stephen E.; Choong, Woon-Seng; Moses, William W.
2014-07-01
In this paper we review the primary factors that affect the timing precision of a scintillation detector. Monte Carlo calculations were performed to explore the dependence of the timing precision on the number of photoelectrons, the scintillator decay and rise times, the depth of interaction uncertainty, the time dispersion of the optical photons (modeled as an exponential decay), the photodetector rise time and transit time jitter, the leading-edge trigger level, and electronic noise. The Monte Carlo code was used to estimate the practical limits on the timing precision for an energy deposition of 511 keV in 3 mm × 3 mm × 30 mm Lu₂SiO₅:Ce and LaBr₃:Ce crystals. The calculated timing precisions are consistent with the best experimental literature values. We then calculated the timing precision for 820 cases that sampled scintillator rise times from 0 to 1.0 ns, photon dispersion times from 0 to 0.2 ns, photodetector time jitters from 0 to 0.5 ns fwhm, and A from 10 to 10 000 photoelectrons per ns decay time. Since the timing precision R was found to depend on A⁻¹/² more than any other factor, we tabulated the parameter B, where R = BA⁻¹/². An empirical analytical formula was found that fit the tabulated values of B with an rms deviation of 2.2% of the value of B. The theoretical lower bound of the timing precision was calculated for the example of 0.5 ns rise time, 0.1 ns photon dispersion, and 0.2 ns fwhm photodetector time jitter. The lower bound was at most 15% lower than leading-edge timing discrimination for A from 10 to 10 000 photoelectrons ns⁻¹. A timing precision of 8 ps fwhm should be possible for an energy deposition of 511 keV using currently available photodetectors if a theoretically possible scintillator were developed that could produce 10 000 photoelectrons ns⁻¹.
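The dominant A⁻¹/² dependence on photoelectron statistics can be reproduced with a very small Monte Carlo of the same flavor. The sketch below triggers on the k-th earliest photoelectron rather than a full pulse-shape leading-edge discriminator, and models the emission-time profile as a simple sum of rise and decay exponentials; all scintillator parameters are generic placeholders, not the paper's configurations.

    import math
    import random

    def timing_fwhm(n_pe, tau_rise, tau_decay, jitter_fwhm, k=3, trials=2000,
                    rng=random.Random(0)):
        """Toy timing-precision MC: photoelectron emission times drawn from a
        rise+decay exponential surrogate profile, Gaussian photodetector transit
        jitter added, trigger on the k-th earliest photoelectron.
        Returns the FWHM of the trigger-time distribution (same units as tau)."""
        sigma_j = jitter_fwhm / 2.3548
        triggers = []
        for _ in range(trials):
            times = sorted(
                rng.expovariate(1.0 / tau_decay) + rng.expovariate(1.0 / tau_rise)
                + rng.gauss(0.0, sigma_j)
                for _ in range(n_pe)
            )
            triggers.append(times[k - 1])
        mean = sum(triggers) / trials
        var = sum((x - mean) ** 2 for x in triggers) / (trials - 1)
        return 2.3548 * math.sqrt(var)

    # usage: times in ns; halving n_pe should inflate the FWHM by ~sqrt(2)
    print("FWHM (ns):", timing_fwhm(n_pe=4000, tau_rise=0.1, tau_decay=40.0,
                                    jitter_fwhm=0.2))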
A Variational Monte Carlo Approach to Atomic Structure
ERIC Educational Resources Information Center
Davis, Stephen L.
2007-01-01
The practicality and usefulness of variational Monte Carlo calculations of atomic structure are demonstrated. The approach is found to succeed in quantitatively illustrating electron shielding, effective nuclear charge, the l-dependence of the orbital energies, singlet-triplet energy splitting, and ionization energy trends in atomic structure theory.
MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
A Monte Carlo model for 3D grain evolution during welding
NASA Astrophysics Data System (ADS)
Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena
2017-09-01
Welding is one of the most widespread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. The model also allows simulation of pulsed power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
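The Potts ingredient of such a model is easy to state in stripped-down form: grain growth proceeds by Metropolis flips of lattice-site spins (grain labels), with energy counted as the number of unlike nearest-neighbor pairs. The sketch below omits melting, the Bézier pool geometry, and the moving temperature field; lattice size, label count, and temperature are illustrative.

    import math
    import random

    rng = random.Random(0)
    N, Q, KT, STEPS = 64, 50, 0.6, 200_000  # lattice size, grain labels, temperature

    spin = [[rng.randrange(Q) for _ in range(N)] for _ in range(N)]
    NBRS = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def unlike_neighbors(i, j, s):
        """Energy contribution of site (i, j) if it held label s:
        one unit per unlike nearest neighbor (periodic boundaries)."""
        return sum(s != spin[(i + di) % N][(j + dj) % N] for di, dj in NBRS)

    for _ in range(STEPS):
        i, j = rng.randrange(N), rng.randrange(N)
        # propose adopting a random neighbor's label, as in grain-growth Potts MC
        di, dj = rng.choice(NBRS)
        new = spin[(i + di) % N][(j + dj) % N]
        dE = unlike_neighbors(i, j, new) - unlike_neighbors(i, j, spin[i][j])
        if dE <= 0 or rng.random() < math.exp(-dE / KT):
            spin[i][j] = new  # Metropolis accept: grains coarsen over time

    print("distinct grains left:", len({s for row in spin for s in row}))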
A Monte-Carlo maplet for the study of the optical properties of biological tissues
NASA Astrophysics Data System (ADS)
Yip, Man Ho; Carvalho, M. J.
2007-12-01
Monte-Carlo simulations are commonly used to study complex physical processes in various fields of physics. In this paper we present a Maple program intended for Monte-Carlo simulations of photon transport in biological tissues. The program has been designed so that the input data and output display can be handled by a maplet (an easy and user-friendly graphical interface), named the MonteCarloMaplet. A thorough explanation of the programming steps and how to use the maplet is given. Results obtained with the Maple program are compared with corresponding results available in the literature. Program summary: Program title: MonteCarloMaplet. Catalogue identifier: ADZU_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZU_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 3251. No. of bytes in distributed program, including test data, etc.: 296 465. Distribution format: tar.gz. Programming language: Maple 10. Computer: Acer Aspire 5610 (any running Maple 10). Operating system: Windows XP professional (any running Maple 10). Classification: 3.1, 5. Nature of problem: Simulate the transport of radiation in biological tissues. Solution method: The Maple program follows the steps of the C program of L. Wang et al. [L. Wang, S.L. Jacques, L. Zheng, Computer Methods and Programs in Biomedicine 47 (1995) 131-146]; the Maple library routine for random number generation is used [Maple 10 User Manual, © Maplesoft, a division of Waterloo Maple Inc., 2005]. Restrictions: Running time increases rapidly with the number of photons used in the simulation. Unusual features: A maplet (graphical user interface) has been programmed for data input and output. Note that the Monte-Carlo simulation was programmed with Maple 10; if attempting to run it with an earlier version of Maple, appropriate modifications (regarding typesetting fonts) are required, after which the worksheet runs without problem, although some windows of the maplet may still appear distorted. Running time: Depends essentially on the number of photons used in the simulation. Elapsed times for particular runs are reported in the main text.
Distribution in energies and acceleration times in DSA, and their effect on the cut-off
NASA Astrophysics Data System (ADS)
Brooks, A.; Protheroe, R. J.
2001-08-01
We have conducted Monte Carlo simulations of diffusive shock acceleration (DSA) to determine the distribution of times since injection taken to reach energy E > E₀. This distribution of acceleration times for the case of momentum-dependent diffusion is compared with that given by Drury and Forman (1983) based on extrapolation of the exact result (Toptygin 1980) for the case of the diffusion coefficient being independent of momentum. As a result of this distribution we find, as suggested by Drury et al. (1999), that Monte Carlo simulations result in smoother cut-offs and pile-ups in the spectra of accelerated particles than expected from simple "box model" treatments of shock acceleration (e.g., Protheroe and Stanev 1999, Drury et al. 1999). This is particularly so for the case of synchrotron pile-ups, which we find are replaced by a small bump at an energy about a factor of 2 below the expected cut-off, followed by a smooth cut-off with particles extending to energies well beyond the expected cut-off energy.
Zone clearance in an infinite TASEP with a step initial condition
NASA Astrophysics Data System (ADS)
Cividini, Julien; Appert-Rolland, Cécile
2017-06-01
The TASEP is a paradigmatic model of out-of-equilibrium statistical physics, for which many quantities have been computed, either exactly or by approximate methods. In this work we study two new kinds of observables that have some relevance in biological or traffic models. They represent the probability for a given clearance zone of the lattice to be empty (for the first time) at a given time, starting from a step density profile. Exact expressions are obtained for single-time quantities, while more involved history-dependent observables are studied by Monte Carlo simulation, and partially predicted by a phenomenological approach.
Nonequilibrium critical dynamics of the two-dimensional Ashkin-Teller model at the Baxter line
NASA Astrophysics Data System (ADS)
Fernandes, H. A.; da Silva, R.; Caparica, A. A.; de Felício, J. R. Drugowich
2017-04-01
We investigate the short-time universal behavior of the two-dimensional Ashkin-Teller model at the Baxter line by performing time-dependent Monte Carlo simulations. First, as preparatory results, we obtain the critical parameters by searching for the optimal power-law decay of the magnetization. Thus, the dynamic critical exponents θ_m and θ_p, related to the magnetic and electric order parameters, as well as the persistence exponent θ_g, are estimated using heat-bath Monte Carlo simulations. In addition, we estimate the dynamic exponent z and the static critical exponents β and ν for both order parameters. We propose a refined method to estimate the static exponents that considers two different averages: one that combines an internal average using several seeds with another taken over temporal variations in the power laws. Moreover, we also performed the bootstrapping method for a complementary analysis. Our results show that the ratio β/ν exhibits universal behavior along the critical line, corroborating the conjecture for both magnetization and polarization.
Electromagnetic and neutral-weak response functions of light nuclei
NASA Astrophysics Data System (ADS)
Lovato, Alessandro
2015-10-01
A major goal of nuclear theory is to understand the strong interaction in nuclei as it manifests itself in terms of two- and many-body forces among the nuclear constituents, the protons and neutrons, and the interactions of these constituents with external electroweak probes via one- and many-body currents. Using the imaginary-time projection technique, quantum Monte Carlo allows for solving the time-independent Schrödinger equation even for Hamiltonians including highly spin-isospin dependent two- and three-body forces. I will present a recent Green's function Monte Carlo calculation of the quasi-elastic electroweak response functions in light nuclei, needed to describe electron and neutrino scattering. We found that meson-exchange two-body currents generate excess transverse strength from threshold to the quasi-elastic peak, to the dip region and beyond. These results challenge the conventional picture of quasi-elastic inclusive scattering as being largely dominated by single-nucleon knockout processes. These findings are of particular interest for the interpretation of neutrino oscillation signals.
NASA Astrophysics Data System (ADS)
Kamibayashi, Yuki; Miura, Shinichi
2016-08-01
In the present study, variational path integral molecular dynamics and associated hybrid Monte Carlo (HMC) methods have been developed on the basis of a fourth-order approximation of a density operator. To reveal the dependence of physical quantities on the various parameters, we analytically solve one-dimensional harmonic oscillators by the variational path integral; as a byproduct, we obtain the analytical expression of the discretized density matrix using the fourth-order approximation for the oscillators. We then apply our methods to realistic systems such as a water molecule and a para-hydrogen cluster. In the HMC, we adopt a two-level description to avoid the time-consuming Hessian evaluation. For the systems examined in this paper, the HMC method is found to be about three times more efficient than the molecular dynamics method if appropriate HMC parameters are adopted; the advantage of the HMC method is suggested to be more evident for systems described by many-body interactions.
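As background for the method comparison, a generic single-level HMC update (leapfrog proposal plus Metropolis accept/reject) is sketched below for a one-dimensional harmonic oscillator; the paper's two-level scheme and fourth-order path-integral action are not reproduced here, and the step size and trajectory length are arbitrary.

    import math
    import random

    rng = random.Random(0)

    def grad_u(q):          # potential U(q) = q^2 / 2 (harmonic oscillator)
        return q

    def hmc_step(q, eps=0.2, n_leap=10):
        """One generic HMC update: sample a momentum, integrate Hamilton's
        equations with leapfrog, accept with the Metropolis probability."""
        p = rng.gauss(0.0, 1.0)
        q_new, p_new = q, p
        p_new -= 0.5 * eps * grad_u(q_new)            # initial half kick
        for _ in range(n_leap - 1):
            q_new += eps * p_new                      # drift
            p_new -= eps * grad_u(q_new)              # full kick
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_u(q_new)            # final half kick
        h_old = 0.5 * p * p + 0.5 * q * q
        h_new = 0.5 * p_new * p_new + 0.5 * q_new * q_new
        return q_new if rng.random() < math.exp(min(0.0, h_old - h_new)) else q

    # usage: <q^2> should converge to 1 for this potential at unit temperature
    q, samples = 0.0, []
    for _ in range(20_000):
        q = hmc_step(q)
        samples.append(q)
    print("<q^2> ~", sum(s * s for s in samples) / len(samples))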
NASA Technical Reports Server (NTRS)
Rosenfeld, D.; Alterovitz, S. A.
1994-01-01
A theoretical study of the effects of strain on the base properties of ungraded and compositionally graded n-p-n SiGe Heterojunction Bipolar Transistors (HBTs) is presented. The dependencies of the transverse hole mobility and longitudinal electron mobility upon strain, composition and doping are formulated using published Monte Carlo data and, consequently, the base resistance and transit time are modeled and calculated. The results are compared to results obtained using common formulas that ignore these dependencies, and the differences between the two sets of results are shown. The paper concludes that for the design, analysis and optimization of high-frequency SiGe HBTs, the strain effects on the base properties cannot be ignored.
Time dependent variation of carrying capacity of prestressed precast beam
NASA Astrophysics Data System (ADS)
Le, Tuan D.; Konečný, Petr; Matečková, Pavlína
2018-04-01
The article deals with the evaluation of the time-dependent carrying capacity of a precast concrete element. Variation of the resistance is an inherent property of laboratory as well as in-situ members. Specifying the highest plausible laboratory sample resistance is therefore important when evaluating laboratory experiments against the loading capacity of the test machine. The ultimate capacity is evaluated through the bending moment resistance of a simply supported prestressed concrete beam. A probabilistic assessment is applied, considering the scatter of the compressive strength of concrete and of the effective height of the cross section as random variables. The Monte Carlo simulation technique is used to investigate the performance of the beam cross section as the tendon positions and the concrete compressive strength vary.
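The kind of probabilistic assessment described here can be sketched with a simplified rectangular-stress-block resistance model and two random variables. All section dimensions, material values, and distributions below are illustrative assumptions, not the paper's data.

    import random

    rng = random.Random(0)
    N = 100_000
    A_P, F_P, B = 1200e-6, 1450e6, 0.4    # tendon area (m^2), stress (Pa), width (m) -- assumed

    def moment_resistance(f_c, d):
        """Simplified bending resistance with a rectangular stress block:
        M_R = A_p f_p (d - a/2),  a = A_p f_p / (0.85 f_c b)."""
        a = A_P * F_P / (0.85 * f_c * B)
        return A_P * F_P * (d - 0.5 * a)

    # sample compressive strength and effective depth as random variables
    samples = sorted(
        moment_resistance(f_c=rng.gauss(45e6, 4e6),   # concrete strength (Pa)
                          d=rng.gauss(0.55, 0.01))    # effective depth (m)
        for _ in range(N)
    )
    mean = sum(samples) / N
    print("mean M_R [kNm]:", mean / 1e3)
    print("5% / 95% quantiles [kNm]:", samples[int(0.05 * N)] / 1e3,
          samples[int(0.95 * N)] / 1e3)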
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Georgios, E-mail: garab@math.uoc.gr; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003; Katsoulakis, Markos A., E-mail: markos@math.umass.edu
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated (“coupled”) stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
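The baseline the paper improves upon, finite-difference sensitivity with Common Random Numbers, is easy to illustrate on a toy birth-death chain; the goal-oriented coupling itself (rates obtained from a variance-minimization problem) is beyond a few lines. Rates, horizon, and sample counts below are arbitrary.

    import math
    import random

    def birth_death(theta, mu=1.0, t_end=5.0, seed=0):
        """Gillespie simulation of X: birth rate theta, death rate mu*X.
        Returns the population at t_end; the seed fixes the random stream."""
        rng = random.Random(seed)
        x, t = 0, 0.0
        while True:
            total = theta + mu * x
            t += -math.log(rng.random()) / total
            if t > t_end:
                return x
            x += 1 if rng.random() * total < theta else -1

    def sensitivity(theta, h=0.05, n=2000, common=True):
        """d E[X(t_end)] / d theta by central finite differences; with
        common=True both runs share a seed (CRN), typically reducing variance."""
        est = []
        for i in range(n):
            s_plus = i
            s_minus = i if common else n + i   # independent streams otherwise
            est.append((birth_death(theta + h, seed=s_plus)
                        - birth_death(theta - h, seed=s_minus)) / (2 * h))
        m = sum(est) / n
        v = sum((e - m) ** 2 for e in est) / (n - 1)
        return m, v

    # usage: exact answer is (1 - exp(-mu*t_end)) / mu ~ 0.993 here
    print("CRN        (mean, var):", sensitivity(2.0, common=True))
    print("independent (mean, var):", sensitivity(2.0, common=False))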
NASA Astrophysics Data System (ADS)
Jeffery, David J.; Mazzali, Paolo A.
2007-08-01
Giant steps is a technique to accelerate Monte Carlo radiative transfer in optically-thick cells (which are isotropic and homogeneous in matter properties and into which astrophysical atmospheres are divided) by greatly reducing the number of Monte Carlo steps needed to propagate photon packets through such cells. In an optically-thick cell, packets starting from any point (which can be regarded a point source) well away from the cell wall act essentially as packets diffusing from the point source in an infinite, isotropic, homogeneous atmosphere. One can replace many ordinary Monte Carlo steps that a packet diffusing from the point source takes by a randomly directed giant step whose length is slightly less than the distance to the nearest cell wall point from the point source. The giant step is assigned a time duration equal to the time for the RMS radius for a burst of packets diffusing from the point source to have reached the giant step length. We call assigning giant-step time durations this way RMS-radius (RMSR) synchronization. Propagating packets by series of giant steps in giant-steps random walks in the interiors of optically-thick cells constitutes the technique of giant steps. Giant steps effectively replaces the exact diffusion treatment of ordinary Monte Carlo radiative transfer in optically-thick cells by an approximate diffusion treatment. In this paper, we describe the basic idea of giant steps and report demonstration giant-steps flux calculations for the grey atmosphere. Speed-up factors of order 100 are obtained relative to ordinary Monte Carlo radiative transfer. In practical applications, speed-up factors of order ten and perhaps more are possible. The speed-up factor is likely to be significantly application-dependent and there is a trade-off between speed-up and accuracy. This paper and past work suggest that giant-steps error can probably be kept to a few percent by using sufficiently large boundary-layer optical depths while still maintaining large speed-up factors. Thus, giant steps can be characterized as a moderate accuracy radiative transfer technique. For many applications, the loss of some accuracy may be a tolerable price to pay for the speed-ups gained by using giant steps.
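One plausible reading of the prescription is sketched below (our paraphrase, not the authors' code): taking the photon diffusion coefficient as D = c l/3, with l the mean free path, the RMS radius ⟨r²⟩ = 6Dt of a diffusing burst reaches the giant-step length L at t = L²/(2 c l), which is used as the step's time duration.

    import math
    import random

    C = 2.998e10  # cm/s, speed of light

    def giant_step(pos, wall_dist, mfp, rng, safety=0.99):
        """Replace many diffusion steps by one randomly directed giant step of
        length slightly below the distance to the nearest cell wall.
        Duration from RMSR synchronization: <r^2> = 6 D t with D = c*mfp/3,
        so t = L^2 / (2 c mfp).  (Illustrative paraphrase of the scheme.)"""
        L = safety * wall_dist
        # isotropic random direction
        mu = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        s = math.sqrt(1.0 - mu * mu)
        direction = (s * math.cos(phi), s * math.sin(phi), mu)
        new_pos = tuple(x + L * d for x, d in zip(pos, direction))
        dt = L * L / (2.0 * C * mfp)
        return new_pos, dt

    # usage: one giant step deep inside an optically thick cell (cm units assumed)
    rng = random.Random(0)
    print(giant_step((0.0, 0.0, 0.0), wall_dist=1.0e14, mfp=1.0e10, rng=rng))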
A new concept of pencil beam dose calculation for 40-200 keV photons using analytical dose kernels.
Bartzsch, Stefan; Oelfke, Uwe
2013-11-01
The advent of widespread kV cone-beam computed tomography in image-guided radiation therapy and special therapeutic applications of keV photons, e.g., in microbeam radiation therapy (MRT), require accurate and fast dose calculations for photon beams with energies between 40 and 200 keV. Multiple photon scattering originating from Compton scattering and the strong dependence of the photoelectric cross section on the atomic number of the interacting tissue render these dose calculations by far more challenging than the ones established for corresponding MeV beams. That is why the analytical models of kV photon dose calculation developed so far fail to provide the required accuracy, and one has to rely on time-consuming Monte Carlo simulation techniques. In this paper, the authors introduce a novel analytical approach for kV photon dose calculations with an accuracy that is almost comparable to that of Monte Carlo simulations. First, analytical point dose and pencil beam kernels are derived for homogeneous media and compared to Monte Carlo simulations performed with the Geant4 toolkit. The dose contributions are systematically separated into contributions from the relevant orders of multiple photon scattering. Moreover, approximate scaling laws for the extension of the algorithm to inhomogeneous media are derived. The comparison of the analytically derived dose kernels in water showed excellent agreement with the Monte Carlo method. Calculated values deviate by less than 5% from Monte Carlo derived dose values, for doses above 1% of the maximum dose. The analytical structure of the kernels allows adaptation to arbitrary materials and photon spectra in the given energy range of 40-200 keV. The presented analytical methods can be employed in a fast treatment planning system for MRT. In convolution-based algorithms dose calculation times can be reduced to a few minutes.
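In outline, such a pencil-beam algorithm amounts to the usual superposition of the incident fluence with the derived kernels, split by scattering order as described above (schematic form in our notation, not the authors' exact expressions):

    D(x, y, z) = ∫∫ Ψ(x′, y′) K(x − x′, y − y′; z) dx′ dy′,    K = K_primary + Σ_n K_n ,

where Ψ is the incident photon fluence, K the pencil-beam kernel, and K_n the contribution of n-fold scattered photons. The convolution structure is what makes the analytical approach fast relative to full Monte Carlo transport.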
Effect of the scattering delay on time-dependent photon migration in turbid media.
Yaroslavsky, I V; Yaroslavsky, A N; Tuchin, V V; Schwarzmaier, H J
1997-09-01
We modified the diffusion approximation of the time-dependent radiative transfer equation to account for a finite scattering delay time. Under the usual assumptions of the diffusion approximation, the effect of the scattering delay leads to a simple renormalization of the light velocity that appears in the diffusion equation. Accuracy of the model was evaluated by comparison with Monte Carlo simulations in the frequency domain for a semi-infinite geometry. A good agreement is demonstrated for both matched and mismatched boundary conditions when the distance from the source is sufficiently large. The modified diffusion model predicts that the neglect of the scattering delay when the optical properties of the turbid material are derived from normalized frequency- or time-domain measurements should result in an underestimation of the absorption coefficient and an overestimation of the transport coefficient. These observations are consistent with the published experimental data.
PEPSI — a Monte Carlo generator for polarized leptoproduction
NASA Astrophysics Data System (ADS)
Mankiewicz, L.; Schäfer, A.; Veltri, M.
1992-09-01
We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross section at first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.
NASA Astrophysics Data System (ADS)
Griffin, Patrick; Rochman, Dimitri; Koning, Arjan
2017-09-01
A rigorous treatment of the uncertainty in the underlying nuclear data on silicon displacement damage metrics is presented. The uncertainties in the cross sections and recoil atom spectra are propagated into the energy-dependent uncertainty contribution to the silicon displacement kerma and damage energy using a Total Monte Carlo treatment. An energy-dependent covariance matrix is used to characterize the resulting uncertainty. A strong correlation between different reaction channels is observed in the high-energy neutron contributions to the displacement damage metrics, which supports the necessity of using a Monte Carlo based method to address the nonlinear nature of the uncertainty propagation.
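A minimal sketch of the Total Monte Carlo workflow described above: each random draw of the nuclear data yields one kerma curve, and the ensemble gives the energy-dependent covariance matrix. Both `sample_cross_sections` and `displacement_kerma` are synthetic stand-ins for the actual nuclear-data sampling and damage-metric evaluation.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_groups = 500, 64           # random data files, energy groups

def sample_cross_sections():
    """Stand-in for drawing one random nuclear-data file."""
    base = np.linspace(1.0, 3.0, n_groups)
    return base * (1.0 + 0.05 * rng.standard_normal(n_groups))

def displacement_kerma(xs):
    """Stand-in for folding cross sections and recoil spectra into kerma."""
    return np.cumsum(xs) / n_groups

# Total Monte Carlo: propagate data uncertainty sample by sample.
kermas = np.array([displacement_kerma(sample_cross_sections())
                   for _ in range(n_samples)])
mean = kermas.mean(axis=0)
cov = np.cov(kermas, rowvar=False)      # energy-dependent covariance matrix
std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)         # correlations between energy groups
```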
NASA Astrophysics Data System (ADS)
Weinketz, Sieghard
1998-07-01
The reordering kinetics of a diffusion lattice-gas system of adsorbates with nearest- and next-nearest-neighbor interactions on a square lattice is studied within a dynamic Monte Carlo simulation, as it evolves towards equilibrium from a given initial configuration at a constant temperature. The diffusion kinetics proceeds through adsorbate hops to empty nearest-neighboring sites (Kawasaki dynamics). The Monte Carlo procedure allows a "real" time definition from the local transition rates, and the configurational entropy and internal energy can be obtained from the lattice configuration at any instant t by counting the local clusters and using the C2 approximation of the cluster variation method. These state functions are then used in their nonequilibrium form as a direct measure of reordering over time. Different reordering processes are analyzed within this approach, presenting a rich variety of behaviors. It can also be shown that the time derivative of entropy (times temperature) is always equal to or lower than the time derivative of energy, and that the reordering path is always strongly dependent on the initial order, presenting in some cases an "invariance" of the entropy function to the magnitude of the interactions as far as the final order is unaltered.
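The "real" time definition from local transition rates can be sketched as a rejection-free (n-fold-way style) step: enumerate the allowed Kawasaki hops, select one with probability proportional to its rate, and advance the clock by an exponential waiting time. The Arrhenius rate with barrier max(ΔE, 0) and all parameter values below are placeholders, not the rate catalogue of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, kT, nu0 = 32, 0.5, 1.0e13             # lattice size, temperature, attempt freq.
J1, J2 = -0.1, 0.05                      # NN / NNN interaction energies (assumed)
occ = rng.random((L, L)) < 0.3           # initial adsorbate configuration

NN  = [(1, 0), (-1, 0), (0, 1), (0, -1)]
NNN = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def site_energy(c, i, j):
    e = sum(J1 * c[(i + di) % L, (j + dj) % L] for di, dj in NN)
    e += sum(J2 * c[(i + di) % L, (j + dj) % L] for di, dj in NNN)
    return e

def kmc_step(c, t):
    """One rejection-free Kawasaki hop; returns the updated 'real' time."""
    moves, rates = [], []
    for i, j in zip(*np.nonzero(c)):
        for di, dj in NN:
            ni, nj = (i + di) % L, (j + dj) % L
            if not c[ni, nj]:            # hop target must be empty
                # crude barrier model max(dE, 0); a stand-in for local rates
                dE = site_energy(c, ni, nj) - site_energy(c, i, j)
                moves.append((i, j, ni, nj))
                rates.append(nu0 * np.exp(-max(dE, 0.0) / kT))
    R = float(np.sum(rates))
    k = rng.choice(len(moves), p=np.asarray(rates) / R)
    i, j, ni, nj = moves[k]
    c[i, j], c[ni, nj] = False, True
    return t - np.log(rng.random()) / R  # Poisson clock: dt = -ln(u)/R
```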
Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.
1981-01-01
A molecular dynamics technique based upon Lennard-Jones type pair interactions is used to investigate time-dependent as well as equilibrium properties. The case study deals with systems containing Si and O atoms. In this case a more involved potential energy function (PEF) is employed and the system is simulated via a Monte-Carlo procedure. This furnishes the equilibrium properties of the system at its interfaces and surfaces as well as in the bulk.
NASA Technical Reports Server (NTRS)
Ponomarev, Artem L.; George, K.; Cucinotta, F. A.
2011-01-01
New experimental data show how chromosomal aberrations for low- and high-LET radiation are dependent on DSB repair deficiencies in wild-type, AT and NBS cells. We simulated the development of chromosomal aberrations in these cell lines in a stochastic track-structure-dependent model, in which different cells have different kinetics of DSB repair. We updated a previously formulated model of chromosomal aberrations, which was based on a stochastic Monte Carlo approach, to consider the time dependence of DSB rejoining. The previous version of the model assumed that all DSBs would rejoin, and therefore we called it a time-independent model. The chromosomal-aberrations model takes into account the DNA and track structure for low- and high-LET radiations, and provides an explanation and prediction of the statistics of rare and more complex aberrations. We compared the program-simulated kinetics of DSB rejoining to the experimentally derived bimodal exponential curves of the DSB kinetics. We scored the formation of translocations, dicentrics, acentric and centric rings, deletions, and inversions. The fraction of DSBs participating in aberrations was studied in relation to the rejoining time. Comparisons of the simulated dose dependence for simple aberrations to the experimental dose dependence for HF19, AT and NBS cells will be made.
Full-orbit and backward Monte Carlo simulation of runaway electrons
NASA Astrophysics Data System (ADS)
Del-Castillo-Negrete, Diego
2017-10-01
High-energy relativistic runaway electrons (RE) can be produced during magnetic disruptions due to electric fields generated during the thermal and current quench of the plasma. Understanding this problem is key for the safe operation of ITER because, if not avoided or mitigated, RE can severely damage the plasma-facing components. In this presentation we report on RE simulation efforts centered on two complementary approaches: (i) full-orbit (6-D phase space) relativistic numerical simulations in general (integrable or chaotic) 3-D magnetic and electric fields, including radiation damping and collisions, using the recently developed particle-based Kinetic Orbit Runaway electron Code (KORC), and (ii) backward Monte Carlo (MC) simulations based on a recently developed efficient backward stochastic differential equation (BSDE) solver. Following a description of the corresponding numerical methods, we present applications to: (i) RE synchrotron radiation (SR) emission using KORC and (ii) computation of time-dependent runaway probability distributions, RE production rates, and expected slowing-down and runaway times using BSDE. We study the dependence of these statistical observables on the electric and magnetic fields and the ion effective charge. SR is a key energy dissipation mechanism in the high-energy regime, and it is also extensively used as an experimental diagnostic of RE. Using KORC we study full-orbit effects on SR and discuss a recently developed SR synthetic diagnostic that incorporates the full angular dependence of SR, and the location and basic optics of the camera. It is shown that oversimplifying the angular dependence of SR and/or ignoring orbit effects can significantly modify the shape and overestimate the amplitude of the spectra. Applications to DIII-D RE experiments are discussed.
Monte Carlo simulation of the full energy peak efficiency of an HPGe detector.
Khan, Waseem; Zhang, Qingmin; He, Chaohui; Saleh, Muhammad
2018-01-01
This paper presents a Monte Carlo method to obtain the full energy peak efficiency (FEPE) curve for a High Purity Germanium (HPGe) detector, as it is difficult and time-consuming to measure the FEPE curve experimentally. The Geant4 simulation toolkit was adopted to establish a detector model, since the nominal detector specifications provided by the manufacturer are usually insufficient to calculate the accurate efficiency of a detector. Several detector parameters were optimized. FEPE curves for the HPGe detector over the energy range of 59.5-1836 keV were obtained and showed good agreement with those measured experimentally. The dependence of the FEPE on detector parameters and source-detector distances was investigated. The best agreement with experimental results was achieved for a certain detector geometry and source-detector distance.
Derian, R; Tokár, K; Somogyi, B; Gali, Á; Štich, I
2017-12-12
We present a time-dependent density functional theory (TDDFT) study of the optical gaps of light-emitting nanomaterials, namely, pristine and heavily B- and P-codoped silicon crystalline nanoparticles. Twenty DFT exchange-correlation functionals sampled from the best currently available inventory such as hybrids and range-separated hybrids are benchmarked against ultra-accurate quantum Monte Carlo results on small model Si nanocrystals. Overall, the range-separated hybrids are found to perform best. The quality of the DFT gaps is correlated with the deviation from Koopmans' theorem as a possible quality guide. In addition to providing a generic test of the ability of TDDFT to describe optical properties of silicon crystalline nanoparticles, the results also open up a route to benchmark-quality DFT studies of nanoparticle sizes approaching those studied experimentally.
Improved cache performance in Monte Carlo transport calculations using energy banding
NASA Astrophysics Data System (ADS)
Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.
2014-04-01
We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
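A schematic of the banding idea, assuming a single large cross-section table indexed by an energy grid: a batch of lookups is grouped by energy band and processed one band at a time, so each pass touches only the table slice for that band and the data stay cache-resident. The names and the toy cross section are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def banded_lookup(energies, xs_table, e_grid, n_bands=8):
    """Group lookups by energy band; each pass reads only one table slice."""
    edges = np.linspace(e_grid[0], e_grid[-1], n_bands + 1)
    band = np.clip(np.searchsorted(edges, energies) - 1, 0, n_bands - 1)
    out = np.empty_like(energies)
    for b in range(n_bands):
        sel = np.nonzero(band == b)[0]
        if sel.size == 0:
            continue
        lo = np.searchsorted(e_grid, edges[b])
        hi = min(np.searchsorted(e_grid, edges[b + 1]) + 1, len(e_grid))
        slab = xs_table[lo:hi]                    # cache-resident slice
        idx = np.clip(np.searchsorted(e_grid[lo:hi], energies[sel]) - 1,
                      0, slab.size - 1)
        out[sel] = slab[idx]                      # lookups stay in the band
    return out

e_grid = np.linspace(1e-5, 2e7, 500_000)          # eV
xs_table = 1.0 / np.sqrt(e_grid + 1.0)            # toy cross section
energies = rng.uniform(1e-5, 2e7, 1_000_000)
xs = banded_lookup(energies, xs_table, e_grid)
```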
Monte-Carlo simulation of defect-cluster nucleation in metals during irradiation
NASA Astrophysics Data System (ADS)
Nakasuji, Toshiki; Morishita, Kazunori; Ruan, Xiaoyong
2017-02-01
A multiscale modeling approach was applied to investigate the nucleation process of CRPs (copper-rich precipitates, i.e., copper-vacancy clusters) in α-Fe containing 1 at.% Cu during irradiation. Monte Carlo simulations were performed to investigate the nucleation process, together with rate-theory analysis to evaluate the concentration of displacement defects and molecular dynamics to determine CRP thermal stabilities in advance. Our MC simulations showed that there is a long incubation period at first, followed by rapid growth of CRPs. The incubation period depends on irradiation conditions such as the damage rate and temperature. The CRP composition varies with time during nucleation: the copper content of CRPs is relatively high at first and then decreases as the precipitate size increases. A widely-accepted model of the CRP nucleation process is finally proposed.
Time-varying nonstationary multivariate risk analysis using a dynamic Bayesian copula
NASA Astrophysics Data System (ADS)
Sarhadi, Ali; Burn, Donald H.; Concepción Ausín, María; Wiper, Michael P.
2016-03-01
A time-varying risk analysis is proposed for an adaptive design framework in nonstationary conditions arising from climate change. A Bayesian, dynamic conditional copula is developed for modeling the time-varying dependence structure between mixed continuous and discrete multiattributes of multidimensional hydrometeorological phenomena. Joint Bayesian inference is carried out to fit the marginals and copula in an illustrative example using an adaptive, Gibbs Markov Chain Monte Carlo (MCMC) sampler. Posterior mean estimates and credible intervals are provided for the model parameters, and the Deviance Information Criterion (DIC) is used to select the model that best captures different forms of nonstationarity over time. This study also introduces a fully Bayesian, time-varying joint return period for multivariate time-dependent risk analysis in nonstationary environments. The results demonstrate that the nature and the risk of extreme-climate multidimensional processes change over time under the impact of climate change, and accordingly long-term decision-making strategies should be updated based on the anomalies of the nonstationary environment.
Mobit, P N; Nahum, A E; Mayles, P
1998-08-01
A Monte Carlo simulation of the quality dependence of different TL materials, in the form of discs 3.61 mm in diameter and 0.9 mm thick, in radiotherapy photon beams relative to 60Co gamma-rays has been performed. The beam qualities ranged from 50 kV to 25 MV x-rays. The TL materials were: CaF2, CaSO4, LiF and Li2B4O7. The effects of the dopants on energy deposition in the TL material have also been determined for the highly sensitive LiF:Mg:Cu:P (TLD-100H) and for CaF2:Mn. It was found that there was a significant difference in the quality dependence factor derived from Monte Carlo simulations between LiF and LiF:Mg:Cu:P but not between CaF2 and CaF2:Mn. The quality dependence factors for Li2B4O7 varied from 0.990 +/- 0.008 (1 sd) for 25 MV x-rays to 0.940 +/- 0.009 (1 sd) for 50 kV x-rays relative to 60Co gamma-rays; Monte Carlo simulations were also performed for Li2B4O7 in megavoltage electron beams. For CaF2, the quality dependence factor varied from 0.927 +/- 0.008 (1 sd) for 25 MV x-rays to 10.561 +/- 0.008 (1 sd) for 50 kV x-rays. The figure for CaSO4 ranged from 0.943 +/- 0.008 (1 sd) for 25 MV x-rays to 9.010 +/- 0.008 (1 sd) for 50 kV x-rays. The quality dependence factor for CaF2 increases by up to 5% with depth and by up to 15% with field size for the kilovoltage x-ray beams. For LiF-TLD, however, there was no significant dependence on the field size or depth of irradiation in the kilovoltage energy range.
Thorneywork, Alice L; Rozas, Roberto E; Dullens, Roel P A; Horbach, Jürgen
2015-12-31
We compare experimental results from a quasi-two-dimensional colloidal hard sphere fluid to a Monte Carlo simulation of hard disks with small particle displacements. The experimental short-time self-diffusion coefficient D(S) scaled by the diffusion coefficient at infinite dilution, D(0), strongly depends on the area fraction, pointing to significant hydrodynamic interactions at short times in the experiment, which are absent in the simulation. In contrast, the area fraction dependence of the experimental long-time self-diffusion coefficient D(L)/D(0) is in quantitative agreement with D(L)/D(0) obtained from the simulation. This indicates that the reduction in the particle mobility at short times due to hydrodynamic interactions does not lead to a proportional reduction in the long-time self-diffusion coefficient. Furthermore, the quantitative agreement between experiment and simulation at long times indicates that hydrodynamic interactions effectively do not affect the dependence of D(L)/D(0) on the area fraction. In light of this, we discuss the link between structure and long-time self-diffusion in terms of a configurational excess entropy and do not find a simple exponential relation between these quantities for all fluid area fractions.
NASA Astrophysics Data System (ADS)
Miyajima, Shigeyuki; Shishido, Hiroaki; Narukami, Yoshito; Yoshioka, Naohito; Fujimaki, Akira; Hidaka, Mutsuo; Oikawa, Kenichi; Harada, Masahide; Oku, Takayuki; Arai, Masatoshi; Ishida, Takekazu
2017-01-01
We successfully derived the time-dependent flux of pulsed neutrons using a superconducting Nb-based current-biased kinetic inductance detector (CB-KID) with a 10B conversion layer at the Japan Proton Accelerator Research Complex. Our CB-KID is a meander line made of a 40-nm-thick Nb thin film with a 1-μm line width, which is covered with a 150-nm-thick 10B conversion layer. The detector works at temperatures below 4 K. The evaluated detection efficiency of the CB-KID in this experiment is 0.23% at a neutron energy of 25.4 meV. The time-dependent flux spectra of pulsed neutrons thus obtained are in good agreement with the results obtained by Monte Carlo simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schröder, Markus, E-mail: Markus.Schroeder@pci.uni-heidelberg.de; Meyer, Hans-Dieter, E-mail: Hans-Dieter.Meyer@pci.uni-heidelberg.de
2014-07-21
We report energies and tunneling splittings of vibrational excited states of malonaldehyde which have been obtained using full dimensional quantum mechanical calculations. To this end we employed the multi configuration time-dependent Hartree method. The results have been obtained using a recently published potential energy surface [Y. Wang, B. J. Braams, J. M. Bowman, S. Carter, and D. P. Tew, J. Chem. Phys. 128, 224314 (2008)] which has been brought into a suitable form by a modified version of the n-mode representation which was used with two different arrangements of coordinates. The relevant terms of the expansion have been identified with a Metropolis algorithm and a diffusion Monte-Carlo technique, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Piburn, Jesse O; McManamay, Ryan A
2017-01-01
Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
NASA Astrophysics Data System (ADS)
Rast, S.; Fries, P. H.; Belorizky, E.; Borel, A.; Helm, L.; Merbach, A. E.
2001-10-01
The time correlation functions of the electronic spin components of a metal ion without orbital degeneracy in solution are computed. The approach is based on the numerical solution of the time-dependent Schrödinger equation for a stochastic perturbing Hamiltonian which is simulated by a Monte Carlo algorithm using discrete time steps. The perturbing Hamiltonian is quite general, including the superposition of both the static mean crystal field contribution in the molecular frame and the usual transient ligand field term. The Hamiltonian of the static crystal field can involve the terms of all orders, which are invariant under the local group of the average geometry of the complex. In the laboratory frame, the random rotation of the complex is the only source of modulation of this Hamiltonian, whereas an additional Ornstein-Uhlenbeck process is needed to describe the time fluctuations of the Hamiltonian of the transient crystal field. A numerical procedure for computing the electronic paramagnetic resonance (EPR) spectra is proposed and discussed. For the [Gd(H2O)8]3+ octa-aqua ion and the [Gd(DOTA)(H2O)]- complex [DOTA=1,4,7,10-tetrakis(carboxymethyl)-1,4,7,10-tetraazacyclo dodecane] in water, the predictions of the Redfield relaxation theory are compared with those of the Monte Carlo approach. The Redfield approximation is shown to be accurate for all temperatures and for electronic resonance frequencies at and above X-band, justifying the previous interpretations of EPR spectra. At lower frequencies the transverse and longitudinal relaxation functions derived from the Redfield approximation display significantly faster decays than the corresponding simulated functions. The practical interest of this simulation approach is underlined.
A method for radiological characterization based on fluence conversion coefficients
NASA Astrophysics Data System (ADS)
Froeschl, Robert
2018-06-01
Radiological characterization of components in accelerator environments is often required to ensure adequate radiation protection during maintenance, transport and handling, as well as for the selection of the proper disposal pathway. The relevant quantities are typically weighted sums of specific activities with radionuclide-specific weighting coefficients. Traditional Monte Carlo based methods either score radionuclide creation events directly, or score the particle fluences in the regions of interest and weight them off-line with radionuclide production cross sections. The presented method bases the radiological characterization on a set of fluence conversion coefficients. For a given irradiation profile and cool-down time, radionuclide production cross sections, material composition, and radionuclide-specific weighting coefficients, a set of particle-type- and energy-dependent fluence conversion coefficients is computed. These fluence conversion coefficients can then be used in a Monte Carlo transport code to perform on-line weighting and directly obtain the desired radiological characterization, either by using built-in multiplier features, such as in the PHITS code, or by writing a dedicated user routine, such as for the FLUKA code. The presented method has been validated against the standard event-based methods directly available in Monte Carlo transport codes.
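A minimal sketch of the pre-computation and on-line weighting described above; the arrays are synthetic stand-ins for real production cross sections, build-up/decay factors for the irradiation profile and cool-down time, and a scored fluence spectrum.

```python
import numpy as np

def fluence_conversion_coefficients(xs, weights, build_decay):
    """Fold nuclide production cross sections into one coefficient per
    energy bin (per particle type in a full implementation).

    xs          -- (n_nuclides, n_energies) production cross sections
    weights     -- radionuclide-specific weighting coefficients
    build_decay -- per-nuclide factor for irradiation profile and cool-down
    """
    return np.einsum('n,n,ne->e', weights, build_decay, xs)

n_nuc, n_e = 20, 100
rng = np.random.default_rng(1)
coeff = fluence_conversion_coefficients(rng.random((n_nuc, n_e)),
                                        rng.random(n_nuc),
                                        rng.random(n_nuc))

# On-line use inside a transport code: weight each fluence scoring event
# by coeff[energy_bin] instead of storing spectra for later folding.
fluence_spectrum = rng.random(n_e)                  # scored track-length fluence
characterization = float(coeff @ fluence_spectrum)  # weighted specific activity
```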
A Monte Carlo model for 3D grain evolution during welding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena
2017-08-04
Welding is one of the most widespread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bezier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. Furthermore, the model also allows simulation of pulsed power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
Dynamic Monte Carlo Simulations of Phase Ordering in Br Electrosorption on Ag(100)
NASA Astrophysics Data System (ADS)
Mitchell, S. J.; Brown, G.; Rikvold, P. A.
2000-03-01
We study the dynamics of Br electrosorption on single-crystal Ag(100) by Monte Carlo simulation. The system has a second-order phase transition from a low-coverage disordered phase at more negative potentials to a doubly degenerate c(2×2) ordered phase at more positive potentials [B. M. Ocko et al., Phys. Rev. Lett. 79, 1511 (1997)]. Effective lateral interactions were estimated by fitting equilibrium Monte Carlo isotherms to experiments. These are well described by nearest-neighbor exclusion and repulsive 1/r^3 interactions [M. T. M. Koper, J. Electroanal. Chem. 450, 189 (1997)]. Considering adsorption/desorption and diffusion with barriers estimated from ab-initio calculations [A. Ignaczak and J. A. N. F. Gomes, J. Electroanal. Chem. 420, 71 (1997)], we simulate the time dependent Br coverage, order parameter, and x-ray scattering intensity following sudden potential steps across the phase boundary. For steps far into the ordered phase, dynamical scaling is observed. For smaller steps, the dynamics are more complicated. We also analyze hysteresis in a simulated cyclic-voltammetry experiment. Movies at http://www.scri.fsu.edu/~mitchell/.
Maximum likelihood estimation for life distributions with competing failure modes
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1979-01-01
Systems that are placed on test at time zero, function for a period, and fail at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables to which the item is subject. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Yong; Liu, Da-Jiang; Evans, James W
2014-08-13
Far-from-equilibrium shape and structure evolution during formation and post-assembly sintering of bimetallic nanoclusters is extremely sensitive to the periphery diffusion and intermixing kinetics. Precise characterization of the many distinct local-environment-dependent diffusion barriers is achieved for epitaxial nanoclusters using density functional theory to assess interaction energies both with atoms at adsorption sites and at transition states. Kinetic Monte Carlo simulation incorporating these barriers then captures structure evolution on the appropriate time scale for two-dimensional core-ring and intermixed Au-Ag nanoclusters on Ag(100).
Real-time detection of fast and thermal neutrons in radiotherapy with CMOS sensors.
Arbor, Nicolas; Higueret, Stephane; Elazhar, Halima; Combe, Rodolphe; Meyer, Philippe; Dehaynin, Nicolas; Taupin, Florence; Husson, Daniel
2017-03-07
The peripheral dose distribution is a growing concern for the improvement of new external-beam radiation modalities. Secondary particles, especially photo-neutrons produced by the accelerator, irradiate the patient more than tens of centimeters away from the tumor volume. However, the out-of-field dose is still not estimated accurately by treatment planning software. This study demonstrates the possibility of using a specially designed CMOS sensor for fast and thermal neutron monitoring in radiotherapy. The 14-μm-thick sensitive layer and the integrated electronic chain of the CMOS make it particularly suitable for real-time measurements in γ/n mixed fields. An experimental field-size dependence of the fast neutron production rate, supported by Monte Carlo simulations and CR-39 data, has been observed. This dependence points out the potential benefits of real-time monitoring of fast and thermal neutrons during beam-intensity-modulated radiation therapies.
NASA Astrophysics Data System (ADS)
Crevillén-García, D.; Power, H.
2017-08-01
In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loéve decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
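Of the four methods compared above, multilevel Monte Carlo is the least standard; the sketch below shows its telescoping-sum structure under a synthetic stand-in for the travel-time functional, in which discretization effects shrink as 2^-level and fine/coarse solves share the same random input. Everything here is illustrative, not the study's model problem.

```python
import numpy as np

rng = np.random.default_rng(7)

def payoff(level, z):
    """Stand-in for the average travel time computed on a mesh at the given
    refinement level from the random-field sample z; the 2**-level error
    model is assumed, not derived from the paper."""
    h = 2.0 ** (-level)
    return 1.0 + 0.5 * h + 0.2 * z + 0.1 * h * z

def mlmc_estimate(n_per_level):
    """Multilevel Monte Carlo telescoping sum
    E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}].
    Fine and coarse solves in each correction share the same sample z, so
    the correction variance decays with level and needs few samples."""
    total = 0.0
    for level, n in enumerate(n_per_level):
        z = rng.standard_normal(n)
        if level == 0:
            total += payoff(0, z).mean()
        else:
            total += (payoff(level, z) - payoff(level - 1, z)).mean()
    return total

# Many cheap coarse samples, few expensive fine ones:
print(mlmc_estimate([4096, 512, 64, 8]))   # ~1.06 for this synthetic model
```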
NASA Astrophysics Data System (ADS)
Gomez-Cadenas, J. J.; Benlloch-Rodríguez, J. M.; Ferrario, P.
2017-08-01
In this paper we use detailed Monte Carlo simulations to demonstrate that liquid xenon (LXe) can be used to build a Cherenkov-based TOF-PET with an intrinsic coincidence resolving time (CRT) in the vicinity of 10 ps. This extraordinary performance is due to three facts: (a) the abundant emission of Cherenkov photons by liquid xenon; (b) the fact that LXe is transparent to Cherenkov light; and (c) the fact that the fastest photons in LXe have wavelengths longer than 300 nm, therefore making it possible to separate the detection of scintillation and Cherenkov light. The CRT in a Cherenkov LXe TOF-PET detector is, therefore, dominated by the resolution (time jitter) introduced by the photosensors and the electronics. However, we show that for sufficiently fast photosensors (e.g., an overall 40 ps jitter, which can be achieved by current micro-channel plate photomultipliers) the overall CRT varies between 30 and 55 ps, depending on the detection efficiency. This is still one order of magnitude better than the CRT of commercial devices and improves by a factor of 3 on the best CRT obtained with small laboratory prototypes.
Test of quantum thermalization in the two-dimensional transverse-field Ising model
Blaß, Benjamin; Rieger, Heiko
2016-01-01
We study the quantum relaxation of the two-dimensional transverse-field Ising model after global quenches with a real-time variational Monte Carlo method and address the question whether this non-integrable, two-dimensional system thermalizes or not. We consider both interaction quenches in the paramagnetic phase and field quenches in the ferromagnetic phase and compare the time-averaged probability distributions of non-conserved quantities like magnetization and correlation functions to the thermal distributions according to the canonical Gibbs ensemble obtained with quantum Monte Carlo simulations at temperatures defined by the excess energy in the system. We find that the occurrence of thermalization crucially depends on the quench parameters: While after the interaction quenches in the paramagnetic phase thermalization can be observed, our results for the field quenches in the ferromagnetic phase show clear deviations from the thermal system. These deviations increase with the quench strength and become especially clear comparing the shape of the thermal and the time-averaged distributions, the latter ones indicating that the system does not completely lose the memory of its initial state even for strong quenches. We discuss our results with respect to a recently formulated theorem on generalized thermalization in quantum systems. PMID:27905523
ERIC Educational Resources Information Center
Vasu, Ellen Storey
1978-01-01
The effects of the violation of the assumption of normality in the conditional distributions of the dependent variable, coupled with the condition of multicollinearity upon the outcome of testing the hypothesis that the regression coefficient equals zero, are investigated via a Monte Carlo study. (Author/JKS)
Nanoshells for photothermal therapy: a Monte-Carlo based numerical study of their design tolerance
Grosges, Thomas; Barchiesi, Dominique; Kessentini, Sameh; Gréhan, Gérard; de la Chapelle, Marc Lamy
2011-01-01
The optimization of coated metallic nanoparticles and nanoshells is a current challenge for biological applications, especially for cancer photothermal therapy, considering both the continuous improvement of their fabrication and the increasing requirement of efficiency. The efficiency of the coupling between illumination and such nanostructures for burning purposes depends unevenly on their geometrical parameters (radius, thickness of the shell) and material parameters (permittivities, which depend on the illumination wavelength). Through a Monte-Carlo method, we propose a numerical study of such nanodevices to evaluate tolerances (or uncertainty) on these parameters, given a threshold of efficiency, to facilitate the design of nanoparticles. The results could help to focus on the relevant parameters of the engineering process on which the absorbed energy depends most. The Monte-Carlo method confirms that the best burning efficiency is obtained for hollow nanospheres and exhibits the sensitivity of the absorbed electromagnetic energy as a function of each parameter. The proposed method is general and could be applied in the design and development of new embedded coated nanomaterials used in biomedical applications. PMID:21698021
A generic multi-hazard and multi-risk framework and its application illustrated in a virtual city
NASA Astrophysics Data System (ADS)
Mignan, Arnaud; Euchner, Fabian; Wiemer, Stefan
2013-04-01
We present a generic framework to implement hazard correlations in multi-risk assessment strategies. We consider hazard interactions (process I), time-dependent vulnerability (process II) and time-dependent exposure (process III). Our approach is based on the Monte Carlo method to simulate a complex system, which is defined from assets exposed to a hazardous region. We generate 1-year time series, sampling from a stochastic set of events. Each time series corresponds to one risk scenario, and the analysis of multiple time series allows for the probabilistic assessment of losses and for the recognition of more or less probable risk paths. Each sampled event is associated with a time of occurrence, a damage footprint and a loss footprint. The occurrence of an event depends on its rate, which is conditional on the occurrence of past events (process I, concept of correlation matrix). Damage depends on the hazard intensity and on the vulnerability of the asset, which is conditional on previous damage to that asset (process II). Losses are the product of damage and exposure value, this value being the original exposure minus previous losses (process III, no reconstruction considered). The Monte Carlo method allows for a straightforward implementation of uncertainties and for the implementation of numerous interactions, which is otherwise challenging in an analytical multi-risk approach. We apply our framework to a synthetic data set, defined by a virtual city within a virtual region. This approach gives the opportunity to perform multi-risk analyses in a controlled environment while not requiring real data, which may be difficult to access or simply unavailable to the public. Based on the heuristic approach, we define a 100 by 100 km region where earthquakes, volcanic eruptions, fluvial floods, hurricanes and coastal floods can occur. All hazards are harmonized to a common format. We define a 20 by 20 km city, composed of 50,000 identical buildings with a fixed economic value. Vulnerability curves are defined in terms of mean damage ratio as a function of hazard intensity. All data are based on simple equations found in the literature and on other simplifications. We show the impact of earthquake-earthquake interaction and hurricane-storm surge coupling, as well as of time-dependent vulnerability and exposure, on aggregated loss curves. One main result is the emergence of low-probability, high-consequence (extreme) events when correlations are implemented. While the concept of a virtual city can suggest the theoretical benefits of multi-risk assessment for decision support, identifying its real-world practicality will require the study of real test sites.
Comparison of scientific computing platforms for MCNP4A Monte Carlo calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendricks, J.S.; Brockhoff, R.C.
1994-04-01
The performance of seven computer platforms is evaluated with the widely used and internationally available MCNP4A Monte Carlo radiation transport code. All results are reproducible and are presented in such a way as to enable comparison with computer platforms not in the study. The authors observed that the HP/9000-735 workstation runs MCNP 50% faster than the Cray YMP 8/64. Compared with the Cray YMP 8/64, the IBM RS/6000-560 is 68% as fast, the Sun Sparc10 is 66% as fast, the Silicon Graphics ONYX is 90% as fast, the Gateway 2000 model 4DX2-66V personal computer is 27% as fast, and the Sun Sparc2 is 24% as fast. In addition to comparing the timing performance of the seven platforms, the authors observe that changes in compilers and software over the past two years have resulted in only modest performance improvements, hardware improvements have enhanced performance by less than a factor of approximately 3, timing studies are very problem dependent, and MCNP4A runs about as fast as MCNP4.
NASA Astrophysics Data System (ADS)
Dabiri, Mohammad Taghi; Sadough, Seyed Mohammad Sajad
2018-04-01
In free-space optical (FSO) links, atmospheric turbulence leads to scintillation in the received signal. Due to its ease of implementation, intensity modulation with direct detection (IM/DD) based on ON-OFF keying (OOK) is a popular signaling scheme in these systems. Over a turbulence channel, to detect OOK symbols in a blind way, i.e., without sending pilot symbols, an expectation-maximization (EM)-based detection method was recently proposed in the FSO literature. However, the performance of EM-based detection methods severely depends on the length of the observation interval (Ls). To choose the optimum values of Ls at target bit error rates (BERs) of FSO communications, which are commonly lower than 10^-9, Monte Carlo simulations would be very cumbersome and require a very long processing time. To facilitate performance evaluation, in this letter we derive analytic expressions for the BER and outage probability. Numerical results validate the accuracy of our derived analytic expressions. Our results may serve to evaluate the optimum value of Ls without resorting to time-consuming Monte Carlo simulations.
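The cost the letter avoids is easy to see in a brute-force sketch: Monte Carlo BER estimation of OOK over a log-normal scintillation channel needs on the order of 100/BER transmitted bits, i.e. about 10^11 bits at a BER of 10^-9. The sketch below uses a genie-aided (channel-known) threshold rather than the EM-based blind detector, and the channel parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def ook_ber_mc(snr_db, sigma2_log=0.04, n_bits=2_000_000):
    """Monte Carlo BER of IM/DD OOK over log-normal scintillation with a
    per-bit optimal threshold (assumed perfect channel knowledge)."""
    A = 10.0 ** (snr_db / 20.0)                      # 'on' amplitude, unit noise
    s2 = sigma2_log                                  # log-irradiance variance
    I = rng.lognormal(mean=-s2 / 2.0, sigma=np.sqrt(s2), size=n_bits)  # E[I]=1
    bits = rng.integers(0, 2, n_bits)
    r = A * I * bits + rng.standard_normal(n_bits)   # received photocurrent
    decided = r > A * I / 2.0                        # genie-aided threshold
    return np.mean(decided != bits)

for snr in (10, 14, 18):
    print(snr, ook_ber_mc(snr))
```

Even at these modest SNRs the estimator needs millions of bits for a stable answer, which illustrates why closed-form BER and outage expressions are preferable at 10^-9 targets.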
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khromov, K. Yu.; Vaks, V. G., E-mail: vaks@mbslab.kiae.ru; Zhuravlev, I. A.
2013-02-15
The previously developed ab initio model and the kinetic Monte Carlo method (KMCM) are used to simulate precipitation in a number of iron-copper alloys with different copper concentrations x and temperatures T. The same simulations are also made using an improved version of the previously suggested stochastic statistical method (SSM). The results obtained enable us to make a number of general conclusions about the dependences of the decomposition kinetics in Fe-Cu alloys on x and T. We also show that the SSM usually describes the precipitation kinetics in good agreement with the KMCM, and using the SSM in conjunction with the KMCM allows extending the KMC simulations to longer evolution times. The results of simulations seem to agree with available experimental data for Fe-Cu alloys within the statistical errors of the simulations and the scatter of experimental results. Comparison of simulation results with experiments for some multicomponent Fe-Cu-based alloys allows making certain conclusions about the influence of alloying elements in these alloys on the precipitation kinetics at different stages of evolution.
NASA Astrophysics Data System (ADS)
Lin, J. Y. Y.; Aczel, A. A.; Abernathy, D. L.; Nagler, S. E.; Buyers, W. J. L.; Granroth, G. E.
2014-04-01
Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A. A. Aczel et al., Nat. Commun. 3, 1124 (2012), 10.1038/ncomms2117]. These modes are well described by three-dimensional isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states, and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximate Q-independent background to the spectrum at the oscillator mode positions. Temperature-dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T dependence of the scattering from these modes is strongly influenced by the uranium lattice.
A new class of accelerated kinetic Monte Carlo algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bulatov, V V; Oppelstrup, T; Athenes, M
2011-11-30
Kinetic (aka dynamic) Monte Carlo (KMC) is a powerful method for numerical simulations of time-dependent evolution applied in a wide range of contexts including biology, chemistry, physics, nuclear sciences, financial engineering, etc. Generally, in a KMC the time evolution takes place one event at a time, where the sequence of events and the time intervals between them are selected (or sampled) using random numbers. While details of the method implementation vary depending on the model and context, there exist certain common issues that limit KMC applicability in almost all applications. Among these is the notorious 'flicker problem' where the same states of the system are repeatedly visited but otherwise no essential evolution is observed. In its simplest form the flicker problem arises when two states are connected to each other by transitions whose rates far exceed the rates of all other transitions out of the same two states. In such cases, the model will endlessly hop between the two states while otherwise producing no meaningful evolution. In most situations of practical interest, the trapping cluster includes more than two states, making the flicker somewhat more difficult to detect and to deal with. Several methods have been proposed to overcome or mitigate the flicker problem, exactly [1-3] or approximately [4,5]. Of the exact methods, the one proposed by Novotny [1] is perhaps most relevant to our research. Novotny formulates the problem of escaping from a trapping cluster as a Markov system with absorbing states. Given an initial state inside the cluster, it is in principle possible to solve the Master Equation for the time-dependent probabilities of finding the walker in a given state (transient or absorbing) of the cluster at any time in the future. Novotny then proceeds to demonstrate the implementation of his general method for trapping clusters containing the initial state plus one or two transient states and all of their absorbing states. Similar methods have been subsequently proposed in [refs] but applied in a different context. The most serious deficiency of the earlier methods is that the size of the trapping cluster is fixed and often too small to bring substantial simulation speedup. Furthermore, the overhead associated with solving for the probability distribution on the trapping cluster sometimes makes such simulations less efficient than standard KMC. Here we report on a general and exact accelerated kinetic Monte Carlo algorithm applicable to arbitrary Markov models. Two different implementations are attempted, both based on incremental expansion of the trapping subset of Markov states: (1) numerical solution of the Master Equation with absorbing states, and (2) incremental graph reduction followed by randomization. Of the two implementations, the second performs better, allowing, for the first time, trapping basins spanning several million Markov states to be overcome. The new method is used for simulations of anomalous diffusion on a 2D substrate and of the kinetics of diffusive first-order phase transformations in binary alloys. Depending on temperature and (alloy) supersaturation conditions, speedups of 3 to 7 orders of magnitude are demonstrated, with no compromise of simulation accuracy.
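Novotny-style escape from a trapping cluster can be sketched with the fundamental matrix of an absorbing Markov chain: the two-state trap below would flicker for hundreds of KMC events, whereas one linear solve yields the exit-state probabilities and the expected number of steps skipped. The transition probabilities are illustrative.

```python
import numpy as np

def escape_from_trap(Q, R, start=0):
    """Exact escape statistics from a trapping cluster of Markov states.

    Q -- (t x t) transition probabilities among the transient (trapped) states
    R -- (t x a) transition probabilities from transient to absorbing states
    Rows of [Q | R] sum to 1. Rather than hopping through the cluster one
    KMC event at a time, solve the absorbing chain directly:
    N = (I - Q)^-1 gives expected visit counts, N @ R the absorption
    probabilities, and N @ 1 the expected number of ordinary steps replaced.
    """
    t = Q.shape[0]
    N = np.linalg.inv(np.eye(t) - Q)        # fundamental matrix
    exit_probs = N @ R                      # P(absorbed into each exit state)
    mean_steps = N @ np.ones(t)             # expected flicker steps skipped
    return exit_probs[start], mean_steps[start]

# Two trapped states exchanging rapidly, weak leaks to two absorbing states:
Q = np.array([[0.0, 0.998],
              [0.995, 0.0]])
R = np.array([[0.002, 0.0],
              [0.0, 0.005]])
p_exit, n_steps = escape_from_trap(Q, R)
print(p_exit, n_steps)                      # ~[0.29, 0.71], ~286 steps skipped
```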
Numerically Exact Long Time Magnetization Dynamics Near the Nonequilibrium Kondo Regime
NASA Astrophysics Data System (ADS)
Cohen, Guy; Gull, Emanuel; Reichman, David; Millis, Andrew; Rabani, Eran
2013-03-01
The dynamical and steady-state spin response of the nonequilibrium Anderson impurity model to magnetic fields, bias voltages, and temperature is investigated by a numerically exact method which allows access to unprecedentedly long times. The method is based on real-time, continuous-time bold-line Monte Carlo techniques (quantum Monte Carlo sampling of diagrammatic corrections to a partial resummation), used to compute the kernel of a memory function, which is then used to determine the reduced density matrix. The method owes its effectiveness to the fact that the memory kernel is dominated by relatively short-time properties even when the system's dynamics are long-ranged. We make predictions regarding the non-monotonic temperature dependence of the system at high bias voltage and the oscillatory quench dynamics at high magnetic fields. We also discuss extensions of the method to the computation of transport properties and correlation functions, and its suitability as an impurity solver free from the need for analytical continuation in the context of dynamical mean field theory. This work is supported by the US Department of Energy under grant DE-SC0006613, by NSF-DMR-1006282 and by the US-Israel Binational Science Foundation. GC is grateful to the Yad Hanadiv-Rothschild Foundation for the award of a Rothschild Fellowship.
Monte Carlo grain growth modeling with local temperature gradients
NASA Astrophysics Data System (ADS)
Tan, Y.; Maniatty, A. M.; Zheng, C.; Wen, J. T.
2017-09-01
This work investigated the development of a Monte Carlo (MC) simulation approach to modeling grain growth in the presence of a non-uniform temperature field that may vary with time. We first scale the MC model to physical growth processes by fitting experimental data. Based on the scaling relationship, we derive a grid site selection probability (SSP) function to consider the effect of a spatially varying temperature field. The SSP function is based on the differential MC step, which allows it to naturally handle time-varying temperature fields as well. We verify the model and compare the predictions to other existing formulations (Godfrey and Martin 1995 Phil. Mag. A 72 737-49; Radhakrishnan and Zacharia 1995 Metall. Mater. Trans. A 26 2123-30) in simple two-dimensional cases with only spatially varying temperature fields, where the predicted grain growth in regions of constant temperature is expected to be the same as for the isothermal case. We also test the model in a more realistic three-dimensional case with a temperature field varying in both space and time, modeling grain growth in the heat-affected zone of a weld. We believe the newly proposed approach is promising for modeling grain growth in material manufacturing processes that involve time-dependent local temperature gradients.
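A toy version of the site-selection-probability idea for a zero-temperature-accept Potts grain-growth model: sites in hotter regions attempt reorientations more often. The Arrhenius SSP form and its activation constant are placeholders for the experimentally fitted scaling derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
L, Qn = 64, 32                        # lattice size, number of grain orientations
spins = rng.integers(0, Qn, (L, L))
T = np.linspace(800.0, 1600.0, L)[None, :].repeat(L, axis=0)  # K, hot right edge

def ssp(temp, Qact=1.0e4, Tmax=1600.0):
    """Site selection probability: hotter sites attempt flips more often,
    emulating locally faster grain growth (assumed Arrhenius form)."""
    return np.exp(-Qact / temp) / np.exp(-Qact / Tmax)

def mc_sweep(spins, T):
    for _ in range(L * L):
        i, j = rng.integers(0, L, 2)
        if rng.random() > ssp(T[i, j]):
            continue                              # site skipped this MC step
        nbrs = [spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                spins[i, (j + 1) % L], spins[i, (j - 1) % L]]
        new = nbrs[rng.integers(0, 4)]            # try a neighbor orientation
        dE = sum(new != n for n in nbrs) - sum(spins[i, j] != n for n in nbrs)
        if dE <= 0:                               # zero-temperature Potts accept
            spins[i, j] = new
    return spins

for _ in range(50):
    spins = mc_sweep(spins, T)                    # grains coarsen faster on the right
```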
Theory for the solvation of nonpolar solutes in water
NASA Astrophysics Data System (ADS)
Urbic, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Dill, K. A.
2007-11-01
We recently developed an angle-dependent Wertheim integral equation theory (IET) of the Mercedes-Benz (MB) model of pure water [Silverstein et al., J. Am. Chem. Soc. 120, 3166 (1998)]. Our approach treats explicitly the coupled orientational constraints within water molecules. The analytical theory offers the advantage of being less computationally expensive than Monte Carlo simulations by two orders of magnitude. Here we apply the angle-dependent IET to studying the hydrophobic effect, the transfer of a nonpolar solute into MB water. We find that the theory reproduces the Monte Carlo results qualitatively for cold water and quantitatively for hot water.
2016-10-14
We introduce new Monte Carlo methods to quantify errors in our inversions arising from Gaussian time-dependent changes in the external field.
Bayesian explorations of fault slip evolution over the earthquake cycle
NASA Astrophysics Data System (ADS)
Duputel, Z.; Jolivet, R.; Benoit, A.; Gombert, B.
2017-12-01
The ever-increasing amount of geophysical data continuously opens new perspectives on fundamental aspects of the seismogenic behavior of active faults. In this context, the recent fleet of SAR satellites including Sentinel-1 and COSMO-SkyMED permits the use of InSAR for time-dependent slip modeling with unprecedented resolution in time and space. However, existing time-dependent slip models rely on spatial smoothing regularization schemes, which can produce unrealistically smooth slip distributions. In addition, these models usually do not include uncertainty estimates thereby reducing the utility of such estimates. Here, we develop an entirely new approach to derive probabilistic time-dependent slip models. This Markov-Chain Monte Carlo method involves a series of transitional steps to predict and update posterior Probability Density Functions (PDFs) of slip as a function of time. We assess the viability of our approach using various slow-slip event scenarios. Using a dense set of SAR images, we also use this method to quantify the spatial distribution and temporal evolution of slip along a creeping segment of the North Anatolian Fault. This allows us to track a shallow aseismic slip transient lasting for about a month with a maximum slip of about 2 cm.
Exact and Monte carlo resampling procedures for the Wilcoxon-Mann-Whitney and Kruskal-Wallis tests.
Berry, K J; Mielke, P W
2000-12-01
Exact and Monte Carlo resampling FORTRAN programs are described for the Wilcoxon-Mann-Whitney rank sum test and the Kruskal-Wallis one-way analysis of variance for ranks test. The program algorithms compensate for tied values and do not depend on asymptotic approximations for probability values, unlike most algorithms contained in PC-based statistical software packages.
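The abstract describes FORTRAN programs; the same Monte Carlo resampling logic for the two-sample case can be sketched in a few lines of Python, with midranks compensating for ties as noted above. The data and resample count are illustrative.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(11)

def wmw_resampling_pvalue(x, y, n_resamples=100_000):
    """Monte Carlo resampling p-value for the Wilcoxon-Mann-Whitney test.
    The null distribution is built by shuffling group labels, so no
    asymptotic approximation is needed; rankdata's midranks handle ties.
    """
    pooled = np.concatenate([x, y])
    ranks = rankdata(pooled)                  # midranks compensate for ties
    n_x = len(x)
    observed = ranks[:n_x].sum()              # rank sum of the first group
    expected = n_x * (len(pooled) + 1) / 2.0  # null mean of the rank sum
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(ranks)                    # random relabeling of groups
        if abs(ranks[:n_x].sum() - expected) >= abs(observed - expected):
            count += 1
    return count / n_resamples                # two-sided p-value

x = np.array([1.1, 2.3, 2.3, 3.7, 4.0])
y = np.array([2.0, 2.3, 4.5, 5.1, 6.2, 6.2])
print(wmw_resampling_pvalue(x, y))
```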
Recommender engine for continuous-time quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
NASA Astrophysics Data System (ADS)
Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.
2008-02-01
Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and with the CyberKnife treatment planning system (TPS) for lung, head and neck, and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). Differences of more than 10% in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by a factor of up to 62 (46 on average for 10 typical clinical cases) compared with full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
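The Russian roulette step mentioned above is a standard variance reduction technique; a minimal Python sketch is given below. This is not the MCSIM implementation: the weight threshold and survival probability are illustrative assumptions.

```python
import numpy as np

def russian_roulette(weight, threshold=0.1, survival=0.5, rng=None):
    """Standard Russian roulette: photons whose statistical weight falls
    below `threshold` are killed with probability 1 - survival; survivors
    have their weight boosted so the mean weight is preserved."""
    if rng is None:
        rng = np.random.default_rng()
    if weight >= threshold:
        return weight                  # heavy enough: transport continues as-is
    if rng.random() < survival:
        return weight / survival       # survivor carries the killed photons' weight
    return 0.0                         # photon history terminated

# Example: a scattered photon whose weight has dropped to 0.02
rng = np.random.default_rng(7)
print([russian_roulette(0.02, rng=rng) for _ in range(5)])
```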
M≥7 Earthquake rupture forecast and time-dependent probability for the Sea of Marmara region, Turkey
Murru, Maura; Akinci, Aybige; Falcone, Guiseppe; Pucci, Stefano; Console, Rodolfo; Parsons, Thomas E.
2016-01-01
We forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years using a new fault-segmentation model. We also augment time-dependent Brownian Passage Time (BPT) probability with static Coulomb stress changes (ΔCFF) from interacting faults. We calculate Mw > 6.5 probability from 26 individual fault sources in the Marmara region. We also consider a multisegment rupture model that allows higher-magnitude ruptures over some segments of the Northern branch of the North Anatolian Fault Zone (NNAF) beneath the Marmara Sea. A total of 10 different Mw=7.0 to Mw=8.0 multisegment ruptures are combined with the other regional faults at rates that balance the overall moment accumulation. We use Gaussian random distributions to treat parameter uncertainties (e.g., aperiodicity, maximum expected magnitude, slip rate, and consequently mean recurrence time) of the statistical distributions associated with each fault source. We then estimate uncertainties of the 30-year probability values for the next characteristic event obtained from three different models (Poisson, BPT, and BPT+ΔCFF) using a Monte Carlo procedure. The Gerede fault segment located at the eastern end of the Marmara region shows the highest 30-year probability, with a Poisson value of 29% and a time-dependent interaction probability of 48%. We find an aggregated 30-year Poisson probability of M > 7.3 earthquakes at Istanbul of 35%, which increases to 47% if time dependence and stress transfer are considered. We calculate a twofold probability gain (the ratio of time-dependent to time-independent probability) on the southern strands of the North Anatolian Fault Zone.
Mesoscopic structure of neuronal tracts from time-dependent diffusion
Burcaw, Lauren M.; Fieremans, Els; Novikov, Dmitry S.
2015-01-01
Interpreting brain diffusion MRI measurements in terms of neuronal structure at a micrometer level is an exciting unresolved problem. Here we consider diffusion transverse to a bundle of fibers, and show theoretically, as well as using Monte Carlo simulations and measurements in a phantom made of parallel fibers mimicking axons, that the time dependent diffusion coefficient approaches its macroscopic limit slowly, in a (ln t)/t fashion. The logarithmic singularity arises due to short range disorder in the fiber packing. We identify short range disorder in axonal fibers based on histological data from the splenium, and argue that the time dependent contribution to the overall diffusion coefficient from the extra-axonal water dominates that of the intra-axonal water. This dominance may explain the bias in measuring axon diameters in clinical settings. The short range disorder is also reflected in the linear frequency dependence of the diffusion coefficient measured with oscillating gradients, in agreement with recent experiments. Our results relate the measured diffusion to the mesoscopic structure of neuronal tissue, uncovering the sensitivity of diffusion metrics to axonal arrangement within a fiber tract, and providing an alternative interpretation of axonal diameter mapping techniques. PMID:25837598
NASA Astrophysics Data System (ADS)
Verdecchia, A.; Deng, K.; Harrington, R. M.; Liu, Y.
2017-12-01
It is broadly accepted that large variations of water level in reservoirs may affect the stress state on nearby faults. While most studies consider the relationship between lake impoundment and the occurrence of large earthquakes or seismicity rate increases in the surrounding region, very few examples focus on the effects of lake drainage. The second largest reservoir in Europe, Lake Campotosto, is located on the hanging wall of the Monte Gorzano fault, an active normal fault responsible for at least two M ≥ 6 earthquakes in historical times. The northern part of this fault ruptured during the August 24, 2016, Mw 6.0 Amatrice earthquake, increasing the probability of a future large event on the southern section, where an aftershock sequence is still ongoing. The proximity of the Campotosto reservoir to the active fault has aroused general concern with respect to the stability of the three dams bounding the reservoir should the southern part of the Monte Gorzano fault produce a moderate earthquake. Local officials have proposed draining the reservoir as a hazard mitigation strategy to avoid possible future catastrophes. In an effort to assess how draining the reservoir might affect earthquake nucleation on the fault, we use a finite-element poroelastic model to calculate the evolution of stress and pore pressure, in terms of Coulomb stress changes, that would be induced on the Monte Gorzano fault by emptying the Lake Campotosto reservoir. Preliminary results show that an instantaneous drainage of the lake would produce positive Coulomb stress changes mostly on the shallower part of the fault (0 to 2 km), while a stress drop of the order of 0.2 bar is expected on the Monte Gorzano fault between 0 and 8 km depth. Earthquake hypocenters on the southern portion of the fault currently nucleate between 5 and 13 km depth, with activity distributed near the reservoir. Upcoming work will model the effects of varying fault geometry and elastic parameters, including geological layering. In addition, we will consider more realistic unloading strategies to test the time-dependent stress and pore pressure changes on the fault.
NASA Astrophysics Data System (ADS)
Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh
2014-06-01
In recent years, Monte Carlo burnup/depletion codes have appeared that couple a Monte Carlo code, which simulates the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine 3-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The drawback is the prohibitive calculation time caused by the Monte Carlo solver being called at each time step. In this paper we present a methodology that avoids these repetitive and time-expensive Monte Carlo simulations and replaces them with perturbation calculations: the successive burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. We first present the method and give details of the perturbative technique used, namely correlated sampling. We then discuss the implementation of this method in the TRIPOLI-4® code, as well as the calculation scheme designed to bring an important speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a REP-like assembly at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude.
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing the analysis of complicated geometrical structures. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of the Monte Carlo simulations obtained with an exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight, and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
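The flavor of this optimization can be illustrated with a small Python sketch in which the logarithm needed for path-length sampling is replaced by a polynomial fitted on the mantissa range. The mantissa/exponent decomposition and the polynomial degree are our assumptions, not the authors' published approximants.

```python
import numpy as np

# Fit a low-order polynomial to ln(m) on the mantissa range [0.5, 1).
m_grid = np.linspace(0.5, 1.0, 1000)
coeffs = np.polyfit(m_grid, np.log(m_grid), 4)   # max error far below 1%

def fast_log(u):
    """Approximate ln(u) for u in (0, 1]: split u = m * 2**e with
    np.frexp (m in [0.5, 1)), evaluate the fitted polynomial for ln(m),
    and add e * ln(2) exactly."""
    m, e = np.frexp(u)
    return np.polyval(coeffs, m) + e * np.log(2.0)

# Path-length sampling in photon-migration Monte Carlo: s = -ln(xi)/mu_t.
rng = np.random.default_rng(0)
xi = rng.random(5)
mu_t = 10.0                  # total attenuation coefficient, 1/mm (assumed)
print(-fast_log(xi) / mu_t)
print(-np.log(xi) / mu_t)    # exact values for comparison
```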
Patti, Alessandro; Cuetos, Alejandro
2012-07-01
We report on the diffusion of purely repulsive and freely rotating colloidal rods in the isotropic, nematic, and smectic liquid crystal phases to probe the agreement between Brownian and Monte Carlo dynamics under the most general conditions. By properly rescaling the Monte Carlo time step, which is related to any elementary move via the corresponding self-diffusion coefficient, with the acceptance rate of simultaneous trial displacements and rotations, we demonstrate the existence of a unique Monte Carlo time scale that allows for a direct comparison between Monte Carlo and Brownian dynamics simulations. To estimate the validity of our theoretical approach, we compare the mean square displacement of rods, their orientational autocorrelation function, and the self-intermediate scattering function, as obtained from Brownian dynamics and Monte Carlo simulations. The agreement between the results of these two approaches, even under the condition of heterogeneous dynamics generally observed in liquid crystalline phases, is excellent.
An improved random walk algorithm for the implicit Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keady, Kendra P., E-mail: keadyk@lanl.gov; Cleveland, Mathew A.
In this work, we introduce a modified Implicit Monte Carlo (IMC) Random Walk (RW) algorithm, which increases simulation efficiency for multigroup radiative transfer problems with strongly frequency-dependent opacities. To date, the RW method has only been implemented in “fully-gray” form; that is, the multigroup IMC opacities are group-collapsed over the full frequency domain of the problem to obtain a gray diffusion problem for RW. This formulation works well for problems with large spatial cells and/or opacities that are weakly dependent on frequency; however, the efficiency of the RW method degrades when the spatial cells are thin or the opacities are a strong function of frequency. To address this inefficiency, we introduce a RW frequency group cutoff in each spatial cell, which divides the frequency domain into optically thick and optically thin components. In the modified algorithm, opacities for the RW diffusion problem are obtained by group-collapsing IMC opacities below the frequency group cutoff. Particles with frequencies above the cutoff are transported via standard IMC, while particles below the cutoff are eligible for RW. This greatly increases the total number of RW steps taken per IMC time-step, which in turn improves the efficiency of the simulation. We refer to this new method as Partially-Gray Random Walk (PGRW). We present numerical results for several multigroup radiative transfer problems, which show that the PGRW method is significantly more efficient than standard RW for several problems of interest. In general, PGRW decreases runtimes by a factor of ∼2–4 compared to standard RW, and a factor of ∼3–6 compared to standard IMC. While PGRW is slower than frequency-dependent Discrete Diffusion Monte Carlo (DDMC), it is also easier to adapt to unstructured meshes and can be used in spatial cells where DDMC is not applicable. This suggests that it may be optimal to employ both DDMC and PGRW in a single simulation.
Optimal boarding method for airline passengers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steffen, Jason H.; /Fermilab
2008-02-01
Using a Markov chain Monte Carlo optimization algorithm and a computer simulation, I find the passenger ordering which minimizes the time required to board the passengers onto an airplane. The model that I employ assumes that the time that a passenger requires to load his or her luggage is the dominant contribution to the time needed to completely fill the aircraft. The optimal boarding strategy may reduce the time required to board an airplane by over a factor of four, and possibly more depending upon the dimensions of the aircraft. I explore some features of the optimal boarding method and discuss practical modifications to the optimum. Finally, I mention some of the benefits that could come from implementing an improved passenger boarding scheme.
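A minimal Python sketch of the approach: a toy single-aisle boarding simulator in which luggage stowing dominates, wrapped in a Metropolis-style Markov chain search over boarding orders. The cabin dimensions, stowing time, and acceptance temperature are illustrative assumptions, not the paper's model parameters.

```python
import math
import random

def boarding_time(order, stow_time=3):
    """Toy single-aisle boarding model: each passenger walks toward their
    row one position per tick (no overtaking) and blocks the aisle for
    `stow_time` ticks while stowing luggage; returns total ticks."""
    aisle = {}                 # aisle position -> [target_row, remaining_stow]
    queue = list(order)        # boarding order, as target-row numbers
    t = 0
    while queue or aisle:
        for pos in sorted(aisle, reverse=True):    # front of cabin moves first
            target, stow = aisle[pos]
            if pos == target:
                if stow > 1:
                    aisle[pos][1] -= 1             # still stowing
                else:
                    del aisle[pos]                 # seated, aisle position freed
            elif pos + 1 not in aisle:
                aisle[pos + 1] = aisle.pop(pos)    # step toward the target row
        if queue and 0 not in aisle:
            aisle[0] = [queue.pop(0), stow_time]   # next passenger enters
        t += 1
    return t

def optimize_order(rows=12, seats_per_row=2, steps=2000, temp=5.0):
    """Markov chain Monte Carlo search over boarding orders, proposing
    random swaps of two passengers and accepting via a Metropolis rule."""
    order = [r for r in range(rows) for _ in range(seats_per_row)]
    random.shuffle(order)
    cur_t = boarding_time(order)
    best, best_t = order[:], cur_t
    for _ in range(steps):
        i, j = random.randrange(len(order)), random.randrange(len(order))
        order[i], order[j] = order[j], order[i]
        new_t = boarding_time(order)
        if new_t <= cur_t or random.random() < math.exp((cur_t - new_t) / temp):
            cur_t = new_t
            if new_t < best_t:
                best, best_t = order[:], new_t
        else:
            order[i], order[j] = order[j], order[i]  # reject: undo the swap
    return best, best_t

print(optimize_order())
```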
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the data consistency assumption for hydrological computation. Both of these factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we propose a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component, and by selecting reasonable thresholds of the correlation coefficient, this method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficient at each order of the series, we found that the correlation coefficient is mainly determined by the magnitude of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of the method. With the first-order and second-order auto-regression models as examples, the reasonableness of the deduced formula was verified through Monte Carlo experiments that classify the relationship between the correlation coefficient and the auto-correlation coefficient. The method was used to analyze three observed hydrological time series. The results indicate the coexistence of stochastic and dependence characteristics in hydrological processes.
Bayesian Inference for Time Trends in Parameter Values using Weighted Evidence Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. L. Kelly; A. Malkhasyan
2010-09-01
There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework were noted as important issues in NUREG/CR-6813, performed for the U. S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates an approach to incorporating multiple sources of data via applicability weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly; Albert Malkhasyan
2010-06-01
There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework were noted as important issues in NUREG/CR-6813, performed for the U. S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates the development of a generic prior distribution, which incorporates multiple sources of generic data via weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.
Magnetic response of a disordered binary ferromagnetic alloy to an oscillating magnetic field
NASA Astrophysics Data System (ADS)
Vatansever, Erol; Polat, Hamza
2015-08-01
By means of Monte Carlo simulation with a local-spin-update Metropolis algorithm, we have elucidated the non-equilibrium phase transition properties and stationary-state behavior of a disordered binary ferromagnetic alloy of the type ApB1-p on a square lattice. After a detailed analysis, we find that the system shows many interesting and unusual thermal and magnetic behaviors; for instance, the locations of the dynamic phase transition points change significantly depending upon the amplitude and period of the external magnetic field as well as upon the active concentration of A-type components. Much effort has also been dedicated to clarifying the hysteresis characteristics, such as the coercivity, the dynamic loop area, and the dynamic correlations between the time-dependent magnetization and the external time-dependent applied field, as functions of the period and amplitude of the field as well as the active concentration of A-type components, and the physical findings are reported in order to better understand the dynamic process underlying the present system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin, E-mail: binchen@lsu.edu
2014-08-21
A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of this distribution function would be required. This numerical table can be generated a priori from the distribution function. This method has been tested on a united-atom model of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, that are good representatives of both linear and branched molecules. It has been shown from these test cases that reasonable approximations can be made especially for the highly branched molecules to reduce drastically the dimensionality and correspondingly the amount of the tabulated data that is needed to be stored. Despite these approximations, the dependencies between the various geometrical variables can be still well considered, as evident from a nearly perfect acceptance rate achieved. For all cases, the bending angles were shown to be sampled correctly by this method with an acceptance rate of at least 96% for 2,2-dimethylpropane to more than 99% for propane. Since only one trial is required to be generated for each bending angle (instead of thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. The profiling results of our Monte Carlo simulation code show that trial generation, which used to be the most time consuming process, is no longer the time dominating component of the simulation.
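The idea can be sketched for a single harmonic bending angle: tabulate the Boltzmann-weighted density a priori, then draw one trial per angle by inverting the numerical CDF. The force-field constants below are TraPPE-like placeholders (assumptions), and the full method additionally handles correlations between geometric variables that this one-dimensional sketch ignores.

```python
import numpy as np

k_bend = 62500.0                 # harmonic constant, K/rad^2 (TraPPE-like, assumed)
theta0 = np.deg2rad(114.0)       # equilibrium bending angle (assumed)
T = 300.0                        # temperature in K (assumed)

# Tabulate p(theta) ~ sin(theta) * exp(-U(theta)/kT) and its CDF a priori.
theta = np.linspace(1e-4, np.pi - 1e-4, 4001)
u_over_k = 0.5 * k_bend * (theta - theta0) ** 2     # U/k_B, in kelvin
w = np.sin(theta) * np.exp(-(u_over_k - u_over_k.min()) / T)
w = np.maximum(w, 1e-12)         # tiny floor keeps the CDF strictly increasing
cdf = np.cumsum(w)
cdf /= cdf[-1]

def sample_bending_angle(rng):
    """Generate a single trial per bending angle by inverting the
    tabulated CDF, so essentially every trial geometry is acceptable."""
    return np.interp(rng.random(), cdf, theta)

rng = np.random.default_rng(1)
print(np.rad2deg([sample_bending_angle(rng) for _ in range(5)]))
```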
NASA Technical Reports Server (NTRS)
Summanen, T.; Kyroelae, E.
1995-01-01
We have developed a computer code which can be used to study 3-dimensional and time-dependent effects of the solar cycle on the interplanetary (IP) hydrogen distribution. The code is based on the inverted Monte Carlo simulation. In this work we have modelled the temporal behaviour of the solar ionisation rate. We have assumed that during most of the solar cycle there is an anisotropic latitudinal structure, but right at the solar maximum the anisotropy disappears. The effects of this behaviour are discussed both in regard to the IP hydrogen distribution and the IP Lyman-α intensity.
NASA Astrophysics Data System (ADS)
David-Uraz, A.; Moffat, A. F. J.; Chené, A.-N.; MOST Collaboration
2012-12-01
The WR + O binary CV Ser has been a source of mystery since it was shown that its atmospheric eclipses change with time over decades, in addition to its sporadic dust production. However, the first high-precision time-dependent photometric observations obtained with the MOST space telescope in 2009 show two consecutive eclipses over the 29 day orbit, with varying depths. A subsequent MOST run in 2010 showed a somewhat asymmetric eclipse profile. Parallel optical spectroscopy was obtained from the Observatoire du Mont-Mégantic (2009 and 2010) and from the Dominion Astrophysical Observatory (2009).
Optimization of fiber-optic evanescent wave spectroscopy: a Monte Carlo approach.
Mann, M P; Mark, S; Raichlin, Y; Katzir, A; Mordechai, S
2009-09-01
The absorbance of the evanescent waves of infrared radiation transmitted through an optical fiber depends on the geometry of the fiber in addition to the wavelength of the electromagnetic radiation. The signal can thus be enhanced by flattening the midsection of the fiber. While the dependence of the absorbance on the thickness of the midsection has already been studied and experimented upon, we demonstrate that similar results are obtained using Monte Carlo methods based simply on geometrical optics, given the dimensions of the fiber and the power distribution of the fired rays. The optimization can be extended to fibers with more complex geometries of the sensor.
Analytic continuation of quantum Monte Carlo data by stochastic analytical inference.
Fuchs, Sebastian; Pruschke, Thomas; Jarrell, Mark
2010-05-01
We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we are able to obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as a by-product also the distribution function as a function of the regularization parameter. Our algorithm thus avoids the usual ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation.
Quantifying and Mitigating the Effect of Preferential Sampling on Phylodynamic Inference
Karcher, Michael D.; Palacios, Julia A.; Bedford, Trevor; Suchard, Marc A.; Minin, Vladimir N.
2016-01-01
Phylodynamics seeks to estimate effective population size fluctuations from molecular sequences of individuals sampled from a population of interest. One way to accomplish this task formulates an observed sequence data likelihood exploiting a coalescent model for the sampled individuals’ genealogy and then integrates over all possible genealogies via Monte Carlo or, less efficiently, conditions on one genealogy estimated from the sequence data. However, when analyzing sequences sampled serially through time, current methods implicitly assume either that sampling times are fixed deterministically by the data collection protocol or that their distribution does not depend on the size of the population. Through simulation, we first show that, when sampling times do probabilistically depend on effective population size, estimation methods may be systematically biased. To correct for this deficiency, we propose a new model that explicitly accounts for preferential sampling by modeling the sampling times as an inhomogeneous Poisson process dependent on effective population size. We demonstrate that in the presence of preferential sampling our new model not only reduces bias, but also improves estimation precision. Finally, we compare the performance of the currently used phylodynamic methods with our proposed model through clinically-relevant, seasonal human influenza examples. PMID:26938243
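The key modeling ingredient, sampling times drawn from an inhomogeneous Poisson process whose intensity tracks effective population size, can be sketched by thinning. The seasonal form of Ne(t) and the proportionality constant below are assumptions for illustration.

```python
import numpy as np

def preferential_sampling_times(Ne, t_max, beta=0.5, rng=None):
    """Draw sequence-sampling times from an inhomogeneous Poisson process
    with intensity lambda(t) = beta * Ne(t), using thinning (rejection)."""
    rng = np.random.default_rng(rng)
    grid = np.linspace(0.0, t_max, 1000)
    lam_max = beta * max(Ne(t) for t in grid)    # dominating constant rate
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)      # candidate from the bound
        if t > t_max:
            return np.array(times)
        if rng.random() < beta * Ne(t) / lam_max:
            times.append(t)                      # keep with prob lambda(t)/lam_max

# Seasonal effective population size (an assumed illustrative form):
Ne = lambda t: 50.0 + 40.0 * np.sin(2.0 * np.pi * t)
print(preferential_sampling_times(Ne, t_max=3.0, rng=0))
```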
2015-01-01
Many commonly used coarse-grained models for proteins are based on simplified interaction sites and consequently may suffer from significant limitations, such as the inability to properly model protein secondary structure without the addition of restraints. Recent work on a benzene fluid (Lettieri, S.; Zuckerman, D. M. J. Comput. Chem. 2012, 33, 268−275) suggested an alternative strategy of tabulating and smoothing fully atomistic orientation-dependent interactions among rigid molecules or fragments. Here we report our initial efforts to apply this approach to the polar and covalent interactions intrinsic to polypeptides. We divide proteins into nearly rigid fragments, construct distance and orientation-dependent tables of the atomistic interaction energies between those fragments, and apply potential energy smoothing techniques to those tables. The amount of smoothing can be adjusted to give coarse-grained models that range from the underlying atomistic force field all the way to a bead-like coarse-grained model. For a moderate amount of smoothing, the method is able to preserve about 70–90% of the α-helical structure while providing a factor of 3–10 improvement in sampling per unit computation time (depending on how sampling is measured). For a greater amount of smoothing, multiple folding–unfolding transitions of the peptide were observed, along with a factor of 10–100 improvement in sampling per unit computation time, although the time spent in the unfolded state was increased compared with less smoothed simulations. For a β hairpin, secondary structure is also preserved, albeit for a narrower range of the smoothing parameter and, consequently, for a more modest improvement in sampling. We have also applied the new method in a “resolution exchange” setting, in which each replica runs a Monte Carlo simulation with a different degree of smoothing. We obtain exchange rates that compare favorably to our previous efforts at resolution exchange (Lyman, E.; Zuckerman, D. M. J. Chem. Theory Comput. 2006, 2, 656−666). PMID:25400525
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-07
Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it in GPU-based MC simulations. In our module, each continuous region is defined by its bounding surfaces, which are parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data; the highest computational speed was achieved when the data were stored in the GPU's shared memory. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy that uses an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computation time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and from 0.69 to 1.23 times for photon-only transport.
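Particle navigation in such parameterized geometry reduces to finding the nearest positive intersection of a ray with each bounding quadric. A generic CPU sketch in Python (not the authors' GPU code) using a homogeneous-coordinate representation of the quadric, which is our assumed encoding:

```python
import numpy as np

def distance_to_quadric(p, d, Q):
    """Distance along direction d from point p to a quadric surface
    encoded as a symmetric 4x4 matrix Q acting on homogeneous coordinates
    (x^T Q x = 0); returns the smallest positive root, or inf if none."""
    ph = np.append(p, 1.0)                 # homogeneous point
    dh = np.append(d, 0.0)                 # homogeneous direction
    a = dh @ Q @ dh                        # quadratic coefficient
    b = 2.0 * ph @ Q @ dh                  # linear coefficient
    c = ph @ Q @ ph                        # constant coefficient
    if abs(a) < 1e-12:                     # surface is effectively planar along d
        return -c / b if b != 0 and -c / b > 0 else np.inf
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return np.inf                      # ray misses the quadric
    roots = [(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)]
    pos = [t for t in roots if t > 1e-9]
    return min(pos) if pos else np.inf

# Unit sphere x^2 + y^2 + z^2 - 1 = 0 as a homogeneous quadric:
Q = np.diag([1.0, 1.0, 1.0, -1.0])
print(distance_to_quadric(np.array([0.0, 0.0, -2.0]),
                          np.array([0.0, 0.0, 1.0]), Q))   # -> 1.0
```

The navigation step for a region bounded by several quadrics would take the minimum of these distances over all bounding surfaces.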
NASA Astrophysics Data System (ADS)
Olafsdottir, Kristin B.; Mudelsee, Manfred
2013-04-01
Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in the climate sciences. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that essentially performs a second bootstrap loop, resampling from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with the performance of the confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. One form of climate time series is output from numerical models which simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables when there is a 10 year lag between them, which is roughly the time it takes the Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
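The uncalibrated core of the procedure, a pairwise moving block bootstrap with a Student's t interval, can be sketched in Python as below; PearsonT3's calibration would rerun this whole procedure inside a second bootstrap loop to tune the nominal coverage. The block length and synthetic data are assumptions.

```python
import numpy as np
from scipy.stats import t as t_dist

def pairwise_mbb_corr_ci(x, y, block_len=4, n_boot=2000, alpha=0.05, rng=None):
    """Bootstrap CI for Pearson's r using a pairwise moving block
    bootstrap, which preserves the serial dependence of both series."""
    rng = np.random.default_rng(rng)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    r_hat = np.corrcoef(x, y)[0, 1]
    r_boot = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
        r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]   # same blocks for both series
    se = r_boot.std(ddof=1)                 # bootstrap standard error
    half = t_dist.ppf(1 - alpha / 2, df=n - 2) * se
    return r_hat - half, r_hat + half

# Example on short serially dependent series (assumed synthetic data):
rng = np.random.default_rng(6)
z = rng.normal(size=200)
x = np.convolve(z, [0.5, 0.5], mode="same")       # mild serial correlation
y = 0.6 * x + 0.8 * rng.normal(size=200)
print(pairwise_mbb_corr_ci(x, y))
```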
A modified Monte Carlo model for the ionospheric heating rates
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.
1972-01-01
A Monte Carlo method is adopted as a basis for the derivation of the photoelectron heat input into the ionospheric plasma. This approach is modified in an attempt to minimize the computation time. The heat input distributions are computed for arbitrarily small source elements that are spaced at distances apart corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure their individual heating rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by more than a factor of three, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.
Satake, S; Park, J-K; Sugama, H; Kanno, R
2011-07-29
Neoclassical toroidal viscosities (NTVs) in tokamaks are investigated using a δf Monte Carlo simulation, and are successfully verified with a combined analytic theory over a wide range of collisionality. A Monte Carlo simulation has been required in the study of NTV since the complexities in guiding-center orbits of particles and their collisions cannot be fully investigated by any means of analytic theories alone. Results yielded the details of the complex NTV dependency on particle precessions and collisions, which were predicted roughly in a combined analytic theory. Both numerical and analytic methods can be utilized and extended based on these successful verifications.
Modeling the Flow of Rarefied Gases at NASA
NASA Technical Reports Server (NTRS)
Forrest E. Lumpkin, III
2012-01-01
At modest temperatures, the thermal energy of atmospheric diatomic gases such as nitrogen is primarily distributed between only translational and rotational energy modes. Furthermore, these energy modes are fully excited such that the specific heat at constant volume is well approximated by the simple expression C(sub v) = 5/2 R. As a result, classical mechanics provides a suitable approximation at such temperatures of the true quantum mechanical behavior of the inter-molecular collisions of such molecules. Using classical mechanics, the transfer of energy between rotational and translation energy modes is studied. The approach of Lordi and Mates is adopted to compute the trajectories and time dependent rotational orientations and energies during the collision of two non-polar diatomic molecules. A Monte-Carlo analysis is performed collecting data from the results of many such simulations in order to estimate the rotational relaxation time. A Graphical Processing Unit (GPU) is employed to improve the performance of the Monte-Carlo analysis. A comparison of the performance of the GPU implementation to an implementation on traditional computer architecture is made. Effects of the assumed inter-molecular potential on the relaxation time are studied. The seminar will also present highlights of computational analyses performed at NASA Johnson Space Center of heat transfer in rarefied gases.
A Monte Carlo study of fluorescence generation probability in a two-layered tissue model
NASA Astrophysics Data System (ADS)
Milej, Daniel; Gerega, Anna; Wabnitz, Heidrun; Liebert, Adam
2014-03-01
It was recently reported that the time-resolved measurement of diffuse reflectance and/or fluorescence during injection of an optical contrast agent may constitute a basis for a technique to assess cerebral perfusion. In this paper, we present results of Monte Carlo simulations of the propagation of excitation photons and tracking of fluorescence photons in a two-layered tissue model mimicking intra- and extracerebral tissue compartments. Spatial 3D distributions of the probability that the photons were converted from excitation to emission wavelength in a defined voxel of the medium (generation probability) during their travel between source and detector were obtained for different optical properties in intra- and extracerebral tissue compartments. It was noted that the spatial distribution of the generation probability depends on the distribution of the fluorophore in the medium and is influenced by the absorption of the medium and of the fluorophore at excitation and emission wavelengths. Simulations were also carried out for realistic time courses of the dye concentration in both layers. The results of the study show that the knowledge of the absorption properties of the medium at excitation and emission wavelengths is essential for the interpretation of the time-resolved fluorescence signals measured on the surface of the head.
Use of Existing CAD Models for Radiation Shielding Analysis
NASA Technical Reports Server (NTRS)
Lee, K. T.; Barzilla, J. E.; Wilson, P.; Davis, A.; Zachman, J.
2015-01-01
The utility of a radiation exposure analysis depends not only on the accuracy of the underlying particle transport code, but also on the accuracy of the geometric representations of both the vehicle used as radiation shielding mass and the phantom representation of the human form. The current NASA/Space Radiation Analysis Group (SRAG) process to determine crew radiation exposure in a vehicle design incorporates both output from an analytic High Z and Energy Particle Transport (HZETRN) code and the properties (i.e., material thicknesses) of a previously processed drawing. This geometry pre-process can be time-consuming, and the results are less accurate than those determined using a Monte Carlo-based particle transport code. The current work aims to improve this process. Although several Monte Carlo programs (FLUKA, Geant4) are readily available, most use an internal geometry engine. The lack of an interface with the standard CAD formats used by the vehicle designers limits the ability of the user to communicate complex geometries. Translation of native CAD drawings into a format readable by these transport programs is time consuming and prone to error. The Direct Accelerated Geometry-United (DAGU) project is intended to provide an interface between the native vehicle or phantom CAD geometry and multiple particle transport codes to minimize problem setup, computing time and analysis error.
RNA folding kinetics using Monte Carlo and Gillespie algorithms.
Clote, Peter; Bayegan, Amir H
2018-04-01
RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny ([Formula: see text]20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to [Formula: see text] times that of the Gillespie algorithm, where [Formula: see text] denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to MFPT computed by the Gillespie algorithm multiplied by [Formula: see text]; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics, as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence; see http://bioinformatics.bc.edu/clote/RNAexpNumNbors.
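The contrast between the two algorithms can be illustrated on a toy Markov-chain landscape rather than actual RNA secondary structures. In the Python sketch below, the energies and the Metropolis rate model are assumptions; the point is that the MFPTs computed by the two algorithms need not coincide, since the chain's end states have a different degree from its interior states (a non-regular network).

```python
import numpy as np

# Toy "folding landscape": a chain of 5 states with energies E; state 4
# is the target. Moves go to nearest neighbors, with Metropolis rates.
E = np.array([0.0, 1.5, 0.5, 2.0, -1.0])
neighbors = {s: [n for n in (s - 1, s + 1) if 0 <= n < 5] for s in range(5)}
kT = 1.0

def rate(s, n):
    return min(1.0, float(np.exp(-(E[n] - E[s]) / kT)))

def mfpt_monte_carlo(rng, n_traj=2000):
    """Monte Carlo algorithm: uniform neighbor proposal, Metropolis
    acceptance; every attempted move advances time by one unit."""
    total = 0.0
    for _ in range(n_traj):
        s, t = 0, 0.0
        while s != 4:
            n = neighbors[s][rng.integers(len(neighbors[s]))]
            if rng.random() < rate(s, n):
                s = n
            t += 1.0
        total += t
    return total / n_traj

def mfpt_gillespie(rng, n_traj=2000):
    """Gillespie algorithm: exponential waiting time set by the total
    exit rate, next state chosen proportionally to its rate."""
    total = 0.0
    for _ in range(n_traj):
        s, t = 0, 0.0
        while s != 4:
            rs = np.array([rate(s, n) for n in neighbors[s]])
            t += rng.exponential(1.0 / rs.sum())
            s = neighbors[s][rng.choice(len(rs), p=rs / rs.sum())]
        total += t
    return total / n_traj

rng = np.random.default_rng(2)
print(mfpt_monte_carlo(rng), mfpt_gillespie(rng))  # differ on this non-regular chain
```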
Time-dependent radiation dose simulations during interplanetary space flights
NASA Astrophysics Data System (ADS)
Dobynde, Mikhail; Shprits, Yuri; Drozdov, Alexander; Hoffman, Jeffrey; Li, Ju
2016-07-01
Space radiation is one of the main concerns in planning long-term interplanetary human space missions. There are two main types of hazardous radiation: Solar Energetic Particles (SEP) and Galactic Cosmic Rays (GCR). Their intensities and evolution depend on the solar activity. GCR activity is most enhanced during solar minimum, while the most intense SEPs usually occur during solar maximum. SEPs are better shielded with thick shields, while the GCR dose is lower behind thin shields. The time and thickness dependences of the intensity of these two components encourage looking for a flight time window in which the radiation intensity and dose from SEP and GCR would be minimized. In this study we combine state-of-the-art space environment models with GEANT4 simulations to determine the optimal shielding, geometry of the spacecraft, and launch time with respect to the phase of the solar cycle. The radiation environment was described by the time-dependent GCR model and the SEP spectra that were measured during the period from 1990 to 2010. We included gamma rays, electrons, neutrons, and 27 fully ionized elements from hydrogen to nickel. We calculated the astronaut's radiation dose during interplanetary flights using a Monte Carlo code that accounts for the primary and the secondary radiation. We also performed sensitivity simulations for the assumed spacecraft size and thickness to find an optimal shielding. In conclusion, we present the dependence of the radiation dose on the launch date from 1990 to 2010, for flight durations of up to 3 years.
NASA Astrophysics Data System (ADS)
Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca
2014-03-01
The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed.
Catalogue identifier: AERO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html
Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
Licensing provisions: GNU General Public License version 3
No. of lines in distributed program, including test data, etc.: 83617
No. of bytes in distributed program, including test data, etc.: 1038160
Distribution format: tar.gz
Programming language: C++
Computer: Tested on several PCs and on Mac
Operating system: Linux, Mac OS X, Windows (native and cygwin)
RAM: Dependent on the input data, but usually between 1 and 10 MB
Classification: 2.5, 21.1
External routines: XrayLib (https://github.com/tschoonj/xraylib/wiki)
Nature of problem: Simulation of a wide range of X-ray imaging and spectroscopy experiments using different types of sources and detectors.
Solution method: XRMC is a versatile program that is useful for the simulation of a wide range of X-ray imaging and spectroscopy experiments. It enables the simulation of monochromatic and polychromatic X-ray sources, with unpolarised or partially/completely polarised radiation. Single-element detectors as well as two-dimensional pixel detectors can be used in the simulations, with several acquisition options. In the current version of the program, the sample is modelled by combining convex three-dimensional objects demarcated by quadric surfaces, such as planes, ellipsoids and cylinders. The Monte Carlo approach makes XRMC able to accurately simulate X-ray photon transport and interactions with matter up to any order of interaction. The differential cross-sections and all other quantities related to the interaction processes (photoelectric absorption, fluorescence emission, elastic and inelastic scattering) are computed using the xraylib software library, which is currently the most complete and up-to-date software library for X-ray parameters. The use of variance reduction techniques makes XRMC able to reduce the simulation time by several orders of magnitude compared to other general-purpose Monte Carlo simulation programs.
Running time: Dependent on the complexity of the simulation. For the examples distributed with the code, it ranges from less than 1 s to a few minutes.
Crossing trend analysis methodology and application for Turkish rainfall records
NASA Astrophysics Data System (ADS)
Şen, Zekâi
2018-01-01
Trend analyses are the necessary tools for depicting possible general increases or decreases in a given time series. There are many versions of trend identification methodologies, such as the Mann-Kendall trend test, Spearman's tau, Sen's slope, regression line, and Şen's innovative trend analysis. The literature has many papers about the use, pros and cons, and comparisons of these methodologies. In this paper, a completely new approach is proposed based on the crossing properties of a time series. It is suggested that the suitable trend through the centroid of the given time series should have the maximum number of crossings (total number of up-crossings or down-crossings). This approach is applicable whether the time series has a dependent or independent structure, and it does not depend on the type of the probability distribution function. The validity of this method is demonstrated through extensive Monte Carlo simulations and comparison with other existing trend identification methodologies. The methodology is applied to a set of annual daily extreme rainfall time series from different parts of Turkey, which have a physically independent structure.
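A compact Python sketch of the crossing criterion: scan candidate slopes for a line through the series centroid and keep the slope whose residual series crosses zero the most. The slope grid and synthetic data are assumptions for illustration.

```python
import numpy as np

def crossing_trend_slope(t, x, slopes):
    """Among candidate slopes for a line through the series centroid,
    pick the one whose detrended series has the most up-crossings."""
    t_bar, x_bar = t.mean(), x.mean()
    best_slope, best_crossings = None, -1
    for s in slopes:
        resid = x - (x_bar + s * (t - t_bar))   # detrend by a centroid line
        ups = int(np.sum((resid[:-1] < 0) & (resid[1:] >= 0)))
        if ups > best_crossings:
            best_slope, best_crossings = s, ups
    return best_slope, best_crossings

rng = np.random.default_rng(3)
t = np.arange(100.0)
x = 0.05 * t + rng.normal(size=100)             # synthetic trending series
slopes = np.linspace(-0.2, 0.2, 401)
print(crossing_trend_slope(t, x, slopes))       # recovered slope near 0.05
```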
Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo
NASA Astrophysics Data System (ADS)
Qin, Junsong; Liu, Bingyi; Niu, Dongxiao
By analyzing the factors that influence the investment capacity of a power grid, we build an investment capacity analysis model with depreciation cost, sales price and sales quantity, net profit, financing, and GDP of the secondary industry as the dependent variables. After carrying out Kolmogorov-Smirnov tests, we obtain the probability distribution of each influence factor. Finally, we obtain the uncertainty analysis results for the grid investment capacity by Monte Carlo simulation.
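The Monte Carlo step can be sketched as follows: once each influence factor has a fitted distribution, sample them jointly and read off percentiles of the modeled capacity. The distribution families, parameters, and the additive capacity model below are placeholders standing in for the paper's Kolmogorov-Smirnov-selected fits, not its actual results.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000

# One draw per factor per trial; families and parameters are assumptions.
depreciation = rng.normal(120.0, 15.0, N)
net_profit = rng.normal(80.0, 20.0, N)
financing = rng.triangular(50.0, 90.0, 140.0, N)

# A simple additive capacity model (also an assumption):
capacity = depreciation + net_profit + financing
lo, hi = np.percentile(capacity, [5, 95])
print(f"mean {capacity.mean():.1f}, 90% interval [{lo:.1f}, {hi:.1f}]")
```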
Stoica, Grigoreta M.; Stoica, Alexandru Dan; An, Ke; ...
2014-11-28
The problem of calculating the inverse pole figure (IPF) is analyzed from the perspective of the application of time-of-flight neutron diffraction to in situ monitoring of the thermomechanical behavior of engineering materials. On the basis of a quasi-Monte Carlo (QMC) method, a consistent set of grain orientations is generated and used to compute the weighting factors for IPF normalization. The weighting factors are instrument dependent and were calculated for the engineering materials diffractometer VULCAN (Spallation Neutron Source, Oak Ridge National Laboratory). The QMC method is applied to face-centered cubic structures and can be easily extended to other crystallographic symmetries. Examples include 316LN stainless steel loaded in situ in tension at room temperature and an Al–2%Mg alloy, substantially deformed by cold rolling and annealed in situ up to 653 K.
Corrected implicit Monte Carlo
NASA Astrophysics Data System (ADS)
Cleveland, M. A.; Wollaber, A. B.
2018-04-01
In this work we develop a set of nonlinear correction equations to enforce a consistent time-implicit emission temperature for the original semi-implicit IMC equations. We present two possible forms of correction equations: one results in a set of non-linear, zero-dimensional, non-negative, explicit correction equations, and the other results in a non-linear, non-negative Boltzmann transport correction equation. The zero-dimensional correction equations adhere to the maximum principle for the material temperature, regardless of frequency dependence, but do not prevent maximum principle violations in the photon intensity, eventually leading to material overheating. The Boltzmann transport correction guarantees adherence to the maximum principle for frequency-independent simulations, at the cost of evaluating a reduced-source non-linear Boltzmann equation. We present numerical evidence suggesting that the Boltzmann transport correction, in its current form, significantly improves time step limitations but does not guarantee adherence to the maximum principle for frequency-dependent simulations.
NASA Astrophysics Data System (ADS)
Prudnikov, V. V.; Prudnikov, P. V.; Popov, I. S.
2018-03-01
A Monte Carlo numerical simulation of the specific features of nonequilibrium critical behavior is carried out for the two-dimensional structurally disordered XY model during its evolution from a low-temperature initial state. On the basis of the analysis of the two-time dependence of autocorrelation functions and dynamic susceptibility for systems with spin concentrations of p = 1.0, 0.9, and 0.6, aging phenomena characterized by a slowing down of the relaxation system with increasing waiting time and the violation of the fluctuation-dissipation theorem (FDT) are revealed. The values of the universal limiting fluctuation-dissipation ratio (FDR) are obtained for the systems considered. As a result of the analysis of the two-time scaling dependence for spin-spin and connected spin autocorrelation functions, it is found that structural defects lead to subaging phenomena in the behavior of the spin-spin autocorrelation function and superaging phenomena in the behavior of the connected spin autocorrelation function.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, J. Y. Y.; Aczel, Adam A; Abernathy, Douglas L
2014-01-01
Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A.A. Aczel et al., Nature Communications 3, 1124 (2012)]. These modes are well described by 3D isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for the nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states (PDOS), and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximately Q-independent background to the spectrum at the oscillator mode positions. Temperature dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T-dependence of the scattering from these modes is strongly influenced by the uranium lattice.
Yeo, Sang Chul; Lo, Yu Chieh; Li, Ju; Lee, Hyuck Mo
2014-10-07
Ammonia (NH3) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers (Eb) of the relevant elementary processes. The full mechanism of the exact reaction path was divided into five steps (adsorption, dissociation, surface migration, penetration, and diffusion) on an Fe (100) surface pre-covered with nitrogen. The energy barrier (Eb) depended on the N surface coverage. The DFT results were subsequently employed as a database for the kMC simulations. We then evaluated the NH3 nitridation rate on the N pre-covered Fe surface. To determine the conditions necessary for a rapid NH3 nitridation rate, the eight reaction events were considered in the kMC simulations: adsorption, desorption, dissociation, reverse dissociation, surface migration, penetration, reverse penetration, and diffusion. This study provides a real-time-scale simulation of NH3 nitridation influenced by nitrogen surface coverage that allowed us to theoretically determine a nitrogen coverage (0.56 ML) suitable for rapid NH3 nitridation. In this way, we were able to reveal the coverage dependence of the nitridation reaction using the combined DFT and kMC simulations.
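The event-driven structure of such a kinetic Monte Carlo simulation can be sketched compactly. The Python fragment below implements a generic rejection-free (Gillespie-type) kMC loop with Arrhenius rates; the event list, barrier values, temperature, and attempt frequency are illustrative placeholders, not the DFT-derived values of the study.

```python
import math
import random

# Minimal rejection-free (Gillespie-type) kinetic Monte Carlo loop.
KB_EV = 8.617e-5        # Boltzmann constant in eV/K
T = 700.0               # temperature in K (assumed)
NU = 1.0e13             # attempt frequency in 1/s (assumed)

events = {               # hypothetical barriers (eV) for surface processes
    "adsorption":   0.10,
    "dissociation": 0.90,
    "migration":    0.45,
    "penetration":  1.20,
    "diffusion":    0.60,
}

def rate(eb):
    """Arrhenius rate for a barrier eb (eV)."""
    return NU * math.exp(-eb / (KB_EV * T))

t = 0.0
counts = {name: 0 for name in events}
for _ in range(100_000):
    rates = {name: rate(eb) for name, eb in events.items()}
    total = sum(rates.values())
    # Pick an event with probability proportional to its rate.
    r = random.random() * total
    acc = 0.0
    for name, k in rates.items():
        acc += k
        if r <= acc:
            counts[name] += 1
            break
    # Advance the clock by an exponentially distributed waiting time.
    t += -math.log(1.0 - random.random()) / total

print(f"simulated time: {t:.3e} s")
print(counts)
```

In a full lattice kMC the rates would also depend on the local nitrogen coverage, which is what produces the coverage dependence reported in the abstract.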
Modeling the frequency-dependent detective quantum efficiency of photon-counting x-ray detectors.
Stierstorfer, Karl
2018-01-01
To find a simple model for the frequency-dependent detective quantum efficiency (DQE) of photon-counting detectors in the low flux limit. Formulas for the spatial cross-talk, the noise power spectrum and the DQE of a photon-counting detector working at a given threshold are derived. Parameters are probabilities for types of events, such as single counts in the central pixel, double counts in the central pixel and a neighboring pixel, or a single count in a neighboring pixel only. These probabilities can be derived in a simple model by extensive use of Monte Carlo techniques: the Monte Carlo x-ray propagation program MOCASSIM is used to simulate the energy deposition from the x-rays in the detector material. A simple charge cloud model using Gaussian clouds of fixed width is used for the propagation of the electric charge generated by the primary interactions. Both stages are combined in a Monte Carlo simulation randomizing the location of impact, which finally produces the required probabilities. The parameters of the charge cloud model are fitted to the spectral response to a polychromatic spectrum measured with our prototype detector. Based on the Monte Carlo model, the DQE of photon-counting detectors as a function of spatial frequency is calculated for various pixel sizes, photon energies, and thresholds. The frequency-dependent DQE of a photon-counting detector in the low flux limit can be described with an equation containing only a small set of probabilities as input. Estimates for the probabilities can be derived from a simple model of the detector physics. © 2017 American Association of Physicists in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atayan, M.R.; Gulkanyan, H.; Bai Yuting
The rapidity, azimuthal and multiplicity dependences of the mean transverse momentum and of transverse momentum correlations of charged particles are studied in π⁺p and K⁺p collisions at 250 GeV/c incident beam momentum. For the first time, it is found that the rapidity dependence of the two-particle transverse momentum correlation is different from that of the mean transverse momentum, but both have a similar multiplicity dependence. In particular, the transverse momentum correlations are boost invariant. This is similar to the recently found boost invariance of the charge balance function. A strong azimuthal dependence of the transverse momentum correlations originates from the constraint of energy-momentum conservation. The results are compared with those from the PYTHIA Monte Carlo generator. The similarities to and differences with the results from current heavy ion experiments are discussed.
Rapid Monte Carlo Simulation of Gravitational Wave Galaxies
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2015-01-01
With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.
KEWPIE: A dynamical cascade code for decaying excited compound nuclei
NASA Astrophysics Data System (ADS)
Bouriquet, Bertrand; Abe, Yasuhisa; Boilley, David
2004-05-01
A new dynamical cascade code for decaying hot nuclei is proposed and specially adapted to the synthesis of super-heavy nuclei. In such a case, the interesting channel is the tiny fraction that decays through particle emission, so the code avoids classical Monte-Carlo methods and proposes a new numerical scheme. The time dependence is explicitly taken into account in order to cope with the fact that the fission decay rate might not be constant. The code allows one to evaluate both statistical and dynamical observables. Results are successfully compared to experimental data.
Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.
Bexelius, Tobias; Sohlberg, Antti
2018-06-01
Statistical SPECT reconstruction can be very time-consuming, especially when compensations for collimator and detector response, attenuation, and scatter are included in the reconstruction. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. An ordered subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The difference in scatter-to-primary ratios, visual appearance, and SUVs between the GPU and CPU implementations was minor. On the other hand, at its best, the GPU implementation was 24 times faster than the multi-threaded CPU version on a normal 128 × 128 matrix size, 3-bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstruction shows great promise as an everyday clinical reconstruction tool.
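The core of OSEM is a multiplicative update applied subset by subset. A minimal NumPy sketch follows; the toy random system matrix stands in for the attenuation, collimator-detector response, and scatter models of a real SPECT reconstruction.

```python
import numpy as np

# Minimal ordered-subset expectation maximization (OSEM) sketch.
rng = np.random.default_rng(0)
n_proj, n_vox, n_subsets = 64, 32, 4

A = rng.uniform(0.0, 1.0, size=(n_proj, n_vox))   # toy system matrix
x_true = rng.uniform(0.5, 2.0, size=n_vox)
y = rng.poisson(A @ x_true).astype(float)         # noisy projection data

x = np.ones(n_vox)                                # uniform initial estimate
subsets = np.array_split(np.arange(n_proj), n_subsets)
for it in range(10):
    for idx in subsets:
        As = A[idx]
        ratio = y[idx] / np.maximum(As @ x, 1e-12)
        # Multiplicative EM update restricted to this subset,
        # normalized by the subset sensitivity image.
        x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The GPU speedup reported in the abstract comes from parallelizing exactly these forward- and back-projection products, which dominate the cost per update.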
Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.
Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A
2011-01-01
Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
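A pragmatic batch-based stopping rule of the kind described can be sketched as follows. The confidence-interval criterion and the placeholder model below are illustrative assumptions, not the exact combination of criteria proposed by the authors.

```python
import numpy as np

# Batch-wise Monte Carlo with a simple convergence stopping rule:
# stop when the 95% confidence half-width of the running mean falls
# below a relative tolerance.
rng = np.random.default_rng(1)

def model(params):
    # Placeholder for an expensive wastewater-treatment simulation.
    return params[0] ** 2 + 0.5 * params[1]

tol, batch, max_runs = 0.01, 50, 20_000
outputs = []
converged = False
while not converged and len(outputs) < max_runs:
    outputs.extend(model(rng.uniform(0.0, 1.0, size=2)) for _ in range(batch))
    n = len(outputs)
    mean = np.mean(outputs)
    half_width = 1.96 * np.std(outputs, ddof=1) / np.sqrt(n)
    converged = half_width < tol * abs(mean)

print(f"runs: {len(outputs)}, mean: {np.mean(outputs):.4f}, "
      f"95% half-width: {half_width:.4f}")
```

As the abstract notes, the number of runs at which such a rule fires depends strongly on which model output is monitored and on the tolerance chosen.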
Comparison of shock structure solutions using independent continuum and kinetic theory approaches
NASA Technical Reports Server (NTRS)
Fiscko, Kurt A.; Chapman, Dean R.
1988-01-01
A vehicle traversing the atmosphere will experience flight regimes at high altitudes in which the thickness of a hypersonic shock wave is not small compared to the shock standoff distance from the hard body. When this occurs, it is essential to compute accurate flow field solutions within the shock structure. In this paper, one-dimensional shock structure is investigated for various monatomic gases from Mach 1.4 to Mach 35. Kinetic theory solutions are computed using the Direct Simulation Monte Carlo method. Steady-state solutions of the Navier-Stokes equations and of a slightly truncated form of the Burnett equations are determined by relaxation to a steady state of the time-dependent continuum equations. Monte Carlo results are in excellent agreement with published experimental data and are used as bases of comparison for continuum solutions. For a Maxwellian gas, the truncated Burnett equations are shown to produce far more accurate solutions of shock structure than the Navier-Stokes equations.
Chemical application of diffusion quantum Monte Carlo
NASA Technical Reports Server (NTRS)
Reynolds, P. J.; Lester, W. A., Jr.
1984-01-01
The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, are discussed. The computational time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and from traditional computer architectures.
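The basic diffusion QMC loop (diffuse, branch, adjust a reference energy) is compact enough to sketch. The toy below treats the 1D harmonic oscillator, whose exact ground-state energy is 0.5 in reduced units; the population-control scheme is a common simplification of my own choosing, and the sketch omits the importance sampling and fixed-node machinery needed for molecules such as CH2.

```python
import numpy as np

# Minimal diffusion Monte Carlo (DMC) sketch for the 1D harmonic oscillator.
rng = np.random.default_rng(11)
n_target, dt, n_steps = 2000, 0.01, 4000

x = rng.standard_normal(n_target)   # initial walker positions
e_ref = 0.5                         # initial reference energy guess
energies = []
for step in range(n_steps):
    x = x + np.sqrt(dt) * rng.standard_normal(x.size)   # diffusion step
    v = 0.5 * x ** 2                                    # harmonic potential
    w = np.exp(-dt * (v - e_ref))                       # branching weights
    mult = (w + rng.random(x.size)).astype(int)         # stochastic rounding
    x = np.repeat(x, mult)                              # replicate/kill walkers
    # Population control: steer e_ref so the walker count stays near target.
    e_ref = v.mean() + 0.5 * (1.0 - x.size / n_target)
    if step > n_steps // 2:
        energies.append(e_ref)

print(f"DMC ground-state energy ≈ {np.mean(energies):.3f} (exact 0.5)")
```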
NASA Astrophysics Data System (ADS)
Lin, J. Y. Y.; Aczel, A. A.; Abernathy, D. L.; Nagler, S. E.; Buyers, W. J. L.; Granroth, G. E.
2014-03-01
Recently neutron spectroscopy measurements, using the ARCS and SEQUOIA time-of-flight chopper spectrometers, observed an extended series of equally spaced modes in UN that are well described by quantum harmonic oscillator behavior of the N atoms. Additional contributions to the scattering are also observed. Monte Carlo ray tracing simulations with various sample kernels have allowed us to distinguish between the response from the N oscillator scattering, contributions that arise from the U partial phonon density of states (PDOS), and all forms of multiple scattering. These simulations confirm that multiple scattering contributes an ~Q-independent background to the spectrum at the oscillator mode positions. All three of the aforementioned contributions are necessary to accurately model the experimental data. These simulations were also used to compare the T dependence of the oscillator modes in SEQUOIA data to that predicted by the binary solid model. This work was sponsored by the Scientific User Facilities Division, Office of Basic Energy Sciences, U.S. Department of Energy.
Hypersonic shock structure with Burnett terms in the viscous stress and heat flux
NASA Technical Reports Server (NTRS)
Chapman, Dean R.; Fiscko, Kurt A.
1988-01-01
The continuum Navier-Stokes and Burnett equations are solved for one-dimensional shock structure in various monatomic gases. A new numerical method is employed which utilizes the complete time-dependent continuum equations and obtains the steady-state shock structure by allowing the system to relax from arbitrary initial conditions. Included is discussion of numerical difficulties encountered when solving the Burnett equations. Continuum solutions are compared to those obtained utilizing the Direct Simulation Monte Carlo method. Shock solutions are obtained for a hard sphere gas and for argon from Mach 1.3 to Mach 50. Solutions for a Maxwellian gas are obtained from Mach 1.3 to Mach 3.8. It is shown that the Burnett equations yield shock structure solutions in much closer agreement to both Monte Carlo and experimental results than do the Navier-Stokes equations. Shock density thickness, density asymmetry, and density-temperature separation are all more accurately predicted by the Burnett equations than by the Navier-Stokes equations.
Dynamic Monte Carlo simulations of radiatively accelerated GRB fireballs
NASA Astrophysics Data System (ADS)
Chhotray, Atul; Lazzati, Davide
2018-05-01
We present a novel Dynamic Monte Carlo code (DynaMo code) that self-consistently simulates the Compton-scattering-driven dynamic evolution of a plasma. We use the DynaMo code to investigate the time-dependent expansion and acceleration of dissipationless gamma-ray burst fireballs by varying their initial opacities and baryonic content. We study the opacity and energy density evolution of an initially optically thick, radiation-dominated fireball across its entire phase space, in particular during the R_ph < R_sat regime. Our results reveal new phases of fireball evolution: a transition phase with a radial extent of several orders of magnitude, in which the fireball transitions from Γ ∝ R to Γ ∝ R⁰; a post-photospheric acceleration phase, where fireballs accelerate beyond the photosphere; and a Thomson-dominated acceleration phase, characterized by slow acceleration of optically thick, matter-dominated fireballs due to Thomson scattering. We quantify the new phases by providing analytical expressions for the Lorentz factor evolution, which will be useful for deriving jet parameters.
Benchmark solution for the Spencer-Lewis equation of electron transport theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.
As integrated circuits become smaller, the shielding of these sensitive components against penetrating electrons becomes extremely critical. Monte Carlo methods have traditionally been the method of choice in shielding evaluations, primarily because they can incorporate a wide variety of relevant physical processes. Recently, however, as a result of a more accurate numerical representation of the highly forward-peaked scattering process, S_n methods for one-dimensional problems have been shown to be at least as cost-effective in comparison with Monte Carlo methods. With the development of these deterministic methods for electron transport, a need has arisen to assess the accuracy of proposed numerical algorithms and to ensure their proper coding. It is the purpose of this presentation to develop a benchmark to the Spencer-Lewis equation describing the transport of energetic electrons in solids. The solution will take advantage of the correspondence between the Spencer-Lewis equation and the transport equation describing one-group time-dependent neutron transport.
Monte Carlo calculations for reporting patient organ doses from interventional radiology
NASA Astrophysics Data System (ADS)
Huo, Wanli; Feng, Mang; Pi, Yifei; Chen, Zhi; Gao, Yiming; Xu, X. George
2017-09-01
This paper describes a project to generate organ dose data for the purposes of extending the VirtualDose software from CT imaging to interventional radiology (IR) applications. A library of 23 mesh-based anthropometric patient phantoms was used in Monte Carlo simulations for the database calculations. Organ doses and effective doses of IR procedures with specific beam projection, field of view (FOV) and beam quality for all parts of the body were obtained. Comparing organ doses generated by VirtualDose-IR for different beam qualities, beam projections, patient ages and patient body mass indexes (BMIs), significant discrepancies were observed. For relatively long exposure times, IR doses depend on beam quality, beam direction and patient size. Therefore, VirtualDose-IR, which is based on the latest anatomically realistic patient phantoms, can generate accurate doses for IR treatment. It is suitable to apply this software in clinical IR dose management as an effective tool to estimate patient doses and optimize IR treatment plans.
Kinetic Monte Carlo simulation of nanoparticle film formation via nanocolloid drying
NASA Astrophysics Data System (ADS)
Kameya, Yuki
2017-06-01
A kinetic Monte Carlo simulation of nanoparticle film formation via nanocolloid drying is presented. The proposed two-dimensional model addresses the dynamics of nanoparticles in the vertical plane of a drying nanocolloid film. The gas-liquid interface movement due to solvent evaporation was controlled by a time-dependent chemical potential, and the resultant particle dynamics including Brownian diffusion and aggregate growth were calculated. Simulations were performed at various Peclet numbers defined based on the rate ratio of solvent evaporation and nanoparticle diffusion. At high Peclet numbers, nanoparticles accumulated at the top layer of the liquid film and eventually formed a skin layer, causing the formation of a particulate film with a densely packed structure. At low Peclet numbers, enhanced particle diffusion led to significant particle aggregation in the bulk colloid, and the resulting film structure became highly porous. The simulated results showed some typical characteristics of a drying nanocolloid that had been reported experimentally. Finally, the potential of the model as well as the remaining challenges are discussed.
NASA Astrophysics Data System (ADS)
Patrone, Paul; Einstein, T. L.; Margetis, Dionisios
2011-03-01
We study a 1+1D, stochastic, Burton-Cabrera-Frank (BCF) model of interacting steps fluctuating on a vicinal crystal. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. Our goal is to formulate and validate a self-consistent mean-field (MF) formalism to approximately solve the system of coupled, nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. We derive formulas for the time-dependent terrace width distribution (TWD) and its steady-state limit. By comparison with kinetic Monte-Carlo simulations, we show that our MF formalism improves upon models in which step interactions are linearized. We also indicate how fitting parameters of our steady state MF TWD may be used to determine the mass transport regime and step interaction energy of certain experimental systems. PP and TLE supported by NSF MRSEC under Grant DMR 05-20471 at U. of Maryland; DM supported by NSF under Grant DMS 08-47587.
The consensus in the two-feature two-state one-dimensional Axelrod model revisited
NASA Astrophysics Data System (ADS)
Biral, Elias J. P.; Tilles, Paulo F. C.; Fontanari, José F.
2015-04-01
The Axelrod model for the dissemination of culture exhibits a rich spatial distribution of cultural domains, which depends on the values of the two model parameters: F, the number of cultural features, and q, the common number of states each feature can assume. In the one-dimensional model with F = q = 2, which is closely related to the constrained voter model, Monte Carlo simulations indicate the existence of multicultural absorbing configurations in which at least one macroscopic domain coexists with a multitude of microscopic ones in the thermodynamic limit. However, rigorous analytical results for the infinite system starting from the configuration where all cultures are equally likely show convergence to only monocultural or consensus configurations. Here we show that this disagreement is due simply to the order in which the time-asymptotic limit and the thermodynamic limit are taken in the simulations. In addition, we show how the consensus-only result can be derived using Monte Carlo simulations of finite chains.
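The F = q = 2 dynamics on a finite chain are straightforward to reproduce. A minimal sketch follows, assuming free boundaries and the standard Axelrod interaction rule (interact with probability equal to the fraction of shared features); it runs the chain to an absorbing configuration and counts the surviving cultural domains.

```python
import random

# Monte Carlo simulation of the one-dimensional Axelrod model, F = q = 2.
F, Q, L = 2, 2, 100
random.seed(2)
culture = [[random.randrange(Q) for _ in range(F)] for _ in range(L)]

def shared(i, j):
    return sum(a == b for a, b in zip(culture[i], culture[j]))

def active():
    # An edge is active if the neighbors share some, but not all, features.
    return any(0 < shared(i, i + 1) < F for i in range(L - 1))

while active():
    i = random.randrange(L)
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < L]
    j = random.choice(nbrs)
    s = shared(i, j)
    if 0 < s < F and random.random() < s / F:
        # Copy one of the features on which the pair disagrees.
        k = random.choice([f for f in range(F) if culture[i][f] != culture[j][f]])
        culture[i][k] = culture[j][k]

n_domains = 1 + sum(culture[i] != culture[i + 1] for i in range(L - 1))
print("cultural domains in absorbing state:", n_domains)
```

Averaging the domain count over many seeds and chain lengths is exactly the kind of finite-chain experiment the abstract uses to reconcile the two limits.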
A study of two statistical methods as applied to shuttle solid rocket booster expenditures
NASA Technical Reports Server (NTRS)
Perlmutter, M.; Huang, Y.; Graves, M.
1974-01-01
The state probability technique and the Monte Carlo technique are applied to finding shuttle solid rocket booster expenditure statistics. For a given attrition rate per launch, the probable number of boosters needed for a given mission of 440 launches is calculated. Several cases are considered, including the elimination of the booster after a maximum of 20 consecutive launches. Also considered is the case where the booster is composed of replaceable components with independent attrition rates. A simple cost analysis is carried out to indicate the number of boosters to build initially, depending on booster costs. Two statistical methods were applied in the analysis: (1) state probability method which consists of defining an appropriate state space for the outcome of the random trials, and (2) model simulation method or the Monte Carlo technique. It was found that the model simulation method was easier to formulate while the state probability method required less computing time and was more accurate.
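The Monte Carlo side of such an expenditure study reduces to repeated simulation of launch attrition. A minimal sketch follows, assuming one booster in service per launch and a placeholder attrition probability; the actual study's rates and booster configuration differ.

```python
import random

# Monte Carlo sketch of booster expenditure over a fixed launch campaign.
# Each launch loses the booster with probability P_ATTRITION; surviving
# boosters are retired after MAX_USES consecutive launches.
random.seed(3)
P_ATTRITION, MAX_USES, N_LAUNCHES, N_TRIALS = 0.02, 20, 440, 10_000

totals = []
for _ in range(N_TRIALS):
    boosters_built, uses = 1, 0
    for _ in range(N_LAUNCHES):
        uses += 1
        if random.random() < P_ATTRITION or uses == MAX_USES:
            boosters_built += 1     # current booster lost or retired
            uses = 0
    totals.append(boosters_built)

mean = sum(totals) / N_TRIALS
print(f"mean boosters needed: {mean:.1f}")
print("95th percentile:", sorted(totals)[int(0.95 * N_TRIALS)])
```

The state-probability method mentioned in the abstract computes the same distribution exactly by enumerating outcomes, which is why it is more accurate and faster once formulated.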
NASA Astrophysics Data System (ADS)
Allured, Ryan; Okajima, Takashi; Soufli, Regina; Fernández-Perea, Mónica; Daly, Ryan O.; Marlowe, Hannah; Griffiths, Scott T.; Pivovaroff, Michael J.; Kaaret, Philip
2012-10-01
The Bragg Reflection Polarimeter (BRP) on the NASA Gravity and Extreme Magnetism Small Explorer Mission is designed to measure the linear polarization of astrophysical sources in a narrow band centered at about 500 eV. X-rays are focused by Wolter I mirrors through a 4.5 m focal length to a time projection chamber (TPC) polarimeter, sensitive between 2 and 10 keV. In this optical path lies the BRP multilayer reflector at a nominal 45 degree incidence angle. The reflector reflects soft X-rays to the BRP detector and transmits hard X-rays to the TPC. As the spacecraft rotates about the optical axis, the reflected count rate will vary depending on the polarization of the incident beam. However, false polarization signals may be produced due to misalignments and spacecraft pointing wobble. Monte-Carlo simulations have been carried out, showing that the false modulation is below the statistical uncertainties for the expected focal plane offsets of < 2 mm.
Automatic detection of key innovations, rate shifts, and diversity-dependence on phylogenetic trees.
Rabosky, Daniel L
2014-01-01
A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov Chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes.
Automatic Detection of Key Innovations, Rate Shifts, and Diversity-Dependence on Phylogenetic Trees
Rabosky, Daniel L.
2014-01-01
A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov Chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes. PMID:24586858
Wang, Lei; Troyer, Matthias
2014-09-12
We present a new algorithm for calculating the Renyi entanglement entropy of interacting fermions using the continuous-time quantum Monte Carlo method. The algorithm only samples the interaction correction of the entanglement entropy, which by design ensures the efficient calculation of weakly interacting systems. Combined with Monte Carlo reweighting, the algorithm also performs well for systems with strong interactions. We demonstrate the potential of this method by studying the quantum entanglement signatures of the charge-density-wave transition of interacting fermions on a square lattice.
Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G
2006-01-01
The present work has simulated the photon and electron transport in a Theratron 780 (MDS Nordion) ⁶⁰Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. In order to become computationally more efficient, with a view to practical use in radiotherapy treatment planning, this work focuses mainly on the analysis of dose results and on the computing time required by the different tallies applied in the model to speed up calculations.
MontePython 3: Parameter inference code for cosmology
NASA Astrophysics Data System (ADS)
Brinckmann, Thejs; Lesgourgues, Julien; Audren, Benjamin; Benabed, Karim; Prunet, Simon
2018-05-01
MontePython 3 provides numerous ways to explore parameter space using Markov Chain Monte Carlo (MCMC) sampling, including Metropolis-Hastings, Nested Sampling, Cosmo Hammer, and a Fisher sampling method. This improved version of the Monte Python (ascl:1307.002) parameter inference code for cosmology offers new ingredients that improve the performance of Metropolis-Hastings sampling, speeding up convergence and offering significant time savings in difficult runs. Additional likelihoods and plotting options are available, as are post-processing algorithms such as importance sampling and adding derived parameters.
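At the heart of such samplers is the random-walk Metropolis-Hastings accept/reject step. A minimal sketch follows, targeting a toy 2D Gaussian posterior as a stand-in for a cosmological likelihood; it is illustrative only, not MontePython's implementation.

```python
import numpy as np

# Minimal random-walk Metropolis-Hastings sampler.
rng = np.random.default_rng(4)

def log_post(theta):
    return -0.5 * np.sum(theta ** 2)        # standard normal target

n_steps, step = 50_000, 0.8
chain = np.empty((n_steps, 2))
theta = np.zeros(2)
lp = log_post(theta)
accepted = 0
for i in range(n_steps):
    prop = theta + step * rng.standard_normal(2)
    lp_prop = log_post(prop)
    # Accept with probability min(1, posterior ratio).
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
        accepted += 1
    chain[i] = theta

print(f"acceptance rate: {accepted / n_steps:.2f}")
print("posterior mean:", chain[n_steps // 2:].mean(axis=0))
```

The performance ingredients the abstract mentions (proposal covariance adaptation, convergence diagnostics) all wrap around this same accept/reject core.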
Sousa, João Miguel; Ferreira, António Luís; Fagg, Duncan Paul; Titus, Elby; Krishna, Rahul; Gracio, José
2012-08-01
Grand canonical Monte Carlo simulations of hydrogen adsorption in zeolite NaA were carried out for a wide range of temperatures between 77 and 300 K and pressures up to 180 MPa. A potential model was used that comprised three main interactions: van der Waals, coulombic and induced polarization by the electric field in the system. The computed average number of adsorbed molecules per unit cell was compared with available results and found to be in agreement in the regime of moderate to high pressures. The particle insertion method was used to calculate the Henry coefficient for this model and its dependence on temperature.
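The particle (Widom) insertion estimate of the Henry coefficient averages Boltzmann factors over trial insertions, K_H ≈ β⟨exp(−βΔU)⟩. A sketch follows with a generic frozen Lennard-Jones host in reduced units; the host configuration and parameters are assumptions, not the zeolite NaA force field of the paper.

```python
import numpy as np

# Widom test-particle insertion sketch for a Henry coefficient.
rng = np.random.default_rng(5)
L_BOX, N_HOST, BETA = 20.0, 100, 1.0     # reduced units (assumed)
EPS, SIG = 0.5, 1.0                      # Lennard-Jones parameters (assumed)

host = rng.uniform(0.0, L_BOX, size=(N_HOST, 3))   # frozen host atoms

def insertion_energy(pos):
    d = host - pos
    d -= L_BOX * np.round(d / L_BOX)     # minimum-image convention
    r2 = np.sum(d * d, axis=1)
    sr6 = (SIG * SIG / r2) ** 3
    return np.sum(4.0 * EPS * (sr6 * sr6 - sr6))

n_insert = 20_000
boltz = [np.exp(-BETA * insertion_energy(rng.uniform(0.0, L_BOX, 3)))
         for _ in range(n_insert)]
henry = BETA * np.mean(boltz)
print(f"Henry coefficient (reduced units): {henry:.4f}")
```

Repeating the average at several temperatures gives the temperature dependence of K_H that the abstract reports.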
Commissioning of a Varian Clinac iX 6 MV photon beam using Monte Carlo simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dirgayussa, I Gde Eka, E-mail: ekadirgayussa@gmail.com; Yani, Sitti; Haryanto, Freddy, E-mail: freddy@fi.itb.ac.id
2015-09-30
Monte Carlo modelling of a linear accelerator is the first and most important step in Monte Carlo dose calculations in radiotherapy. Monte Carlo is considered today to be the most accurate and detailed calculation method in different fields of medical physics. In this research, we developed a photon beam model for a Varian Clinac iX 6 MV equipped with a Millennium MLC120 for dose calculation purposes, using the BEAMnrc/DOSXYZnrc Monte Carlo system based on the underlying EGSnrc particle transport code. The Monte Carlo simulation for commissioning this LINAC head was divided into two stages: designing the head model using BEAMnrc and characterizing it using BEAMDP, followed by analyzing the difference between simulation and measurement data using DOSXYZnrc. In the first step, to reduce simulation time, a virtual treatment head was built in two parts (a patient-dependent component and a patient-independent component). The incident electron energy was varied over 6.1, 6.2, 6.3, 6.4, and 6.6 MeV, and the FWHM (full width at half maximum) of the source was 1 mm. The phase-space file from the virtual model was characterized using BEAMDP. The results of the MC calculations using DOSXYZnrc in a water phantom, percent depth doses (PDDs) and beam profiles at a depth of 10 cm, were compared with measurements. The commissioning was considered complete when the dose difference between measured and calculated relative depth-dose data along the central axis and dose profiles at a depth of 10 cm was ≤ 5%. The effect of beam width on percentage depth doses and beam profiles was studied. Results of the virtual model were in close agreement with measurements at an incident electron energy of 6.4 MeV. Our results showed that the photon beam width could be tuned using the large-field beam profile at the depth of maximum dose. The Monte Carlo model developed in this study accurately represents the Varian Clinac iX with the Millennium MLC 120 leaf collimator and can be used for reliable patient dose calculations. In this commissioning process, the dose-difference criteria for PDDs and dose profiles were achieved using an incident electron energy of 6.4 MeV.
Event-chain Monte Carlo algorithms for three- and many-particle interactions
NASA Astrophysics Data System (ADS)
Harland, J.; Michel, M.; Kampmann, T. A.; Kierfeld, J.
2017-02-01
We generalize the rejection-free event-chain Monte Carlo algorithm from many-particle systems with pairwise interactions to systems with arbitrary three- or many-particle interactions. We introduce generalized lifting probabilities between particles and obtain a general set of equations for lifting probabilities, the solution of which guarantees maximal global balance. We validate the resulting three-particle event-chain Monte Carlo algorithms on three different systems by comparison with conventional local Monte Carlo simulations: i) a test system of three particles with a three-particle interaction that depends on the enclosed triangle area; ii) a hard-needle system in two dimensions, where needle interactions constitute three-particle interactions of the needle end points; iii) a semiflexible polymer chain with a bending energy, which constitutes a three-particle interaction of neighboring chain beads. The examples demonstrate that the generalization to many-particle interactions broadens the applicability of event-chain algorithms considerably.
Earl, David J; Deem, Michael W
2005-04-14
Adaptive Monte Carlo methods can be viewed as implementations of Markov chains with infinite memory. We derive a general condition for the convergence of a Monte Carlo method whose history dependence is contained within the simulated density distribution. In convergent cases, our result implies that the balance condition need only be satisfied asymptotically. As an example, we show that the adaptive integration method converges.
Multi-fidelity methods for uncertainty quantification in transport problems
NASA Astrophysics Data System (ADS)
Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.
2016-12-01
We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, the re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods, and discuss the advantages of each approach.
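The telescoping idea behind such multi-fidelity estimators, E[Q_hi] = E[Q_lo] + E[Q_hi − Q_lo], is easy to illustrate with two levels. The synthetic models below are stand-ins for low- and high-resolution transport solvers, not the methods compared in the abstract.

```python
import numpy as np

# Two-level Monte Carlo sketch: many cheap low-fidelity samples plus a
# few expensive paired high-fidelity corrections.
rng = np.random.default_rng(6)

def q_hi(x):           # "high-fidelity" quantity of interest
    return np.sin(x) + 0.05 * x ** 2

def q_lo(x):           # cheaper, biased "low-fidelity" approximation
    return x - x ** 3 / 6 + 0.05 * x ** 2

n_lo, n_hi = 100_000, 500
x_lo = rng.normal(size=n_lo)
x_hi = rng.normal(size=n_hi)

# Level 0: low-fidelity mean. Level 1: correction from paired samples.
estimate = q_lo(x_lo).mean() + (q_hi(x_hi) - q_lo(x_hi)).mean()
reference = q_hi(rng.normal(size=2_000_000)).mean()
print(f"two-level estimate: {estimate:.5f}, brute-force MC: {reference:.5f}")
```

The correction term has a much smaller variance than Q_hi itself, which is why only a handful of high-fidelity runs are needed.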
Tao, Guohua; Miller, William H
2012-09-28
An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points as well as their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor (which is computationally expensive, especially for large systems) is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H2 system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.
Burst wait time simulation of CALIBAN reactor at delayed super-critical state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, P.; Authier, N.; Richard, B.
2012-07-01
In the past, the super-prompt-critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time-dependent evolution of the full neutron count number probability distribution. In this paper we present the point-model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time-dependent adjoint Kolmogorov master equations for the number of detections, using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and to Monte-Carlo calculations based on the algorithm presented in [7]. (authors)
Fractional Brownian motion time-changed by gamma and inverse gamma process
NASA Astrophysics Data System (ADS)
Kumar, A.; Wyłomańska, A.; Połoczański, R.; Sundar, S.
2017-02-01
Many real time series exhibit behavior typical of long-range dependent data. Very often these time series also have constant time periods and characteristics similar to Gaussian processes, although they are not Gaussian. Therefore there is a need to consider new classes of systems to model these kinds of empirical behavior. Motivated by this fact, in this paper we analyze two processes which exhibit the long-range dependence property and have additional interesting characteristics which may be observed in real phenomena. Both of them are constructed as the superposition of fractional Brownian motion (FBM) and another process. In the first case the internal process, which plays the role of time, is the gamma process, while in the second case the internal process is its inverse. We present their main properties in detail, paying particular attention to the long-range dependence property. Moreover, we show how to simulate these processes and estimate their parameters. We propose a novel method based on the rescaled modified cumulative distribution function for estimation of the parameters of the second considered process. This method is very useful in the description of rounded data, like waiting times of subordinated processes delayed by inverse subordinators. By using the Monte Carlo method we show the effectiveness of the proposed estimation procedures. Finally, we present applications of the proposed models to real time series.
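Simulating the first of these processes amounts to drawing a gamma subordinator and then sampling FBM exactly at the resulting random times via a Cholesky factorization of its covariance. A small sketch follows; the Hurst exponent, step size, and unit-rate subordinator are assumed values for illustration.

```python
import numpy as np

# Fractional Brownian motion (FBM) time-changed by a gamma process:
# draw the subordinator G(t), then sample FBM at the random times G(t_k)
# via the Cholesky factor of C(s, t) = 0.5 * (s^2H + t^2H - |s - t|^2H).
rng = np.random.default_rng(7)
H, n, dt = 0.7, 500, 0.01

# Gamma subordinator with unit mean rate (shape = dt, scale = 1).
g = np.cumsum(rng.gamma(shape=dt, scale=1.0, size=n))

s, t = np.meshgrid(g, g)
cov = 0.5 * (s ** (2 * H) + t ** (2 * H) - np.abs(s - t) ** (2 * H))
cov += 1e-10 * np.eye(n)                 # numerical jitter for Cholesky
path = np.linalg.cholesky(cov) @ rng.standard_normal(n)
print("time-changed FBM sample, first 5 values:", path[:5])
```

The inverse-gamma-subordinated variant requires first-passage times of G, which introduces the constant periods and rounded waiting times the abstract discusses.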
Minimal model for the secondary structures and conformational conversions in proteins
NASA Astrophysics Data System (ADS)
Imamura, Hideo
Better understanding of the protein folding process can provide physical insights into the function of proteins and makes it possible to benefit from the genetic information accumulated so far. Protein folding normally takes place in less than seconds, but even seconds are beyond the reach of current computational power for simulations of a system in all-atom detail. Hence, to model and explore the protein folding process it is crucial to construct a model that can adequately describe the physical process and mechanism on the relevant time scale. We discuss a reduced off-lattice model that can express α-helix and β-hairpin conformations defined solely by a given sequence, in order to investigate the folding mechanism of conformations such as a β-hairpin and also to investigate conformational conversions in proteins. The first two chapters introduce and review essential concepts in protein folding modelling: physical interactions in proteins and various simple models. They also review computational methods, in particular the Metropolis Monte Carlo method, its dynamic interpretation, and thermodynamic Monte Carlo algorithms. Chapter 3 describes the minimalist model that represents both α-helix and β-sheet conformations using simple potentials. The native conformation can be specified by the sequence without particular conformational biases toward a reference state. In Chapter 4, the model is used to investigate the folding mechanism of β-hairpins exhaustively using dynamic Monte Carlo and a thermodynamic Monte Carlo method, an efficient combination of multicanonical Monte Carlo and the weighted histogram analysis method. We show that the major folding pathways and the folding rate depend on the location of a hydrophobic pair. The conformational conversions between α-helix and β-sheet conformations are examined in Chapters 5 and 6: first, the conformational conversion due to mutation in a non-hydrophobic system, and then the conformational conversion due to mutation with a hydrophobic pair at a different position, at various temperatures.
Marko, Matthew David; Kyle, Jonathan P; Wang, Yuanyuan Sabrina; Terrell, Elon J
2017-01-01
An effort was made to study and characterize the evolution of transient tribological wear in the presence of sliding contact. Sliding contact is often characterized experimentally via the standard ASTM D4172 four-ball test, and these tests were conducted for varying times ranging from 10 seconds to 1 hour, as well as at varying temperatures and loads. A numerical model was developed to simulate the evolution of wear in the elastohydrodynamic regime. This model uses the results of a Monte Carlo study to develop novel empirical equations for wear rate as a function of asperity height and lubricant thickness; these equations closely represented the experimental data and successfully modeled the sliding contact.
Anisotropic dielectric properties of two-dimensional matrix in pseudo-spin ferroelectric system
NASA Astrophysics Data System (ADS)
Kim, Se-Hun
2016-10-01
The anisotropic dielectric properties of a two-dimensional (2D) ferroelectric system were studied using statistical calculations of a pseudo-spin Ising Hamiltonian model. Under Monte Carlo sampling, measurements of an observable must be delayed until the new spin configuration becomes independent of the previous one, and the time to reach the thermal equilibrium state depends on the temperature and size of the system. The autocorrelation time constants of the normalized relaxation function were determined by taking the temperature and the 2D lattice size into account. We discuss the dielectric constants of a two-dimensional ferroelectric system by using the Metropolis method in view of the Slater-Takagi defect energies.
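The decorrelation requirement can be made concrete by estimating the integrated autocorrelation time from a Metropolis run. The sketch below uses a small 2D Ising lattice as a generic stand-in for the pseudo-spin Hamiltonian; lattice size, temperature, and sweep count are assumed values.

```python
import numpy as np

# Metropolis sampling of a 2D Ising lattice, followed by an estimate of
# the integrated autocorrelation time of the magnetization.
rng = np.random.default_rng(8)
L, T, n_sweeps = 16, 2.5, 2000
spins = rng.choice([-1, 1], size=(L, L))

def sweep():
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

m = np.empty(n_sweeps)
for k in range(n_sweeps):
    sweep()
    m[k] = spins.mean()

m = m[n_sweeps // 4:] - m[n_sweeps // 4:].mean()   # drop burn-in
acf = np.correlate(m, m, mode="full")[m.size - 1:] / (m @ m)
# Integrated autocorrelation time, truncated at the first negative lag.
cut = np.argmax(acf < 0)
tau = 0.5 + acf[1:cut].sum()
print(f"integrated autocorrelation time: {tau:.1f} sweeps")
```

Measurements spaced by a few times tau can then be treated as approximately independent, which is the condition the abstract describes.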
NASA Astrophysics Data System (ADS)
Naglič, Peter; Ivančič, Matic; Pernuš, Franjo; Likar, Boštjan; Bürmen, Miran
2018-02-01
A measurement system was developed to acquire and analyze subdiffusive spatially resolved reflectance using an optical fiber probe with short source-detector separations. Since subdiffusive reflectance significantly depends on the scattering phase function, the analysis of the acquired reflectance is based on a novel inverse Monte Carlo model that allows estimation of phase function related parameters in addition to the absorption and reduced scattering coefficients. In conjunction with our measurement system, the model allowed real-time estimation of optical properties, which we demonstrate for a case of dynamically induced changes in human skin by applying pressure with an optical fiber probe.
Othman, M A R; Cutajar, D L; Hardcastle, N; Guatelli, S; Rosenfeld, A B
2010-09-01
Monte Carlo simulations of the energy response of a conventionally packaged single metal-oxide-semiconductor field-effect transistor (MOSFET) detector were performed with the goal of improving MOSFET energy dependence for personal accident or military dosimetry. The MOSFET detector packaging was optimised. Two different 'drop-in' design packages for a single MOSFET detector were modelled and optimised using the GEANT4 Monte Carlo toolkit. Simulations of the absorbed photon dose for the MOSFET dosemeter in free air, corresponding to the absorbed doses at depths of 0.07 mm (D(w)(0.07)) and 10 mm (D(w)(10)) in a water-equivalent phantom of size 30 x 30 x 30 cm(3), were performed for photon energies of 0.015-2 MeV. Energy dependence was reduced to within ±60% for photon energies of 0.06-2 MeV for both D(w)(0.07) and D(w)(10). Variations in the response for photon energies of 15-60 keV were 200 and 330% for D(w)(0.07) and D(w)(10), respectively. The obtained energy dependence was reduced compared with that of conventionally packaged MOSFET detectors, which usually exhibit a 500-700% over-response when used in free-air geometry.
Reynolds analogy for the Rayleigh problem at various flow modes.
Abramov, A A; Butkovskii, A V
2016-07-01
The Reynolds analogy and the extended Reynolds analogy for the Rayleigh problem are considered. For a viscous incompressible fluid we derive the Reynolds analogy as a function of the Prandtl number and the Eckert number. We show that for any positive Eckert number, the Reynolds analogy as a function of the Prandtl number has a maximum. For a monatomic gas in the transitional flow regime, using the direct simulation Monte Carlo method, we investigate the extended Reynolds analogy, i.e., the relation between the shear stress and the energy flux transferred to the boundary surface, at different velocities and temperatures. We find that the extended Reynolds analogy for a rarefied monatomic gas flow with the temperature of the undisturbed gas equal to the surface temperature depends weakly on time and is close to 0.5. We show that at any fixed dimensionless time the extended Reynolds analogy depends on the plate velocity and temperature and undisturbed gas temperature mainly via the Eckert number. For Eckert numbers of the order of unity or less we generalize an extended Reynolds analogy. The generalized Reynolds analogy depends mainly only on dimensionless time for all considered Eckert numbers of the order of unity or less.
Orientation-dependent integral equation theory for a two-dimensional model of water
NASA Astrophysics Data System (ADS)
Urbič, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Dill, K. A.
2003-03-01
We develop an integral equation theory that applies to strongly associating orientation-dependent liquids, such as water. In an earlier treatment, we developed a Wertheim integral equation theory (IET) that we tested against NPT Monte Carlo simulations of the two-dimensional Mercedes Benz model of water. The main approximation in the earlier calculation was an orientational averaging in the multidensity Ornstein-Zernike equation. Here we improve the theory by explicit introduction of an orientation dependence in the IET, based upon expanding the two-particle angular correlation function in orthogonal basis functions. We find that the new orientation-dependent IET (ODIET) yields a considerable improvement of the predicted structure of water, when compared to the Monte Carlo simulations. In particular, ODIET predicts more long-range order than the original IET, with hexagonal symmetry, as expected for the hydrogen bonded ice in this model. The new theoretical approximation still errs in some subtle properties; for example, it does not predict liquid water's density maximum with temperature or the negative thermal expansion coefficient.
Survival estimation and the effects of dependency among animals
Schmutz, Joel A.; Ward, David H.; Sedinger, James S.; Rexstad, Eric A.
1995-01-01
Survival models assume that fates of individuals are independent, yet the robustness of this assumption has been poorly quantified. We examine how empirically derived estimates of the variance of survival rates are affected by dependency in survival probability among individuals. We used Monte Carlo simulations to generate known amounts of dependency among pairs of individuals and analyzed these data with Kaplan-Meier and Cormack-Jolly-Seber models. Dependency significantly increased these empirical variances as compared to theoretically derived estimates of variance from the same populations. Using resighting data from 168 pairs of black brant, we used a resampling procedure and program RELEASE to estimate empirical and mean theoretical variances. We estimated that the relationship between paired individuals caused the empirical variance of the survival rate to be 155% larger than the empirical variance for unpaired individuals. Monte Carlo simulations and use of this resampling strategy can provide investigators with information on how robust their data are to this common assumption of independent survival probabilities.
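The variance inflation caused by paired fates is easy to reproduce in a small Monte Carlo experiment. In the sketch below, pair members share a fate with probability rho; this is an assumed dependency mechanism for illustration, not the authors' resampling procedure, and it inflates the empirical variance by roughly a factor 1 + rho.

```python
import numpy as np

# Monte Carlo illustration of how dependency between paired animals
# inflates the empirical variance of a survival-rate estimate while
# keeping the marginal survival rate s unchanged.
rng = np.random.default_rng(9)
n_pairs, s, rho, n_reps = 168, 0.85, 0.6, 20_000

est = np.empty(n_reps)
for r in range(n_reps):
    first = rng.random(n_pairs) < s                      # fate of mate 1
    shared = rng.random(n_pairs) < rho                   # pair shares a fate?
    second = np.where(shared, first, rng.random(n_pairs) < s)
    est[r] = np.concatenate([first, second]).mean()

emp_var = est.var()
theo_var = s * (1 - s) / (2 * n_pairs)      # variance if all independent
print(f"empirical variance:    {emp_var:.3e}")
print(f"independence variance: {theo_var:.3e}")
print(f"inflation factor:      {emp_var / theo_var:.2f}")
```

With rho = 0.6 the inflation factor comes out near 1.6, the same order as the 155% excess the brant study reports.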
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, H; Ding, H; Ziemer, B
Purpose: To investigate the feasibility of energy calibration and energy response characterization of a photon counting detector using x-ray fluorescence. Methods: A comprehensive Monte Carlo simulation study was done to investigate the influence of various geometric components on the x-ray fluorescence measurement. Different materials, sizes, and detection angles were simulated using the Geant4 Application for Tomographic Emission (GATE) Monte Carlo package. Simulations were conducted using 100 kVp tungsten-anode spectra with a 2 mm Al filter for a single-pixel cadmium telluride (CdTe) detector with a 3 × 3 mm2 detection area. The fluorescence material was placed 300 mm away from both the x-ray source and the detector. For the angular dependence measurement, the distance was decreased to 30 mm to reduce the simulation time. Compound materials, containing silver, barium, gadolinium, hafnium, and gold in cylindrical shape, were simulated. The object size varied from 5 to 100 mm in diameter. The angular dependence of fluorescence and scatter was simulated from 20° to 170° in increments of 10° to optimize the fluorescence-to-scatter ratio. Furthermore, the angular dependence was also experimentally measured using a spectrometer (X-123CdTe, Amptek Inc., MA) to validate the simulation results. Results: Detection angles between 120° and 160° resulted in a more optimal x-ray fluorescence-to-scatter ratio. At a detection angle of 120°, the object size did not have a significant effect on the fluorescence-to-scatter ratio. The experimental results of the fluorescence angular dependence are in good agreement with the simulation results. The Kα and Kβ peaks of the five materials could be identified. Conclusion: The simulation results show that the x-ray fluorescence procedure has the potential to be used for detector energy calibration and detector response characterization by using an optimal system geometry.
NASA Astrophysics Data System (ADS)
Hall, Gregory; Xu, Hong; Forthomme, Damien; Dagdigian, Paul; Sears, Trevor
2017-06-01
We have combined experimental and theoretical approaches to the competition between elastic and inelastic collisions of CN radicals with Ar, and how this competition influences time-resolved saturation spectra. Experimentally, we have measured transient, two-color sub-Doppler saturation spectra of CN radicals with an amplitude chopped saturation laser tuned to selected Doppler offsets within rotational lines of the A-X (2-0) band, while scanning a frequency modulated probe laser across the hyperfine-resolved saturation features of corresponding rotational lines of the A-X (1-0) band. A steady-state depletion spectrum includes off-resonant contributions ascribed to velocity diffusion, and the saturation recovery rates depend on the sub-Doppler detuning. The experimental results are compared with Monte Carlo solutions to the Boltzmann equation for the collisional evolution of the velocity distributions of CN radicals, combined with a pressure-dependent and speed-dependent lifetime broadening. Velocity changing collisions are included by appropriately sampling the energy resolved differential cross sections for elastic scattering of selected rotational states of CN (X). The velocity space diffusion of Doppler tagged molecules proceeds through a series of small-angle scattering events, eventually terminating in an inelastic collision that removes the molecule from the coherently driven ensemble of interest. Collision energy-dependent total cross sections and differential cross sections for elastic scattering of selected CN rotational states with Ar were computed with Hibridon quantum scattering calculations, and used for sampling in the Monte Carlo modeling. Acknowledgments: Work at Brookhaven National Laboratory was carried out under Contract No. DE-SC0012704 with the U.S. Department of Energy, Office of Science, and supported by its Division of Chemical Sciences, Geosciences and Biosciences within the Office of Basic Energy Sciences.
NASA Astrophysics Data System (ADS)
Kwan, Betty P.; O'Brien, T. Paul
2015-06-01
The Aerospace Corporation performed a study to determine whether static percentiles of AE9/AP9 can be used to approximate dynamic Monte Carlo runs for radiation analysis of spiral transfer orbits. Solar panel degradation is a major concern for solar-electric propulsion because solar-electric propulsion depends on the power output of the solar panel. Different spiral trajectories have different radiation environments that could lead to solar panel degradation. Because the spiral transfer orbits only last weeks to months, an average environment does not adequately address the possible transient enhancements of the radiation environment that must be accounted for in optimizing the transfer orbit trajectory. Therefore, to optimize the trajectory, an ensemble of Monte Carlo simulations of AE9/AP9 would normally be run for every spiral trajectory to determine the 95th percentile radiation environment. To avoid performing lengthy Monte Carlo dynamic simulations for every candidate spiral trajectory in the optimization, we found a static percentile that would be an accurate representation of the full Monte Carlo simulation for a representative set of spiral trajectories. For 3 LEO to GEO and 1 LEO to MEO trajectories, a static 90th percentile AP9 is a good approximation of the 95th percentile fluence with dynamics for 4-10 MeV protons, and a static 80th percentile AE9 is a good approximation of the 95th percentile fluence with dynamics for 0.5-2 MeV electrons. While the specific percentiles chosen cannot necessarily be used in general for other orbit trade studies, the concept of determining a static percentile as a quick approximation to a full Monte Carlo ensemble of simulations can likely be applied to other orbit trade studies. We expect the static percentile to depend on the region of space traversed, the mission duration, and the radiation effect considered.
Sheu, R J; Sheu, R D; Jiang, S H; Kao, C H
2005-01-01
Full-scale Monte Carlo simulations of the cyclotron room of the Buddhist Tzu Chi General Hospital were carried out to improve the original inadequate maze design. Variance reduction techniques are indispensable in this study to make simulation of the many candidate shielding modifications feasible. The TORT/MCNP manual coupling approach, based on the Consistent Adjoint Driven Importance Sampling (CADIS) methodology, has been used throughout this study. CADIS utilises source and transport biasing in a consistent manner. With this method, the computational efficiency was increased by more than two orders of magnitude and the statistical convergence was also improved compared to the unbiased Monte Carlo run. This paper describes the shielding problem encountered, the procedure for coupling the TORT and MCNP codes to accelerate the calculations, and the calculation results for the original and improved shielding designs. In order to verify the calculation results and seek additional accelerations, sensitivity studies on the space-dependent and energy-dependent parameters were also conducted.
Recalculated probability of M ≥ 7 earthquakes beneath the Sea of Marmara, Turkey
Parsons, T.
2004-01-01
New earthquake probability calculations are made for the Sea of Marmara region and the city of Istanbul, providing a revised forecast and an evaluation of time-dependent interaction techniques. Calculations incorporate newly obtained bathymetric images of the North Anatolian fault beneath the Sea of Marmara [Le Pichon et al., 2001; Armijo et al., 2002]. Newly interpreted fault segmentation enables an improved regional A.D. 1500-2000 earthquake catalog and interevent model, which form the basis for time-dependent probability estimates. Calculations presented here also employ detailed models of coseismic and postseismic slip associated with the 17 August 1999 M = 7.4 Izmit earthquake to investigate effects of stress transfer on seismic hazard. Probability changes caused by the 1999 shock depend on Marmara Sea fault-stressing rates, which are calculated with a new finite element model. The combined 2004-2034 regional Poisson probability of M ≥ 7 earthquakes is ~38%, the regional time-dependent probability is 44 ± 18%, and incorporation of stress transfer raises it to 53 ± 18%. The most important effect of adding time dependence and stress transfer to the calculations is an increase in the 30 year probability of a M ≥ 7 earthquake affecting Istanbul. The 30 year Poisson probability at Istanbul is 21%, and the addition of time dependence and stress transfer raises it to 41 ± 14%. The ranges given on probability values are sensitivities of the calculations to input parameters determined by Monte Carlo analysis; 1000 calculations are made using parameters drawn at random from distributions. Sensitivities are large relative to mean probability values and enhancements caused by stress transfer, reflecting a poor understanding of large-earthquake aperiodicity.
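A sketch of the Monte Carlo sensitivity idea in the last sentences, using a lognormal renewal model for interevent times; the distributions, parameter values, and the choice of renewal model are illustrative assumptions, not the paper's inputs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def conditional_prob(mu, aperiodicity, elapsed, window=30.0):
    """30 yr conditional probability from a lognormal renewal model."""
    dist = stats.lognorm(s=aperiodicity, scale=mu)
    return (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)

# Draw the uncertain inputs (mean recurrence, aperiodicity, elapsed time) at
# random, mirroring the paper's 1000-calculation sensitivity analysis.
probs = [conditional_prob(mu=rng.normal(250.0, 50.0),
                          aperiodicity=rng.uniform(0.3, 0.7),
                          elapsed=rng.normal(230.0, 30.0))
         for _ in range(1000)]
print(f"P(30 yr) = {np.mean(probs):.2f} +/- {np.std(probs):.2f}")
```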
A liquid xenon imaging telescope for 1-30 MeV gamma-ray astrophysics
NASA Technical Reports Server (NTRS)
Aprile, Elena; Mukherjee, Reshmi; Suzuki, Masayo
1989-01-01
A study of the primary scintillation light in liquid xenon excited by 241Am alpha particles and 207Bi internal conversion electrons is discussed. The time dependence and the intensity of the light at different field strengths have been measured with a specially designed chamber, equipped with a CaF2 light-transmitting window coupled to a UV-sensitive PMT. The time correlation between the fast light signal and the charge signal shows that the scintillation signals produced in liquid xenon by ionizing particles provide an ideal trigger in a Time Projection type LXe detector aiming at full imaging of complex gamma-ray events. The researchers also started Monte Carlo calculations to establish the performance of a LXe imaging telescope for high-energy gamma-rays.
Monte Carlo simulation of chemistry following radiolysis with TOPAS-nBio
NASA Astrophysics Data System (ADS)
Ramos-Méndez, J.; Perl, J.; Schuemann, J.; McNamara, A.; Paganetti, H.; Faddegon, B.
2018-05-01
Simulation of water radiolysis and the subsequent chemistry provides important information on the effect of ionizing radiation on biological material. The Geant4 Monte Carlo toolkit has added chemical processes via the Geant4-DNA project. The TOPAS tool simplifies the modeling of complex radiotherapy applications with Geant4 without requiring advanced computational skills, extending the pool of users. Thus, a new extension to TOPAS, TOPAS-nBio, is under development to facilitate the configuration of track-structure simulations as well as water radiolysis simulations with Geant4-DNA for radiobiological studies. In this work, radiolysis simulations were implemented in TOPAS-nBio. Users may now easily add chemical species and their reactions, and set parameters including branching ratios, dissociation schemes, diffusion coefficients, and reaction rates. In addition, parameters for the chemical stage were re-evaluated and updated from those used by default in Geant4-DNA to improve the accuracy of chemical yields. Simulation results of time-dependent and LET-dependent primary yields Gx (chemical species per 100 eV deposited) produced at neutral pH and 25 °C by short track-segments of charged particles were compared to published measurements. The LET range was 0.05-230 keV/µm. The calculated Gx values for electrons satisfied the material balance equation within 0.3%; results were similar for protons, albeit with a long calculation time. A smaller geometry was used to speed up proton and alpha simulations, with an acceptable difference in the balance equation of 1.3%. Available experimental data of time-dependent G-values for the hydrated electron and the hydroxyl radical agreed with simulated results within 7% ± 8% and 3% ± 4%, respectively, over the full time range; for H2O2, agreement improved from 49% ± 7% at the earliest stages to 3% ± 12% at saturation. For the LET-dependent Gx, the mean ratios to the experimental data were 1.11 ± 0.98, 1.21 ± 1.11, 1.05 ± 0.52, 1.23 ± 0.59 and 1.49 ± 0.63 (1 standard deviation) for the hydrated electron, the hydroxyl radical, H2, H2O2 and the hydrogen radical, respectively. In conclusion, radiolysis and subsequent chemistry with Geant4-DNA has been successfully incorporated in TOPAS-nBio. Results are in reasonable agreement with published measured and simulated data.
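For reference, the primary yield Gx quoted above is just a normalized species count; a minimal sketch (the function name and numbers are illustrative):

```python
# Primary-yield (G-value) tally as used in radiolysis codes:
# Gx = 100 * (number of species X present at time t) / (energy deposited in eV).
def g_value(n_species: int, energy_deposited_eV: float) -> float:
    return 100.0 * n_species / energy_deposited_eV

# e.g. 260 OH radicals surviving at 1 ns for a 10 keV track segment:
print(g_value(260, 10_000.0))  # ~2.6 molecules per 100 eV
```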
Model predictions for atmospheric air breakdown by radio-frequency excitation in large gaps
NASA Astrophysics Data System (ADS)
Nguyen, H. K.; Mankowski, J.; Dickens, J. C.; Neuber, A. A.; Joshi, R. P.
2017-07-01
The behavior of the breakdown electric field versus frequency (DC to 100 MHz) for different gap lengths has been studied numerically at atmospheric pressure. Unlike previous reports, the focus here is on much larger gap lengths in the 1-5 cm range. A numerical analysis, with transport coefficients obtained from Monte Carlo calculations, is used to ascertain the electric field thresholds at which the growth and extinction of the electron population over time are balanced. Our analysis is indicative of a U-shaped frequency dependence, lower breakdown fields with increasing gap lengths, and trends qualitatively similar to the frequency-dependent field behavior for microgaps. The low frequency value of ~34 kV/cm for a 1 cm gap approaches the reported DC Paschen limit.
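The threshold search described (the field at which growth and extinction of the electron population balance) reduces to root-finding on a net growth rate. A toy bisection sketch with made-up rate coefficients, not the paper's Monte Carlo transport data:

```python
# Toy model: find the field E at which electron generation balances loss.
# Both rate terms below are illustrative placeholders.
def net_growth(E_kV_cm: float, gap_cm: float) -> float:
    ionization = 1e7 * max(E_kV_cm - 30.0, 0.0) ** 1.5   # assumed gain term
    loss = 2e8 / gap_cm                                   # assumed loss (drift-out, attachment)
    return ionization - loss

def threshold(gap_cm: float, lo=1.0, hi=200.0, tol=1e-3) -> float:
    # Bisection: net_growth increases monotonically with E in this toy model.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if net_growth(mid, gap_cm) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

for gap in (1.0, 3.0, 5.0):
    print(f"gap {gap} cm -> threshold ~{threshold(gap):.1f} kV/cm")
```

Consistent with the abstract, the toy threshold decreases as the gap grows, since the loss term scales inversely with gap length.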
Effect of lag time distribution on the lag phase of bacterial growth - a Monte Carlo analysis
USDA-ARS?s Scientific Manuscript database
The objective of this study is to use Monte Carlo simulation to evaluate the effect of lag time distribution of individual bacterial cells incubated under isothermal conditions on the development of lag phase. The growth of bacterial cells of the same initial concentration and mean lag phase durati...
Radiotherapy Monte Carlo simulation using cloud computing technology.
Poole, C M; Cornelius, I; Trapp, J V; Langton, C M
2012-12-01
Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate, as a proof of principle, the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware.
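A small cost model reproduces the stated scaling, under the assumption (current at the time of the paper) that cloud instances are billed per started hour; all numbers are illustrative:

```python
import math

def completion_time(total_cpu_hours: float, n: int) -> float:
    return total_cpu_hours / n          # embarrassingly parallel: ~1/n scaling

def relative_cost(total_cpu_hours: float, n: int, rate_per_hour: float = 1.0) -> float:
    # Per-started-hour billing: cost is minimised when each machine's run
    # time is a whole number of hours, i.e. when n divides the total hours.
    return n * math.ceil(completion_time(total_cpu_hours, n)) * rate_per_hour

for n in range(1, 13):
    print(n, completion_time(12.0, n), relative_cost(12.0, n))
```

For a 12 h job, n = 6 machines finish in exactly 2 billed hours each (cost 12), while n = 5 machines each bill a partly idle third hour (cost 15), which is why cost is optimal where n is a factor of the total simulation time.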
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y M; Bush, K; Han, B
Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based "localized Monte Carlo" (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations "downstream" of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%-15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ~4-7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.
Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.
2008-06-01
An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times relative to single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
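A schematic of the DREAM-style ensemble averaging on synthetic data; the signal, noise level, and run count are assumptions, and real DREAM restarts the flow from Maxwellian or stored particle states rather than re-running from scratch:

```python
import numpy as np

rng = np.random.default_rng(2)
signal = np.tanh(5 * np.linspace(0.0, 1.0, 200))   # "true" time history

def one_unsteady_run() -> np.ndarray:
    # Stand-in for one unsteady DSMC run: the true history of some
    # macroscopic property plus statistical scatter (purely synthetic).
    return signal + rng.normal(0.0, 0.2, size=signal.size)

# Ensemble-average repeated restarts to beat down the scatter, instead of
# lengthening a single run.
runs = np.array([one_unsteady_run() for _ in range(10)])
averaged = runs.mean(axis=0)
print("rms scatter, single run :", (runs[0] - signal).std())
print("rms scatter, 10-run mean:", (averaged - signal).std())   # ~sqrt(10) smaller
```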
Lettieri, S.; Zuckerman, D.M.
2011-01-01
Typically, the most time consuming part of any atomistic molecular simulation is due to the repeated calculation of distances, energies and forces between pairs of atoms. However, many molecules contain nearly rigid multi-atom groups such as rings and other conjugated moieties, whose rigidity can be exploited to significantly speed up computations. The availability of GB-scale random-access memory (RAM) offers the possibility of tabulation (pre-calculation) of distance and orientation-dependent interactions among such rigid molecular bodies. Here, we perform an investigation of this energy tabulation approach for a fluid of atomistic – but rigid – benzene molecules at standard temperature and density. In particular, using O(1) GB of RAM, we construct an energy look-up table which encompasses the full range of allowed relative positions and orientations between a pair of whole molecules. We obtain a hardware-dependent speed-up of a factor of 24-50 as compared to an ordinary (“exact”) Monte Carlo simulation and find excellent agreement between energetic and structural properties. Second, we examine the somewhat reduced fidelity of results obtained using energy tables based on much less memory use. Third, the energy table serves as a convenient platform to explore potential energy smoothing techniques, akin to coarse-graining. Simulations with smoothed tables exhibit near atomistic accuracy while increasing diffusivity. The combined speed-up in sampling from tabulation and smoothing exceeds a factor of 100. For future applications greater speed-ups can be expected for larger rigid groups, such as those found in biomolecules. PMID:22120971
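A one-coordinate sketch of the tabulation idea, assuming a placeholder Lennard-Jones pair energy; the paper's table spans the full relative position and orientation of whole rigid molecules, not just a separation:

```python
import numpy as np

rng = np.random.default_rng(3)

def pair_energy(r):
    # Placeholder rigid-body pair energy (Lennard-Jones form, reduced units).
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

# Build the look-up table once; every MC step then pays only a grid lookup
# instead of the explicit atom-atom double sum.
r_grid = np.linspace(0.8, 3.0, 2048)
e_table = pair_energy(r_grid)

def table_energy(r: float) -> float:
    i = np.searchsorted(r_grid, r)          # grid-bin lookup, O(log n)
    return e_table[min(i, len(e_table) - 1)]

# Inside a Metropolis loop the acceptance test uses the tabulated energies:
r_old, r_new, beta = 1.2, 1.25, 1.0
accept = rng.random() < np.exp(-beta * (table_energy(r_new) - table_energy(r_old)))
print(accept)
```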
NASA Astrophysics Data System (ADS)
Filippi, Claudia; Buda, Francesco
2005-02-01
We find that regions of the excited state potential energy surface of formaldimine, which are accessible from the Franck-Condon configuration, are incorrectly described by the restricted open-shell Kohn-Sham (ROKS) approach. In these regions, the deviations of the ROKS energies from the time-dependent density functional theory results are not a simple shift. Contrary to what is argued in the Comment by Doltsinis and Fink [J. Chem. Phys. XX, XXX (2004)], these differences can play a role in the excited state molecular dynamics of formaldimine at finite temperature.
Optimization of the time-dependent traveling salesman problem with Monte Carlo methods.
Bentner, J; Bauer, G; Obermair, G M; Morgenstern, I; Schneider, J
2001-09-01
A problem often considered in operations research and computational physics is the traveling salesman problem, in which a traveling salesperson has to find the shortest closed tour between a certain set of cities. This problem has been extended to more realistic scenarios, e.g., the "real" traveling salesperson has to take rush hours into consideration. We will show how this extended problem is treated with physical optimization algorithms. We will present results for a specific instance of Reinelt's library TSPLIB95, in which we define a zone with traffic jams in the afternoon.
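A compact sketch of such a physical optimization: simulated annealing on a time-dependent tour cost, where legs departing during an assumed afternoon "traffic jam" window are slowed. Cities, the jam window, and the cooling schedule are all illustrative:

```python
import math, random

random.seed(4)
n = 30
xy = [(random.random(), random.random()) for _ in range(n)]

def leg_time(a: int, b: int, depart: float) -> float:
    d = math.dist(xy[a], xy[b])
    jam = 2.0 if 0.5 < depart % 1.0 < 0.7 else 1.0   # assumed rush-hour slowdown
    return d * jam

def tour_time(tour):
    t = 0.0
    for a, b in zip(tour, tour[1:] + tour[:1]):
        t += leg_time(a, b, t)            # a leg's cost depends on departure time
    return t

# Plain Metropolis / simulated-annealing optimisation with 2-opt moves.
tour = list(range(n))
cur = best = tour_time(tour)
T = 1.0
for _ in range(20000):
    i, j = sorted(random.sample(range(n), 2))
    cand = tour[:i] + tour[i:j][::-1] + tour[j:]
    c = tour_time(cand)
    if c < cur or random.random() < math.exp((cur - c) / T):
        tour, cur = cand, c
        best = min(best, cur)
    T *= 0.9997                            # geometric cooling
print(f"best tour time: {best:.3f}")
```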
Laplace Transform Based Radiative Transfer Studies
NASA Astrophysics Data System (ADS)
Hu, Y.; Lin, B.; Ng, T.; Yang, P.; Wiscombe, W.; Herath, J.; Duffy, D.
2006-12-01
Multiple scattering is the major uncertainty for data analysis of space-based lidar measurements. Until now, accurate quantitative lidar data analysis has been limited to very thin objects dominated by single scattering, where photons from the laser beam scatter only once off particles in the atmosphere before reaching the receiver and a simple linear relationship exists between physical property and lidar signal. In reality, multiple scattering is always a factor in space-based lidar measurement, and it dominates space-based lidar returns from clouds, dust aerosols, vegetation canopy and phytoplankton. While multiply scattered photons are a clear signal, the lack of a fast-enough lidar multiple-scattering computation tool forces us to treat them as unwanted "noise" and to remove them with simple multiple-scattering correction schemes. Such treatments waste the multiple-scattering signal and may cause orders-of-magnitude errors in retrieved physical properties. Thus the lack of fast and accurate time-dependent radiative transfer tools significantly limits lidar remote sensing capabilities. Analyzing lidar multiple-scattering signals requires fast and accurate time-dependent radiative transfer computations. Currently, multiple scattering is computed with Monte Carlo simulations, which take minutes to hours, are too slow for interactive satellite data analysis, and can only be used to help system/algorithm design and error assessment. We present an innovative physics approach to solve the time-dependent radiative transfer problem. The technique utilizes FPGA-based reconfigurable computing hardware. The approach is as follows. 1. Physics solution: perform a Laplace transform on the time and spatial dimensions and a Fourier transform on the viewing azimuth dimension, converting the solution of the radiative transfer differential equation into a fast matrix inversion problem. The majority of the radiative transfer computation then goes into matrix inversion, FFTs and inverse Laplace transforms. 2. Hardware solution: perform the well-defined matrix inversion, FFT and Laplace transforms on highly parallel, reconfigurable computing hardware. This physics-based computational tool leads to accurate quantitative analysis of space-based lidar signals and improves the data quality of current lidar missions such as CALIPSO. This presentation will introduce the basic idea of this approach, preliminary results based on SRC's FPGA-based MAPstation, and how we may apply it to CALIPSO data analysis.
NASA Astrophysics Data System (ADS)
Lawler, J. E.; Den Hartog, E. A.
2018-03-01
The Ar I and II branching ratio calibration method is discussed with the goal of improving the technique. This method of establishing a relative radiometric calibration is important in ongoing research to improve atomic transition probabilities for quantitative spectroscopy in astrophysics and other fields. Specific suggestions are presented along with Monte Carlo simulations of wavelength dependent effects from scattering/reflecting of photons in a hollow cathode.
Monte Carlo study of exact S-matrix duality in nonsimply laced affine Toda theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beccaria, M.
The (g_2^(1), d_4^(3)) pair of nonsimply laced affine Toda theories is studied from the point of view of nonperturbative duality. The classical spectrum of each member is composed of two massive scalar particles. The exact S-matrix prediction for the dual behavior of the coupling-dependent mass ratio is found to be in strong agreement with Monte Carlo data. © 1996 The American Physical Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeo, Sang Chul; Lee, Hyuck Mo, E-mail: hmlee@kaist.ac.kr; Lo, Yu Chieh
2014-10-07
Ammonia (NH3) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers (Eb) of the relevant elementary processes. The full mechanism of the exact reaction path was divided into five steps (adsorption, dissociation, surface migration, penetration, and diffusion) on an Fe (100) surface pre-covered with nitrogen. The energy barrier Eb depended on the N surface coverage. The DFT results were subsequently employed as a database for the kMC simulations. We then evaluated the NH3 nitridation rate on the N pre-covered Fe surface. To determine the conditions necessary for a rapid NH3 nitridation rate, eight reaction events were considered in the kMC simulations: adsorption, desorption, dissociation, reverse dissociation, surface migration, penetration, reverse penetration, and diffusion. This study provides a real-time-scale simulation of NH3 nitridation influenced by nitrogen surface coverage that allowed us to theoretically determine a nitrogen coverage (0.56 ML) suitable for rapid NH3 nitridation. In this way, we were able to reveal the coverage dependence of the nitridation reaction using the combined DFT and kMC simulations.
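A minimal sketch of the kMC side of such a scheme: Arrhenius rates from barriers, then one rejection-free (BKL/Gillespie) event selection. The barrier values, temperature, and prefactor below are placeholders, not the paper's DFT data, and real barriers depend on the local N coverage:

```python
import math, random

random.seed(5)
KB_T = 0.0593          # eV, ~688 K (illustrative)
NU = 1.0e13            # attempt frequency in 1/s (typical assumed prefactor)

# Placeholder barriers (eV) for the event classes named in the abstract.
barriers = {"adsorption": 0.0, "dissociation": 0.9, "migration": 0.5,
            "penetration": 1.1, "diffusion": 0.7}
rates = {e: NU * math.exp(-eb / KB_T) for e, eb in barriers.items()}

def kmc_step(t: float):
    # Pick an event with probability proportional to its rate, then advance
    # the clock by an exponentially distributed waiting time.
    total = sum(rates.values())
    r = random.random() * total
    for event, k in rates.items():
        r -= k
        if r <= 0.0:
            return event, t + random.expovariate(total)

event, t = kmc_step(0.0)
print(event, t)
```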
Dhamodharan, Aswin; Proano, Ruben A
2012-09-01
Outreach immunization services, in which health workers immunize children in their own communities, are indispensable for improving vaccine coverage in rural areas of developing countries. One of the challenges faced by these services is how to reduce high levels of vaccine wastage, in particular the open vial wastage (OVW) that results from the vaccine doses remaining in a vial after the time for safe use since opening the vial has elapsed. This wastage is highly dependent on the choice of vial size and the expected number of participants for which the outreach session is planned (i.e., the session size). The use of single-dose vials results in zero OVW, but it increases the vaccine purchase, transportation, and holding costs per dose compared with those of larger vial sizes. The OVW also decreases when more people are immunized in a session. However, the actual number of people who show up to an outreach session in rural areas of developing countries depends heavily on factors beyond the control of immunization planners. This paper integrates a binary integer-programming model with a Monte Carlo simulation method to determine the choice of vial size and the optimal reorder point for implementing an (nQ, r, T) lot-sizing policy that provides the best tradeoff between procurement costs and wastage.
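A toy version of the Monte Carlo half of the analysis, assuming Poisson attendance around the planned session size (the paper's demand model and cost terms are richer than this):

```python
import numpy as np

rng = np.random.default_rng(6)

def expected_wastage(vial_size: int, planned_session_size: float,
                     n_sims: int = 10_000) -> float:
    """Monte Carlo estimate of open-vial wastage for one outreach session.

    Every opened vial must be discarded at the end of the session, so the
    unused doses in the last partially consumed vial are wasted.
    """
    arrivals = rng.poisson(planned_session_size, size=n_sims)
    vials_opened = np.ceil(arrivals / vial_size)
    wasted = vials_opened * vial_size - arrivals
    return wasted.mean() / arrivals.mean()

for size in (1, 5, 10, 20):
    print(f"{size:2d}-dose vials: OVW ~ {expected_wastage(size, 25):.1%} of doses")
```

As the abstract notes, single-dose vials give zero OVW, while larger vials trade lower per-dose logistics costs against more doses stranded in partly used vials.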
Aristizabal, F.; Glavinovic, M. I.
2003-01-01
Tracking spectral changes of rapidly varying signals is a demanding task. In this study, we explore on Monte Carlo-simulated glutamate-activated AMPA patch and synaptic currents whether a wavelet analysis offers such a possibility. Unlike Fourier methods that determine only the frequency content of a signal, the wavelet analysis determines both the frequency and the time. This is owing to the nature of the basis functions, which are infinite for Fourier transforms (sines and cosines are infinite), but are finite for wavelet analysis (wavelets are localized waves). In agreement with previous reports, the frequency of the stationary patch current fluctuations is higher for larger currents, whereas the mean-variance plots are parabolic. The spectra of the current fluctuations and mean-variance plots are close to the theoretically predicted values. The median frequency of the synaptic and nonstationary patch currents is, however, time dependent, though at the peak of synaptic currents, the median frequency is insensitive to the number of glutamate molecules released. Such time dependence demonstrates that the “composite spectra” of the current fluctuations gathered over the whole duration of synaptic currents cannot be used to assess the mean open time or effective mean open time of AMPA channels. The current (patch or synaptic) versus median frequency plots show hysteresis. The median frequency is thus not a simple reflection of the overall receptor saturation levels and is greater during the rise phase for the same saturation level. The hysteresis is due to the higher occupancy of the doubly bound state during the rise phase and not due to the spatial spread of the saturation disk, which remains remarkably constant. Albeit time dependent, the variance of the synaptic and nonstationary patch currents can be accurately determined. Nevertheless the evaluation of the number of AMPA channels and their single current from the mean-variance plots of patch or synaptic currents is not highly accurate owing to the varying number of the activatable AMPA channels caused by desensitization. The spatial nonuniformity of open, bound, and desensitized AMPA channels, and the time dependence and spatial nonuniformity of the glutamate concentration in the synaptic cleft, further reduce the accuracy of estimates of the number of AMPA channels from synaptic currents. In conclusion, wavelet analysis of nonstationary fluctuations of patch and synaptic currents expands our ability to determine accurately the variance and frequency of current fluctuations, demonstrates the limits of applicability of techniques currently used to evaluate the single channel current and number of AMPA channels, and offers new insights into the mechanisms involved in the generation of unitary quantal events at excitatory central synapses. PMID:14507683
Aristizabal, F; Glavinovic, M I
2003-10-01
Tracking spectral changes of rapidly varying signals is a demanding task. In this study, we explore on Monte Carlo-simulated glutamate-activated AMPA patch and synaptic currents whether a wavelet analysis offers such a possibility. Unlike Fourier methods that determine only the frequency content of a signal, the wavelet analysis determines both the frequency and the time. This is owing to the nature of the basis functions, which are infinite for Fourier transforms (sines and cosines are infinite), but are finite for wavelet analysis (wavelets are localized waves). In agreement with previous reports, the frequency of the stationary patch current fluctuations is higher for larger currents, whereas the mean-variance plots are parabolic. The spectra of the current fluctuations and mean-variance plots are close to the theoretically predicted values. The median frequency of the synaptic and nonstationary patch currents is, however, time dependent, though at the peak of synaptic currents, the median frequency is insensitive to the number of glutamate molecules released. Such time dependence demonstrates that the "composite spectra" of the current fluctuations gathered over the whole duration of synaptic currents cannot be used to assess the mean open time or effective mean open time of AMPA channels. The current (patch or synaptic) versus median frequency plots show hysteresis. The median frequency is thus not a simple reflection of the overall receptor saturation levels and is greater during the rise phase for the same saturation level. The hysteresis is due to the higher occupancy of the doubly bound state during the rise phase and not due to the spatial spread of the saturation disk, which remains remarkably constant. Albeit time dependent, the variance of the synaptic and nonstationary patch currents can be accurately determined. Nevertheless the evaluation of the number of AMPA channels and their single current from the mean-variance plots of patch or synaptic currents is not highly accurate owing to the varying number of the activatable AMPA channels caused by desensitization. The spatial nonuniformity of open, bound, and desensitized AMPA channels, and the time dependence and spatial nonuniformity of the glutamate concentration in the synaptic cleft, further reduce the accuracy of estimates of the number of AMPA channels from synaptic currents. In conclusion, wavelet analysis of nonstationary fluctuations of patch and synaptic currents expands our ability to determine accurately the variance and frequency of current fluctuations, demonstrates the limits of applicability of techniques currently used to evaluate the single channel current and number of AMPA channels, and offers new insights into the mechanisms involved in the generation of unitary quantal events at excitatory central synapses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Rourke, Patrick Francis
The purpose of this report is to provide the reader with an understanding of how a Monte Carlo neutron transport code was written, developed, and evolved to calculate the probability distribution functions (PDFs) and their moments for the neutron number at a final time as well as the cumulative fission number, along with introducing several basic Monte Carlo concepts.
Sechopoulos, Ioannis; Ali, Elsayed S M; Badal, Andreu; Badano, Aldo; Boone, John M; Kyprianou, Iacovos S; Mainegra-Hing, Ernesto; McMillan, Kyle L; McNitt-Gray, Michael F; Rogers, D W O; Samei, Ehsan; Turner, Adam C
2015-10-01
The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to its flexibility and ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but, typically, such validation involves either comparison of simulation results to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated due to nuances of experimental conditions and uncertainty, while the latter is challenging due to typical graphical presentation and lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations in general involves a steep learning curve. It is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases. This work provides an investigator the necessary information to benchmark his/her Monte Carlo simulation software against the reference cases included here before performing his/her own novel research. In addition, an investigator entering the field of Monte Carlo simulations can use these descriptions and results as a self-teaching tool to ensure that he/she is able to perform a specific simulation correctly. Finally, educators can assign these cases as learning projects as part of course objectives or training programs.
Numazawa, Satoshi; Smith, Roger
2011-10-01
Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.
Feasibility study of the neutron dose for real-time image-guided proton therapy: A Monte Carlo study
NASA Astrophysics Data System (ADS)
Kim, Jin Sung; Shin, Jung Suk; Kim, Daehyun; Shin, Eunhyuk; Chung, Kwangzoo; Cho, Sungkoo; Ahn, Sung Hwan; Ju, Sanggyu; Chung, Yoonsun; Jung, Sang Hoon; Han, Youngyih
2015-07-01
Two full rotating gantries with different nozzles (a multipurpose nozzle with an MLC, and a scanning-dedicated nozzle) for a conventional cyclotron system are installed and being commissioned for various proton treatment options at Samsung Medical Center in Korea. The purpose of this study is to use Monte Carlo simulation to investigate the neutron dose equivalent per therapeutic dose, H/D, for the X-ray imaging equipment under various treatment conditions. First, we investigated the H/D for various modifications of the beamline devices (scattering, scanning, multi-leaf collimator, aperture, compensator) at the isocenter and at 20, 40 and 60 cm distances from the isocenter, and we compared our results with those of other research groups. Next, we investigated the neutron dose at the X-ray equipment used for real-time imaging under various treatment conditions. Our investigation showed doses of 0.07-0.19 mSv/Gy at the X-ray imaging equipment, depending on the treatment option; interestingly, a 50% neutron dose reduction was observed due to the multileaf collimator during proton scanning treatment with the multipurpose nozzle. In future studies, we plan to measure the neutron dose experimentally and to validate the simulation data for the X-ray imaging equipment for use as an additional neutron dose reduction method.
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
NeuRad detector prototype pulse shape study
NASA Astrophysics Data System (ADS)
Muzalevsky, I.; Chudoba, V.; Belogurov, S.; Kiselev, O.; Bezbakh, A.; Fomichev, A.; Krupko, S.; Slepnev, R.; Kostyleva, D.; Gorshkov, A.; Ovcharenko, E.; Schetinin, V.
2018-04-01
The EXPERT setup located at the Super-FRS facility, part of the FAIR complex in Darmstadt, Germany, is intended for the investigation of properties of light exotic nuclei. One of its modules, the high-granularity neutron detector NeuRad, assembled from a large number of scintillating fibers, is intended for the registration of neutrons emitted by the investigated nuclei in low-energy decays. The feasibility of the detector strongly depends on its timing properties, which are defined by the spatial distribution of ionization, light propagation inside the fibers, light emission kinetics, and the transit time jitter of the multi-anode photomultiplier tube. A first attempt at understanding the pulse formation in the prototype of the NeuRad detector by comparing experimental results and Monte Carlo (MC) simulations is reported in this paper.
Finite-temperature time-dependent variation with multiple Davydov states
NASA Astrophysics Data System (ADS)
Wang, Lu; Fujihashi, Yuta; Chen, Lipeng; Zhao, Yang
2017-03-01
The Dirac-Frenkel time-dependent variational approach with Davydov Ansätze is a sophisticated, yet efficient technique to obtain an accurate solution to many-body Schrödinger equations for energy and charge transfer dynamics in molecular aggregates and light-harvesting complexes. We extend this variational approach to finite temperature dynamics of the spin-boson model by adopting a Monte Carlo importance sampling method. In order to demonstrate the applicability of this approach, we compare calculated real-time quantum dynamics of the spin-boson model with that from the numerically exact iterative quasiadiabatic propagator path integral (QUAPI) technique. The comparison shows that our variational approach with the single Davydov Ansatz is in excellent agreement with the QUAPI method at high temperatures, while the two differ at low temperatures. Accuracy in dynamics calculations employing a multitude of Davydov trial states is found to improve substantially over the single Davydov Ansatz, especially at low temperatures. At a moderate computational cost, our variational approach with multiple Davydov Ansätze is shown to provide accurate spin-boson dynamics over a wide range of temperatures and bath spectral densities.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I-III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; and fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures and hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
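In the spirit of the closing remark, the first routine such a class typically assigns is inverse-CDF sampling of the free-flight distance; a minimal sketch (the cross section value is illustrative):

```python
import math, random

random.seed(7)

# Sample a distance to the next collision from p(s) = Sigma_t * exp(-Sigma_t * s)
# by inverting the CDF: s = -ln(xi) / Sigma_t, with xi uniform on (0, 1].
def free_flight(sigma_t: float) -> float:
    xi = random.random() or 1e-300   # guard against xi == 0 (log(0) undefined)
    return -math.log(xi) / sigma_t

sigma_t = 0.5   # macroscopic total cross section (1/cm), illustrative
samples = [free_flight(sigma_t) for _ in range(100_000)]
print(sum(samples) / len(samples))   # should be close to 1/sigma_t = 2.0 cm
```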
A signed particle formulation of non-relativistic quantum mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg
2015-09-15
A formulation of non-relativistic quantum mechanics in terms of Newtonian particles is presented in the shape of a set of three postulates. In this new theory, quantum systems are described by ensembles of signed particles which behave as field-less classical objects that carry a negative or positive sign and interact with an external potential by means of creation and annihilation events only. This approach is shown to be a generalization of the signed particle Wigner Monte Carlo method, which reconstructs the time-dependent Wigner quasi-distribution function of a system and, therefore, the corresponding time-dependent Schrödinger wave-function. Its classical limit is discussed and a physical interpretation, based on experimental evidence from quantum tomography, is suggested. Moreover, in order to show the advantages brought by this novel formulation, a straightforward extension to relativistic effects is discussed. To conclude, quantum tunnelling numerical experiments are performed to show the validity of the suggested approach.
Evidence of impurity and boundary effects on magnetic monopole dynamics in spin ice
NASA Astrophysics Data System (ADS)
Revell, H. M.; Yaraskavitch, L. R.; Mason, J. D.; Ross, K. A.; Noad, H. M. L.; Dabkowska, H. A.; Gaulin, B. D.; Henelius, P.; Kycia, J. B.
2013-01-01
Electrical resistance is a crucial and well-understood property of systems ranging from computer microchips to nerve impulse propagation in the human body. Here we study the motion of magnetic charges in spin ice and find that extra spins inserted in Dy2Ti2O7 trap magnetic monopole excitations and provide the first example of how defects in a spin-ice material obstruct the flow of monopoles--a magnetic version of residual resistance. We measure the time-dependent magnetic relaxation in Dy2Ti2O7 and show that it decays with a stretched exponential followed by a very slow long-time tail. In a Monte Carlo simulation governed by Metropolis dynamics we show that surface effects and a very low level of stuffed spins (0.30%)--magnetic Dy ions substituted for non-magnetic Ti ions--cause these signatures in the relaxation. In addition, we find evidence that the rapidly diverging experimental timescale is due to a temperature-dependent attempt rate proportional to the monopole density.
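The stretched-exponential decay reported here is straightforward to fit; a sketch on synthetic data (the generated parameters are arbitrary, and the real data's slow long-time tail is precisely what a pure stretched exponential misses):

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, m0, tau, beta):
    # M(t) = M0 * exp(-(t/tau)^beta), with 0 < beta <= 1
    return m0 * np.exp(-(t / tau) ** beta)

# Synthetic "relaxation data" standing in for the measured magnetization decay.
t = np.logspace(-2, 2, 200)
m = stretched_exp(t, 1.0, 5.0, 0.8) \
    + np.random.default_rng(8).normal(0.0, 0.005, t.size)

popt, _ = curve_fit(stretched_exp, t, m, p0=(1.0, 1.0, 1.0))
print("m0, tau, beta =", popt)
```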
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veres, P.; Dermer, C. D.; Dhuga, K. S.
The magnetic field in intergalactic space gives important information about magnetogenesis in the early universe. The properties of this field can be probed by searching for radiation of secondary e+ e− pairs created by TeV photons that produce GeV-range radiation by Compton-scattering cosmic microwave background photons. The arrival times of the GeV "echo" photons depend strongly on the magnetic field strength and coherence length. A Monte Carlo code that accurately treats pair creation is developed to simulate the spectrum and time dependence of the echo radiation. The extrapolation of the spectrum of powerful gamma-ray bursts (GRBs) like GRB 130427A to TeV energies is used to demonstrate how the intergalactic magnetic field can be constrained if it falls in the 10^−21 to 10^−17 G range for a 1 Mpc coherence length.
Angular dependence of the nanoDot OSL dosimeter.
Kerns, James R; Kry, Stephen F; Sahoo, Narayan; Followill, David S; Ibbott, Geoffrey S
2011-07-01
Optically stimulated luminescent detectors (OSLDs) are quickly gaining popularity as passive dosimeters, with applications in medicine for linac output calibration verification, brachytherapy source verification, treatment plan quality assurance, and clinical dose measurements. With such wide applications, these dosimeters must be characterized for numerous factors affecting their response. The most abundant commercial OSLD is the InLight/OSL system from Landauer, Inc. The purpose of this study was to examine the angular dependence of the nanoDot dosimeter, which is part of the InLight system. Relative dosimeter response data were taken at several angles in 6 and 18 MV photon beams, as well as a clinical proton beam. These measurements were done within a phantom at a depth beyond the build-up region. To verify the observed angular dependence, additional measurements were conducted as well as Monte Carlo simulations in MCNPX. When irradiated with the incident photon beams parallel to the plane of the dosimeter, the nanoDot response was 4% lower at 6 MV and 3% lower at 18 MV than the response when irradiated with the incident beam normal to the plane of the dosimeter. Monte Carlo simulations at 6 MV showed similar results to the experimental values. Examination of the results in Monte Carlo suggests the cause as partial volume irradiation. In a clinical proton beam, no angular dependence was found. A nontrivial angular response of this OSLD was observed in photon beams. This factor may need to be accounted for when evaluating doses from photon beams incident from a variety of directions.
Angular dependence of the nanoDot OSL dosimeter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerns, James R.; Kry, Stephen F.; Sahoo, Narayan
Purpose: Optically stimulated luminescent detectors (OSLDs) are quickly gaining popularity as passive dosimeters, with applications in medicine for linac output calibration verification, brachytherapy source verification, treatment plan quality assurance, and clinical dose measurements. With such wide applications, these dosimeters must be characterized for numerous factors affecting their response. The most abundant commercial OSLD is the InLight/OSL system from Landauer, Inc. The purpose of this study was to examine the angular dependence of the nanoDot dosimeter, which is part of the InLight system. Methods: Relative dosimeter response data were taken at several angles in 6 and 18 MV photon beams, as well as a clinical proton beam. These measurements were done within a phantom at a depth beyond the build-up region. To verify the observed angular dependence, additional measurements were conducted as well as Monte Carlo simulations in MCNPX. Results: When irradiated with the incident photon beams parallel to the plane of the dosimeter, the nanoDot response was 4% lower at 6 MV and 3% lower at 18 MV than the response when irradiated with the incident beam normal to the plane of the dosimeter. Monte Carlo simulations at 6 MV showed similar results to the experimental values. Examination of the results in Monte Carlo suggests the cause as partial volume irradiation. In a clinical proton beam, no angular dependence was found. Conclusions: A nontrivial angular response of this OSLD was observed in photon beams. This factor may need to be accounted for when evaluating doses from photon beams incident from a variety of directions.
Angular dependence of the nanoDot OSL dosimeter
Kerns, James R.; Kry, Stephen F.; Sahoo, Narayan; Followill, David S.; Ibbott, Geoffrey S.
2011-01-01
Purpose: Optically stimulated luminescent detectors (OSLDs) are quickly gaining popularity as passive dosimeters, with applications in medicine for linac output calibration verification, brachytherapy source verification, treatment plan quality assurance, and clinical dose measurements. With such wide applications, these dosimeters must be characterized for numerous factors affecting their response. The most abundant commercial OSLD is the InLight/OSL system from Landauer, Inc. The purpose of this study was to examine the angular dependence of the nanoDot dosimeter, which is part of the InLight system. Methods: Relative dosimeter response data were taken at several angles in 6 and 18 MV photon beams, as well as a clinical proton beam. These measurements were done within a phantom at a depth beyond the build-up region. To verify the observed angular dependence, additional measurements were conducted as well as Monte Carlo simulations in MCNPX. Results: When irradiated with the incident photon beams parallel to the plane of the dosimeter, the nanoDot response was 4% lower at 6 MV and 3% lower at 18 MV than the response when irradiated with the incident beam normal to the plane of the dosimeter. Monte Carlo simulations at 6 MV showed similar results to the experimental values. Examination of the results in Monte Carlo suggests the cause as partial volume irradiation. In a clinical proton beam, no angular dependence was found. Conclusions: A nontrivial angular response of this OSLD was observed in photon beams. This factor may need to be accounted for when evaluating doses from photon beams incident from a variety of directions. PMID:21858992
Hybrid Monte Carlo-Diffusion Method For Light Propagation in Tissue With a Low-Scattering Region
NASA Astrophysics Data System (ADS)
Hayashi, Toshiyuki; Kashio, Yoshihiko; Okada, Eiji
2003-06-01
The heterogeneity of the tissues in a head, especially the low-scattering cerebrospinal fluid (CSF) layer surrounding the brain, has previously been shown to strongly affect light propagation in the brain. The radiosity-diffusion method, in which the light propagation in the CSF layer is assumed to obey the radiosity theory, has been employed to predict the light propagation in head models. Although the CSF layer is assumed to be a nonscattering region in the radiosity-diffusion method, fine arachnoid trabeculae cause faint scattering in the CSF layer in real heads. A novel approach, the hybrid Monte Carlo-diffusion method, is proposed to calculate head models including a low-scattering region in which the light propagation obeys neither the diffusion approximation nor the radiosity theory. The light propagation in the high-scattering region is calculated by means of the diffusion approximation solved by the finite-element method, and that in the low-scattering region is predicted by the Monte Carlo method. The intensity and mean time of flight of the detected light for the head model with a low-scattering CSF layer calculated by the hybrid method agreed well with those by the Monte Carlo method, whereas the results calculated by means of the diffusion approximation included considerable error caused by the effect of the CSF layer. In the hybrid method, the time-consuming Monte Carlo calculation is employed only for the thin CSF layer, and hence the computation time of the hybrid method is dramatically shorter than that of the Monte Carlo method.
Hybrid Monte Carlo-diffusion method for light propagation in tissue with a low-scattering region.
Hayashi, Toshiyuki; Kashio, Yoshihiko; Okada, Eiji
2003-06-01
The heterogeneity of the tissues in a head, especially the low-scattering cerebrospinal fluid (CSF) layer surrounding the brain, has previously been shown to strongly affect light propagation in the brain. The radiosity-diffusion method, in which the light propagation in the CSF layer is assumed to obey the radiosity theory, has been employed to predict the light propagation in head models. Although the CSF layer is assumed to be a nonscattering region in the radiosity-diffusion method, fine arachnoid trabeculae cause faint scattering in the CSF layer in real heads. A novel approach, the hybrid Monte Carlo-diffusion method, is proposed to calculate head models including a low-scattering region in which the light propagation obeys neither the diffusion approximation nor the radiosity theory. The light propagation in the high-scattering region is calculated by means of the diffusion approximation solved by the finite-element method, and that in the low-scattering region is predicted by the Monte Carlo method. The intensity and mean time of flight of the detected light for the head model with a low-scattering CSF layer calculated by the hybrid method agreed well with those by the Monte Carlo method, whereas the results calculated by means of the diffusion approximation included considerable error caused by the effect of the CSF layer. In the hybrid method, the time-consuming Monte Carlo calculation is employed only for the thin CSF layer, and hence the computation time of the hybrid method is dramatically shorter than that of the Monte Carlo method.
Kawrakow, I
2000-03-01
In this report the condensed history Monte Carlo simulation of electron transport and its application to the calculation of ion chamber response is discussed. It is shown that the strong step-size dependencies and lack of convergence to the correct answer previously observed are the combined effect of the following artifacts caused by the EGS4/PRESTA implementation of the condensed history technique: dose underprediction due to PRESTA's pathlength correction and lateral correlation algorithm; dose overprediction due to the boundary crossing algorithm; and dose overprediction due to the breakdown of the fictitious cross section method for sampling distances between discrete interactions and the inaccurate evaluation of energy-dependent quantities. These artifacts are now understood quantitatively and analytical expressions for their effect are given.
Transport of photons produced by lightning in clouds
NASA Technical Reports Server (NTRS)
Solakiewicz, Richard
1991-01-01
The optical effects of the light produced by lightning are of interest to atmospheric scientists for a number of reasons. Two techniques are mentioned which are used to explain the nature of these effects: Monte Carlo simulation and an equivalent medium approach. In the Monte Carlo approach, paths of individual photons are simulated; a photon is said to be scattered if it escapes the cloud, otherwise it is absorbed. In the equivalent medium approach, the cloud is replaced by a single obstacle whose properties are specified by bulk parameters obtained by methods due to Twersky. Herein, Boltzmann transport theory is used to obtain photon intensities. The photons are treated like a Lorentz gas. Only elastic scattering is considered and gravitational effects are neglected. Water droplets comprising a cuboidal cloud are assumed to be spherical and homogeneous. Furthermore, it is assumed that the distribution of droplets in the cloud is uniform and that scattering by air molecules is negligible. The time dependence and five-dimensional nature of this problem make it particularly difficult; neither analytic nor numerical solutions are known.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adriano Junior, L.; Fonseca, T. L.; Castro, M. A.
2016-06-21
Theoretical results for the absorption spectrum and electric properties of the enol and keto tautomeric forms of anil derivatives in the gas-phase and in solution are presented. The electronic properties in chloroform, acetonitrile, methanol, and water were determined by carrying out sequential Monte Carlo simulations and quantum mechanics calculations based on the time dependent density functional theory and on the second-order Møller-Plesset perturbation theory method. The results illustrate the role played by electrostatic interactions in the electronic properties of anil derivatives in a liquid environment. There is a significant increase of the dipole moment in solution (20%-100%) relative to the gas-phase value. Solvent effects are mild for the absorption spectrum and linear polarizability but they can be particularly important for the first hyperpolarizability. A large first hyperpolarizability contrast between the enol and keto forms is observed when absorption spectra present intense lowest-energy absorption bands. Dynamic results for the first hyperpolarizability are in qualitative agreement with the available experimental results.
Uludag, K; Kohl, M; Steinbrink, J; Obrig, H; Villringer, A
2002-01-01
Using the modified Lambert-Beer law to analyze attenuation changes measured noninvasively during functional activation of the brain might result in an insufficient separation of chromophore changes ("cross talk") due to the wavelength dependence of the partial path length of photons in the activated volume of the head. The partial path length was estimated by performing Monte Carlo simulations on layered head models. When assuming cortical activation (e.g., at a depth of 8-12 mm), we determine negligible cross talk when considering changes in oxygenated and deoxygenated hemoglobin. When changes in the redox state of cytochrome-c-oxidase are additionally taken into account, however, this analysis results in significant artifacts. An analysis developed for changes in mean time of flight, instead of changes in attenuation, reduces the cross talk for the layers of cortical activation. These results were validated for different oxygen saturations, wavelength combinations, and scattering coefficients. For the analysis of changes in oxygenated and deoxygenated hemoglobin only, low cross talk was also found when the activated volume was assumed to be a 4-mm-diam sphere.
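The cross-talk mechanism can be illustrated numerically with the modified Lambert-Beer law, ΔA(λ) = Σ_i ε_i(λ) Δc_i L(λ): if the analysis assumes partial path lengths L(λ) that differ from the true, wavelength-dependent ones, the fitted chromophore changes become mixed. A hedged sketch with illustrative extinction coefficients and path lengths (not values from the cited study):

```python
import numpy as np

# Modified Lambert-Beer law: dA(lambda) = sum_i eps_i(lambda) * dc_i * L(lambda),
# where L is the (wavelength-dependent) partial path length in the activated
# volume. All numbers below are illustrative assumptions.
wavelengths = [760.0, 830.0]                       # nm
eps = np.array([[1.49, 0.69],                      # rows: wavelengths
                [0.78, 1.10]])                     # cols: [HbR, HbO2] extinction
L_true = np.array([1.20, 0.95])                    # partial path lengths (e.g., MC)
L_used = np.array([1.00, 1.00])                    # path lengths assumed in analysis

dc_true = np.array([-0.5, 1.0])                    # true chromophore changes (a.u.)
dA = (eps * L_true[:, None]) @ dc_true             # forward model: measured dA

dc_est = np.linalg.solve(eps * L_used[:, None], dA)  # inversion, wrong path lengths
print("true   [HbR, HbO2]:", dc_true)
print("fitted [HbR, HbO2]:", dc_est)               # mismatch in L(lambda) -> cross talk
```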
Monte Carlo and discrete-ordinate simulations of spectral radiances in a coupled air-tissue system.
Hestenes, Kjersti; Nielsen, Kristian P; Zhao, Lu; Stamnes, Jakob J; Stamnes, Knut
2007-04-20
We perform a detailed comparison study of Monte Carlo (MC) simulations and discrete-ordinate radiative-transfer (DISORT) calculations of spectral radiances in a 1D coupled air-tissue (CAT) system consisting of horizontal plane-parallel layers. The MC and DISORT models have the same physical basis, including coupling between the air and the tissue, and we use the same air and tissue input parameters for both codes. We find excellent agreement between radiances obtained with the two codes, both above and in the tissue. Our tests cover typical optical properties of skin tissue at the 280, 540, and 650 nm wavelengths. The normalized volume scattering function for internal structures in the skin is represented by the one-parameter Henyey-Greenstein function for large particles and the Rayleigh scattering function for small particles. The CAT-DISORT code is found to be approximately 1000 times faster than the CAT-MC code. We also show that the spectral radiance field is strongly dependent on the inherent optical properties of the skin tissue.
Monte-Carlo simulations of the clean and disordered contact process in three space dimensions
NASA Astrophysics Data System (ADS)
Vojta, Thomas
2013-03-01
The absorbing-state transition in the three-dimensional contact process with and without quenched randomness is investigated by means of Monte-Carlo simulations. In the clean case, a reweighting technique is combined with a careful extrapolation of the data to infinite time to determine with high accuracy the critical behavior in the three-dimensional directed percolation universality class. In the presence of quenched spatial disorder, our data demonstrate that the absorbing-state transition is governed by an unconventional infinite-randomness critical point featuring activated dynamical scaling. The critical behavior of this transition does not depend on the disorder strength, i.e., it is universal. Close to the disordered critical point, the dynamics is characterized by the nonuniversal power laws typical of a Griffiths phase. We compare our findings to the results of other numerical methods, and we relate them to a general classification of phase transitions in disordered systems based on the rare region dimensionality. This work has been supported in part by the NSF under grants no. DMR-0906566 and DMR-1205803.
NASA Astrophysics Data System (ADS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2018-02-01
Monte Carlo method is applied to the study of relaxation of excited electron-hole (e-h) pairs in graphene. The presence of a background of spin-polarized electrons, with high density imposing degeneracy conditions, is assumed. Into such a system, a number of e-h pairs with spin polarization parallel or antiparallel to the background is injected. Two stages of relaxation, thermalization and cooling, are clearly distinguished when the average particle energy ⟨E⟩ and its standard deviation σ_E are examined. At the very beginning of the thermalization phase, holes lose energy to electrons, and after this process is substantially completed, particle distributions reorganize to take a Fermi-Dirac shape. To describe the evolution of ⟨E⟩ and σ_E during thermalization, we define characteristic times τ_th and values at the end of thermalization E_th and σ_th. The dependence of these parameters on various conditions, such as temperature and background density, is presented. It is shown that among the considered parameters, only the standard deviation of the electron energy allows one to distinguish between different cases of relative spin polarizations of the background and excited electrons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Javadi, M.; Abdi, Y., E-mail: y.abdi@ut.ac.ir
2015-08-14
Monte Carlo continuous time random walk simulation is used to study the effects of confinement on electron transport in porous TiO2. In this work, we have introduced a columnar structure instead of the thick layer of porous TiO2 used as anode in conventional dye solar cells. Our simulation results show that the electron diffusion coefficient in the proposed columnar structure is significantly higher than the diffusion coefficient in the conventional structure. It is shown that electron diffusion in the columnar structure depends both on the cross-sectional area of the columns and the porosity of the structure. Also, we demonstrate that such enhanced electron diffusion can be realized in columnar photo-electrodes with a cross-sectional area of ∼1 μm² and porosity of 55%, by a simple and low cost fabrication process. Our results open up a promising approach to achieve solar cells with higher efficiencies by engineering the photo-electrode structure.
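A minimal continuous-time random walk sketch in the spirit of the method: hops on a cubic lattice separated by power-law trap release times, with an effective diffusion coefficient estimated from the mean-square displacement. All parameters are illustrative, not fitted to TiO2.

```python
import random

# Continuous-time random walk: each hop waits t = t0 * xi**(-1/alpha) (power-law
# trap release times, typical of multiple-trapping transport in porous media),
# then moves one lattice unit in a random direction. Parameters are illustrative.
alpha, t0, a = 0.8, 1e-3, 1.0      # dispersion parameter, time scale, lattice const.
t_max, walkers = 1.0, 2000

msd = 0.0
for _ in range(walkers):
    t, x, y, z = 0.0, 0, 0, 0
    while True:
        t += t0 * random.random() ** (-1.0 / alpha)   # waiting time in a trap
        if t > t_max:
            break
        axis, step = random.randrange(3), random.choice((-1, 1))
        if axis == 0: x += step
        elif axis == 1: y += step
        else: z += step
    msd += a * a * (x * x + y * y + z * z)
msd /= walkers
print("MSD at t_max:", msd, "-> D_eff ~ MSD / (6 t_max) =", msd / (6 * t_max))
```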
Designing new guides and instruments using McStas
NASA Astrophysics Data System (ADS)
Farhi, E.; Hansen, T.; Wildes, A.; Ghosh, R.; Lefmann, K.
With the increasing complexity of modern neutron-scattering instruments, the need for powerful tools to optimize their geometry and physical performance (flux, resolution, divergence, etc.) has become essential. As the usual analytical methods reach their limit of validity in the description of fine effects, the use of Monte Carlo simulations, which can handle such effects, has become widespread. The McStas program was developed at Risø National Laboratory in order to provide neutron scattering instrument scientists with an efficient and flexible tool for building Monte Carlo simulations of guides, neutron optics and instruments [1]. To date, the McStas package has been extensively used at the Institut Laue-Langevin, Grenoble, France, for various studies including cold and thermal guides with ballistic geometry, diffractometers, triple-axis, backscattering and time-of-flight spectrometers [2]. In this paper, we present some simulation results concerning different guide geometries that may be used in the future at the Institut Laue-Langevin. Gain factors ranging from two to five may be obtained for the integrated intensities, depending on the exact geometry, the guide coatings and the source.
Monolayers of hard rods on planar substrates. II. Growth
NASA Astrophysics Data System (ADS)
Klopotek, M.; Hansen-Goos, H.; Dixit, M.; Schilling, T.; Schreiber, F.; Oettel, M.
2017-02-01
Growth of hard-rod monolayers via deposition is studied in a lattice model using rods with discrete orientations and in a continuum model with hard spherocylinders. The lattice model is treated with kinetic Monte Carlo simulations and dynamic density functional theory while the continuum model is studied by dynamic Monte Carlo simulations equivalent to diffusive dynamics. The evolution of nematic order (excess of upright particles, "standing-up" transition) is an entropic effect and is mainly governed by the equilibrium solution, rendering a continuous transition [Paper I, M. Oettel et al., J. Chem. Phys. 145, 074902 (2016)]. Strong non-equilibrium effects (e.g., a noticeable dependence on the ratio of rates for translational and rotational moves) are found for attractive substrate potentials favoring lying rods. Results from the lattice and the continuum models agree qualitatively if the relevant characteristic times for diffusion, relaxation of nematic order, and deposition are matched properly. Applicability of these monolayer results to multilayer growth is discussed for a continuum-model realization in three dimensions where spherocylinders are deposited continuously onto a substrate via diffusion.
Modeling Time Dispersion Due to Optical Path Length Differences in Scintillation Detectors
Moses, W.W.; Choong, W.-S.; Derenzo, S.E.
2015-01-01
We characterize the nature of the time dispersion in scintillation detectors caused by path length differences of the scintillation photons as they travel from their generation point to the photodetector. Using Monte Carlo simulation, we find that the initial portion of the distribution (which is the only portion that affects the timing resolution) can usually be modeled by an exponential decay. The peak amplitude and decay time depend on the geometry of the crystal, the position within the crystal at which the scintillation light originates, and the surface finish. In a rectangular parallelepiped LSO crystal with 3 mm × 3 mm cross section and polished surfaces, the decay time ranges from 10 ps (for interactions 1 mm from the photodetector) up to 80 ps (for interactions 50 mm from the photodetector). Over that same range of distances, the peak amplitude ranges from 100% (defined as the peak amplitude for interactions 1 mm from the photodetector) down to 4% for interactions 50 mm from the photodetector. Higher values of the decay time are obtained for rough surfaces, but the exact value depends on the simulation details. Estimates of the decay time and peak amplitude can be made for different cross-section sizes via simple scaling arguments. PMID:25729464
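A deliberately simplified sketch of the path-length dispersion mechanism: photons emitted isotropically at depth z reach the photodetector plane along straight chords only (side-wall reflections in a polished crystal are ignored), which already produces a depth-dependent spread of arrival times. The crystal length and interaction depths follow the abstract; the refractive index is the standard LSO value; everything else is an assumption.

```python
import math, random

# Straight-line transit times for scintillation photons created at depth z in an
# LSO crystal, photodetector at z = 0. Side-wall reflections are ignored, so this
# only illustrates the path-length dispersion mechanism, not the full simulation.
n_lso = 1.82                       # LSO refractive index (standard value)
v = 299.792458 / n_lso             # photon speed in the crystal [mm/ns]

def arrival_times(z, n_photons=100000):
    """Transit times of downward-going photons emitted isotropically at depth z."""
    times = []
    for _ in range(n_photons):
        cos_t = 1.0 - random.random()      # downward hemisphere, uniform in cos(theta)
        times.append(z / cos_t / v)
    return sorted(times)

for z in (1.0, 50.0):              # interaction depths from the abstract [mm]
    t = arrival_times(z)
    t_min = t[0]
    # Spread of the fastest 10% of photons, the part that drives timing resolution.
    print(f"z = {z:4.1f} mm: earliest {t_min*1e3:6.1f} ps, "
          f"10th-percentile spread {(t[len(t)//10] - t_min)*1e3:6.1f} ps")
```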
Ortiz-Rascón, E; Bruce, N C; Rodríguez-Rosales, A A; Garduño-Mejía, J
2016-03-01
We describe the behavior of linearity in diffuse imaging by evaluating the differences between time-resolved images produced by photons arriving at the detector at different times. Two approaches are considered: Monte Carlo simulations and experimental results. The images of two completely opaque bars embedded in either a transparent or a turbid medium with a slab geometry are analyzed; the optical properties of the turbid medium sample are close to those of breast tissue. A simple linearity test was designed involving a direct comparison between the intensity profile produced by two bars scanned at the same time and the intensity profile obtained by adding two profiles of each bar scanned one at a time. It is shown that the linearity improves substantially when short time-of-flight photons are used in the imaging process, but even then the nonlinear behavior prevails. As the edge response function (ERF) has been widely used for testing the spatial resolution of imaging systems, the main implication of a time-dependent linearity is the weakness of the linearity assumption when evaluating the spatial resolution through the ERF in diffuse imaging systems, and the need to evaluate the spatial resolution by other methods.
Quantum Monte Carlo Simulation of Frustrated Kondo Lattice Models
NASA Astrophysics Data System (ADS)
Sato, Toshihiro; Assaad, Fakher F.; Grover, Tarun
2018-03-01
The absence of the negative sign problem in quantum Monte Carlo simulations of spin and fermion systems has different origins. World-line based algorithms for spins require positivity of matrix elements whereas auxiliary field approaches for fermions depend on symmetries such as particle-hole symmetry. For negative-sign-free spin and fermionic systems, we show that one can formulate a negative-sign-free auxiliary field quantum Monte Carlo algorithm that allows Kondo coupling of fermions with the spins. Using this general approach, we study a half-filled Kondo lattice model on the honeycomb lattice with geometric frustration. In addition to the conventional Kondo insulator and antiferromagnetically ordered phases, we find a partial Kondo screened state where spins are selectively screened so as to alleviate frustration, and the lattice rotation symmetry is broken nematically.
Accurate simulations of helium pick-up experiments using a rejection-free Monte Carlo method
NASA Astrophysics Data System (ADS)
Dutra, Matthew; Hinde, Robert
2018-04-01
In this paper, we present Monte Carlo simulations of helium droplet pick-up experiments with the intention of developing a robust and accurate theoretical approach for interpreting experimental helium droplet calorimetry data. Our approach is capable of capturing the evaporative behavior of helium droplets following dopant acquisition, allowing for a more realistic description of the pick-up process. Furthermore, we circumvent the traditional assumption of bulk helium behavior by utilizing density functional calculations of the size-dependent helium droplet chemical potential. The results of this new Monte Carlo technique are compared to commonly used Poisson pick-up statistics for simulations that reflect a broad range of experimental parameters. We conclude by offering an assessment of both of these theoretical approaches in the context of our observed results.
Optimization of Aimpoints for Coordinate Seeking Weapons
2015-09-01
This report describes Monte Carlo analysis of aimpoint optimization for coordinate-seeking weapons. Weapon characteristics such as the radius of the circle containing the weapon aimpoint, impact angle, and dependent (aiming) and independent (ballistic) errors are taken into account before each of the three damage functions representing the weapon is applied.
QuTiP: An open-source Python framework for the dynamics of open quantum systems
NASA Astrophysics Data System (ADS)
Johansson, J. R.; Nation, P. D.; Nori, Franco
2012-08-01
We present an object-oriented open-source framework for solving the dynamics of open quantum systems written in Python. Arbitrary Hamiltonians, including time-dependent systems, may be built up from operators and states defined by a quantum object class, and then passed on to a choice of master equation or Monte Carlo solvers. We give an overview of the basic structure of the framework before detailing the numerical simulation of open system dynamics. Several examples are given to illustrate the build-up to a complete calculation. Finally, we measure the performance of our library against that of current implementations. The framework described here is particularly well suited to the fields of quantum optics, superconducting circuit devices, nanomechanics, and trapped ions, while also being ideal for use in classroom instruction.
Catalogue identifier: AEMB_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMB_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 3
No. of lines in distributed program, including test data, etc.: 16 482
No. of bytes in distributed program, including test data, etc.: 213 438
Distribution format: tar.gz
Programming language: Python
Computer: i386, x86-64
Operating system: Linux, Mac OSX, Windows
RAM: 2+ Gigabytes
Classification: 7
External routines: NumPy (http://numpy.scipy.org/), SciPy (http://www.scipy.org/), Matplotlib (http://matplotlib.sourceforge.net/)
Nature of problem: Dynamics of open quantum systems.
Solution method: Numerical solutions to the Lindblad master equation or Monte Carlo wave function method.
Restrictions: Problems must meet the criteria for using the master equation in Lindblad form.
Running time: A few seconds up to several tens of minutes, depending on the size of the underlying Hilbert space.
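A minimal usage sketch in the spirit of the framework: a decaying, driven qubit solved with both the Lindblad master equation and the Monte Carlo wave-function method. The calls follow the classic QuTiP interface (mesolve/mcsolve); exact signatures may differ between versions.

```python
import numpy as np
from qutip import basis, sigmax, sigmaz, destroy, mesolve, mcsolve

# Rabi oscillations of a qubit with energy relaxation, solved two ways.
H = 2 * np.pi * 0.1 * sigmax()          # drive Hamiltonian (frequency 0.1)
psi0 = basis(2, 0)                      # start in the ground state
times = np.linspace(0.0, 10.0, 100)
c_ops = [np.sqrt(0.05) * destroy(2)]    # collapse operator: decay at rate 0.05
e_ops = [sigmaz()]                      # record <sigma_z>(t)

me = mesolve(H, psi0, times, c_ops, e_ops)             # Lindblad master equation
mc = mcsolve(H, psi0, times, c_ops, e_ops, ntraj=500)  # Monte Carlo trajectories

# The trajectory average converges to the master-equation result as ntraj grows.
print("max |<sz>_me - <sz>_mc| =", np.max(np.abs(me.expect[0] - mc.expect[0])))
```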
Dose calculations using artificial neural networks: A feasibility study for photon beams
NASA Astrophysics Data System (ADS)
Vasseur, Aurélien; Makovicka, Libor; Martin, Éric; Sauget, Marc; Contassot-Vivier, Sylvain; Bahi, Jacques
2008-04-01
Direct dose calculations are a crucial requirement for treatment planning systems. Some methods, such as Monte Carlo, explicitly model particle transport; others depend upon tabulated data or analytic formulae. However, their computation time is too lengthy for clinical use, or their accuracy is insufficient, especially for recent techniques such as intensity-modulated radiotherapy. A new solution based on artificial neural networks (ANNs), called NeuRad, is proposed, and this work extends the properties of such an algorithm. Prior to any calculations, a first phase known as the learning process is necessary: Monte Carlo dose distributions in homogeneous media are used to train the ANN. Depending on the training base, it can then be used as a dose engine either for heterogeneous media or for an unknown material. In this report, two networks were created in order to compute the dose distribution within a homogeneous phantom made of an unknown material and within an inhomogeneous phantom made of water and TA6V4 (a titanium alloy used in hip prostheses). All NeuRad results were compared to Monte Carlo distributions. The latter required about 7 h on a dedicated cluster (10 nodes). NeuRad learning requires between 8 and 18 h (depending upon the size of the training base) on a single low-end computer. However, the results of dose computation with the ANN are available in less than 2 s, again using a low-end computer, for a 150 × 1 × 150 voxel phantom. In the case of the homogeneous medium, the mean deviation in the high-dose region was less than 1.7%. With a TA6V4 hip prosthesis bathed in water, the mean deviation in the high-dose region was less than 4.1%. Further improvements to NeuRad will have to include full 3D calculations, inhomogeneity management and input definitions.
NASA Astrophysics Data System (ADS)
Sengupta, D.; Gao, L.; Wilcox, E. M.; Beres, N. D.; Moosmüller, H.; Khlystov, A.
2017-12-01
Radiative forcing and climate change depend greatly on the earth's surface albedo and its temporal and spatial variation. The surface albedo varies greatly depending on the surface characteristics, ranging from 5-10% for calm ocean waters to 80% for some snow-covered areas. Clean and fresh snow surfaces have the highest albedo and are most sensitive to contamination with light-absorbing impurities, which can greatly reduce surface albedo and change overall radiative forcing estimates. Accurate estimation of snow albedo, as well as understanding of feedbacks on climate from changes in snow-covered areas, is important for radiative forcing, snow energy balance, predicting seasonal snowmelt, and runoff rates. Such information is essential to inform timely decision making of stakeholders and policy makers. Light-absorbing particles deposited onto the snow surface can greatly alter snow albedo and have been identified as a major contributor to regional climate forcing when seasonal snow cover is involved. However, uncertainty associated with quantification of albedo reduction by these light-absorbing particles is high. Here, we use Mie theory (under the assumption of spherical snow grains) to reconstruct the single scattering parameters of snow (i.e., single scattering albedo ω̃ and asymmetry parameter g) from observation-based size distribution information and retrieved refractive index values. The single scattering parameters of impurities are extracted with the same approach from datasets obtained during laboratory combustion of biomass samples. Instead of using plane-parallel approximation methods to account for multiple scattering, we have used a simple Monte Carlo ray/photon tracing approach to calculate the snow albedo. This simple approach considers multiple scattering to be the collection of single scattering events. Using this approach, we vary the effective snow grain size and impurity concentrations to explore the evolution of snow albedo over a wide wavelength range (300-2000 nm). Results will be compared with the SNICAR model to better understand the differences in snow albedo computation between plane-parallel methods and statistical Monte Carlo methods.
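A minimal version of the described Monte Carlo ray/photon tracing idea, treating multiple scattering as a chain of single-scattering events: at each event the photon survives with probability ω̃ and is redirected via the Henyey-Greenstein function with asymmetry g, and the albedo is the fraction escaping upward. Layer optical depth, ω̃, and g below are illustrative assumptions, not retrieved Mie values.

```python
import math, random

def sample_hg(g):
    """Polar scattering cosine from the Henyey-Greenstein phase function."""
    if abs(g) < 1e-6:
        return 2.0 * random.random() - 1.0
    f = (1.0 - g * g) / (1.0 - g + 2.0 * g * random.random())
    return (1.0 + g * g - f * f) / (2.0 * g)

def snow_albedo(w, g, tau_max=30.0, n_photons=20000):
    """Fraction of photons escaping a snow layer of optical depth tau_max."""
    escaped = 0
    for _ in range(n_photons):
        mu, tau = -1.0, 0.0                       # enter straight down at surface
        while True:
            tau -= mu * -math.log(1.0 - random.random())   # next scattering event
            if tau <= 0.0:
                escaped += 1                      # escaped upward: adds to albedo
                break
            if tau >= tau_max or random.random() > w:
                break                             # lost to the ground or absorbed
            ct = sample_hg(g)
            st = math.sqrt(max(0.0, 1.0 - ct * ct))
            mu = mu * ct + math.sqrt(max(0.0, 1.0 - mu * mu)) * st * math.cos(
                2.0 * math.pi * random.random())
    return escaped / n_photons

# Light-absorbing impurities lower the single-scattering albedo w and hence the
# snow albedo; the w and g values here are illustrative.
for label, w in (("clean   ", 0.9995), ("polluted", 0.985)):
    print(label, "snow albedo ~", round(snow_albedo(w, 0.88), 3))
```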
Polynomial complexity despite the fermionic sign
NASA Astrophysics Data System (ADS)
Rossi, R.; Prokof'ev, N.; Svistunov, B.; Van Houcke, K.; Werner, F.
2017-04-01
It is commonly believed that in unbiased quantum Monte Carlo approaches to fermionic many-body problems, the infamous sign problem generically implies prohibitively large computational times for obtaining thermodynamic-limit quantities. We point out that for convergent Feynman diagrammatic series evaluated with a recently introduced Monte Carlo algorithm (see Rossi R., arXiv:1612.05184), the computational time increases only polynomially with the inverse error on thermodynamic-limit quantities.
Real-time dynamics of matrix quantum mechanics beyond the classical approximation
NASA Astrophysics Data System (ADS)
Buividovich, Pavel; Hanada, Masanori; Schäfer, Andreas
2018-03-01
We describe a numerical method which makes it possible to go beyond the classical approximation for the real-time dynamics of many-body systems by approximating the many-body Wigner function by the most general Gaussian function with time-dependent mean and dispersion. On a simple example of a classically chaotic system with two degrees of freedom we demonstrate that this Gaussian state approximation is accurate for significantly smaller field strengths and longer times than the classical one. Applying this approximation to matrix quantum mechanics, we demonstrate that the quantum Lyapunov exponents are in general smaller than their classical counterparts, and even seem to vanish below some temperature. This behavior resembles the finite-temperature phase transition which was found for this system in Monte-Carlo simulations, and ensures that the system does not violate the Maldacena-Shenker-Stanford bound λ_L ≤ 2πT, which inevitably happens for classical dynamics at sufficiently small temperatures.
Directional change of fluid particles in two-dimensional turbulence and of football players
NASA Astrophysics Data System (ADS)
Kadoch, Benjamin; Bos, Wouter J. T.; Schneider, Kai
2017-06-01
Multiscale directional statistics are investigated in two-dimensional incompressible turbulence. It is shown that the short-time behavior of the mean angle of directional change of fluid particles is linearly dependent on the time lag and that no inertial range behavior is observed in the directional change associated with the enstrophy-cascade range. In simulations of the inverse-cascade range, the directional change shows a power law behavior at inertial range time scales. By comparing the directional change in space-periodic and wall-bounded flow, it is shown that the probability density function of the directional change at long times carries the signature of the confinement. The geometrical origin of this effect is validated by Monte Carlo simulations. The same effect is also observed in the directional statistics computed from the trajectories of football players (soccer players in American English).
Lévy walks with variable waiting time: A ballistic case
NASA Astrophysics Data System (ADS)
Kamińska, A.; Srokowski, T.
2018-06-01
The Lévy walk process for a lower interval of an excursion-time distribution (α < 1) is discussed. The particle rests between the jumps, and the waiting time is position-dependent. Two cases are considered: a rising and a diminishing waiting time rate ν(x), which require different approximations of the master equation. The process comprises two phases of the motion: particles at rest and in flight. The density distributions for them are derived as solutions of the corresponding fractional equations. For strongly falling ν(x), the resting-particle density assumes the α-stable form (truncated at the fronts), and the process resolves itself to Lévy flights. The diffusion is enhanced in this case but is no longer ballistic, in contrast to the case of rising ν(x). The analytical results are compared with Monte Carlo trajectory simulations. The results qualitatively agree with observed properties of human and animal movements.
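A minimal trajectory sketch of such a process: ballistic flights with Pareto-tailed durations (α < 1) alternate with rests whose mean duration is set by a position-dependent rate ν(x). The rising form ν(x) = ν0(1 + |x|) and all parameter values are illustrative assumptions, not the paper's choices.

```python
import random

# Levy walk with rests: flight durations follow a Pareto law with index alpha < 1
# (ballistic phase), and the mean rest time at position x is 1/nu(x). The choice
# nu(x) = nu0 * (1 + |x|) is an illustrative rising rate.
alpha, v, nu0 = 0.7, 1.0, 1.0

def flight_time():
    return random.random() ** (-1.0 / alpha)      # Pareto(alpha) tail, t >= 1

def rest_time(x):
    nu = nu0 * (1.0 + abs(x))                     # position-dependent waiting rate
    return random.expovariate(nu)

def trajectory(t_max):
    t, x = 0.0, 0.0
    while t < t_max:
        tau = flight_time()                       # ballistic flight
        x += random.choice((-1.0, 1.0)) * v * min(tau, t_max - t)
        t += tau
        if t < t_max:
            t += rest_time(x)                     # rest between jumps
    return x

samples = [trajectory(100.0) for _ in range(5000)]
print("second moment <x^2>:", sum(s * s for s in samples) / len(samples))
```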
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I., E-mail: tominaga@konan-u.ac.jp, E-mail: sshibata@post.kek.jp, E-mail: Sergei.Blinnikov@itep.ru
We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM) which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering to the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.
NASA Astrophysics Data System (ADS)
Yu, Bo; Ning, Chao-lie; Li, Bing
2017-03-01
A probabilistic framework for durability assessment of concrete structures in marine environments was proposed in terms of reliability and sensitivity analysis, which takes into account the uncertainties under the environmental, material, structural and executional conditions. A time-dependent probabilistic model of chloride ingress was established first to consider the variations in various governing parameters, such as the chloride concentration, chloride diffusion coefficient, and age factor. Then the Nataf transformation was adopted to transform the non-normal random variables from the original physical space into the independent standard Normal space. After that the durability limit state function and its gradient vector with respect to the original physical parameters were derived analytically, based on which the first-order reliability method was adopted to analyze the time-dependent reliability and parametric sensitivity of concrete structures in marine environments. The accuracy of the proposed method was verified by comparing with the second-order reliability method and the Monte Carlo simulation. Finally, the influences of environmental conditions, material properties, structural parameters and execution conditions on the time-dependent reliability of concrete structures in marine environments were also investigated. The proposed probabilistic framework can be implemented in the decision-making algorithm for the maintenance and repair of deteriorating concrete structures in marine environments.
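A crude Monte Carlo counterpart to the FORM analysis described above, under simplifying assumptions: chloride ingress follows the error-function solution of Fick's law with an ageing diffusion coefficient D(t) = D0 (t0/t)^m, the random variables are sampled independently (no Nataf transformation), and failure occurs when the chloride content at the cover depth exceeds a critical threshold. Distribution types and parameter values are illustrative.

```python
import math, random

# Fick's-law chloride profile with an ageing diffusivity D(t) = D0 * (t0 / t)**m:
#   C(x, t) = Cs * (1 - erf(x / (2 * sqrt(D(t) * t))))
# Failure when C(cover, t) exceeds the critical content Ccr. Values illustrative.
t0, t = 0.0767, 50.0          # reference age (28 d in years) and service time [yr]

def failure_sample():
    Cs  = random.lognormvariate(math.log(3.0), 0.3)      # surface chloride [% binder]
    D0  = random.lognormvariate(math.log(1.0e-11), 0.4)  # ref. diffusivity [m^2/s]
    m   = random.gauss(0.4, 0.08)                        # age factor
    x   = random.gauss(0.05, 0.005)                      # concrete cover [m]
    Ccr = random.gauss(0.6, 0.1)                         # critical content [% binder]
    D = D0 * (t0 / t) ** m
    seconds = t * 365.25 * 24 * 3600.0
    C = Cs * (1.0 - math.erf(x / (2.0 * math.sqrt(D * seconds))))
    return C > Ccr

N = 100000
pf = sum(failure_sample() for _ in range(N)) / N
print(f"P_f ~ {pf:.4f}  (the reliability index beta follows as -Phi^-1(P_f))")
```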
Manganaro, Lorenzo; Russo, Germano; Cirio, Roberto; Dalmasso, Federico; Giordanengo, Simona; Monaco, Vincenzo; Muraro, Silvia; Sacchi, Roberto; Vignati, Anna; Attili, Andrea
2017-04-01
Advanced ion beam therapeutic techniques, such as hypofractionation, respiratory gating, or laser-based pulsed beams, have dose rate time structures which are substantially different from those found in conventional approaches. The biological impact of the time structure is mediated through the β parameter in the linear quadratic (LQ) model. The aim of this study was to assess the impact of changes in the value of the β parameter on the treatment outcomes, also accounting for noninstantaneous intrafraction dose delivery or fractionation and comparing the effects of using different primary ions. An original formulation of the microdosimetric kinetic model (MKM) is used (named MCt-MKM), in which a Monte Carlo (MC) approach was introduced to account for the stochastic spatio-temporal correlations characteristic of the irradiations and the cellular repair kinetics. A modified version of the kinetic equations, validated on experimental in vitro cell survival data, was also introduced. The model, trained on HSG cells, was used to evaluate the relative biological effectiveness (RBE) for treatments with acute and protracted fractions. Exemplary cases of prostate cancer irradiated with different ion beams were evaluated to assess the impact of the temporal effects. The LQ parameters for a range of cell lines (V79, HSG, and T1) and ion species (H, He, C, and Ne) were evaluated and compared with the experimental data available in the literature, with good results. Notably, in contrast to the original MKM formulation, the MCt-MKM explicitly predicts an ion- and LET-dependent β compatible with observations. The data from a split-dose experiment were used to experimentally determine the value of the parameter related to the cellular repair kinetics. Concerning the clinical case considered, an RBE decrease was observed, depending on the dose, ion, and LET, reaching up to 3% of the acute value in the case of a protraction of the delivery of 10 min. The intercomparison between different ions shows that the clinical optimality is strongly dependent on a complex interplay between the different physical and biological quantities considered. The present study provides a framework for exploiting the temporal effects of dose delivery. The results show the possibility of optimizing the treatment outcomes accounting for the correlation between the specific dose rate time structure and the spatial characteristics of the LET distribution, depending on the ion type used. © 2017 American Association of Physicists in Medicine.
Monte Carlo simulations of precise timekeeping in the Milstar communication satellite system
NASA Technical Reports Server (NTRS)
Camparo, James C.; Frueholz, R. P.
1995-01-01
The Milstar communications satellite system will provide secure antijam communication capabilities for DOD operations into the next century. In order to accomplish this task, the Milstar system will employ precise timekeeping on its satellites and at its ground control stations. The constellation will consist of four satellites in geosynchronous orbit, each carrying a set of four rubidium (Rb) atomic clocks. Several times a day, during normal operation, the Mission Control Element (MCE) will collect timing information from the constellation, and after several days use this information to update the time and frequency of the satellite clocks. The MCE will maintain precise time with a cesium (Cs) atomic clock, synchronized to UTC(USNO) via a GPS receiver. We have developed a Monte Carlo simulation of Milstar's space segment timekeeping. The simulation includes the effects of: uplink/downlink time transfer noise; satellite crosslink time transfer noise; satellite diurnal temperature variations; satellite and ground station atomic clock noise; and also quantization limits regarding satellite time and frequency corrections. The Monte Carlo simulation capability has proven to be an invaluable tool in assessing the performance characteristics of various timekeeping algorithms proposed for Milstar, and also in highlighting the timekeeping capabilities of the system. Here, we provide a brief overview of the basic Milstar timekeeping architecture as it is presently envisioned. We then describe the Monte Carlo simulation of space segment timekeeping, and provide examples of the simulation's efficacy in resolving timekeeping issues.
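A minimal sketch of this kind of timekeeping simulation: a single satellite clock accumulates time error from white and random-walk frequency noise, and a periodic ground update applies a quantized correction derived from a noisy time-transfer measurement. All noise magnitudes, intervals, and quantization levels are assumptions, not Milstar parameters.

```python
import random

# Discrete-time clock model: x = time error [s], y = fractional frequency error.
# White-FM and random-walk-FM noises drive the clock; every `update` steps the
# control segment measures x (with time-transfer noise) and applies a quantized
# correction. All noise magnitudes are illustrative.
dt, n_steps, update = 100.0, 5000, 864          # s, total steps, steps per update
q_wfm, q_rwfm = 3e-12, 1e-16                    # noise strengths (assumed)
xfer_noise, quantum = 5e-9, 1e-9                # time-transfer noise, correction LSB

x, y, worst = 0.0, 0.0, 0.0
for k in range(1, n_steps + 1):
    y += random.gauss(0.0, q_rwfm)              # random-walk FM
    x += y * dt + random.gauss(0.0, q_wfm) * dt # frequency offset + white FM
    if k % update == 0:
        measured = x + random.gauss(0.0, xfer_noise)
        correction = quantum * round(measured / quantum)   # quantized correction
        x -= correction
    worst = max(worst, abs(x))
print("worst-case time error over the run: %.2e s" % worst)
```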
Chen, Jin; Venugopal, Vivek; Intes, Xavier
2011-01-01
Time-resolved fluorescence optical tomography allows 3-dimensional localization of multiple fluorophores based on lifetime contrast while providing a unique data set for improved resolution. However, to employ the full fluorescence time measurements, a light propagation model that accurately simulates weakly diffused and multiply scattered photons is required. In this article, we derive a computationally efficient Monte Carlo based method to compute time-gated fluorescence Jacobians for the simultaneous imaging of two fluorophores with lifetime contrast. The Monte Carlo based formulation is validated on a synthetic murine model simulating the uptake in the kidneys of two distinct fluorophores with lifetime contrast. Experimentally, the method is validated using capillaries filled with 2.5 nmol of ICG and IRDye™800CW, respectively, embedded in a diffusive medium mimicking the average optical properties of mice. Combining multiple time gates in one inverse problem allows the simultaneous reconstruction of multiple fluorophores with increased resolution and minimal crosstalk using the proposed formulation. PMID:21483610
A stochastic hybrid systems based framework for modeling dependent failure processes
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313
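A toy illustration of the two reliability estimates named above, applied to a scalar nonnegative degradation level with known first two moments: FOSM gives R ≈ Φ(β) with β = (threshold - mean)/std, and the Markov inequality gives the bound R ≥ 1 - E[X]/threshold. The moment values are illustrative stand-ins for the SHS conditional moments.

```python
import math

# Reliability from the first two moments of a nonnegative degradation level X(t),
# as in the FOSM estimate and the Markov-inequality lower bound named above.
# Moment values are illustrative.
mean_x, std_x, threshold = 0.6, 0.15, 1.0

# FOSM: reliability index beta and R ~ Phi(beta) for the margin threshold - X.
beta = (threshold - mean_x) / std_x
R_fosm = 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

# Markov inequality: P(X >= a) <= E[X]/a, hence R = P(X < a) >= 1 - E[X]/a.
R_markov_lower = 1.0 - mean_x / threshold

print(f"FOSM estimate      R ~ {R_fosm:.4f}  (beta = {beta:.2f})")
print(f"Markov lower bound R >= {R_markov_lower:.4f}")
```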
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chason, E.; Chan, W. L.; Bharathi, M. S.
Low-energy ion bombardment produces spontaneous periodic structures (sputter ripples) on many surfaces. Continuum theories describe the pattern formation in terms of ion-surface interactions and surface relaxation kinetics, but many features of these models (such as defect concentration) are unknown or difficult to determine. In this work, we present results of kinetic Monte Carlo simulations that model surface evolution using discrete atomistic versions of the physical processes included in the continuum theories. From simulations over a range of parameters, we obtain the dependence of the ripple growth rate, wavelength, and velocity on the ion flux and temperature. The results are discussed in terms of the thermally dependent concentration and diffusivity of ion-induced surface defects. We find that in the early stages of ripple formation the simulation results are surprisingly well described by the predictions of the continuum theory, in spite of simplifying approximations used in the continuum model.
How Monte Carlo heuristics aid to identify the physical processes of drug release kinetics.
Lecca, Paola
2018-01-01
We implement a Monte Carlo heuristic algorithm to model drug release from a solid dosage form. We show that with Monte Carlo simulations it is possible to identify and explain the causes of the unsatisfactory predictive power of current drug release models. It is well known that the power-law and exponential models, as well as those derived from or inspired by them, accurately reproduce only the first 60% of the release curve of a drug from a dosage form. In this study, by using Monte Carlo simulation approaches, we show that these models fit quite accurately almost the entire release profile when the release kinetics is not governed by the coexistence of different physico-chemical mechanisms. We show that the accuracy of the traditional models is comparable with that of Monte Carlo heuristics when these heuristics approximate and oversimplify the phenomenology of drug release. This observation suggests developing and using novel Monte Carlo simulation heuristics able to describe the complexity of the release kinetics, and consequently to generate data more similar to those observed in real experiments. Implementing Monte Carlo simulation heuristics of the drug release phenomenology can be much more straightforward and efficient than hypothesizing and implementing from scratch complex mathematical models of the physical processes involved in drug release. Identifying and understanding through simulation heuristics which processes of this phenomenology reproduce the observed data, and then formalizing them in mathematics, may allow avoiding time-consuming, trial-and-error-based regression procedures. Three bullet points highlight the customization of the procedure:
• An efficient heuristic based on Monte Carlo methods for simulating drug release from a solid dosage form is presented. It encodes the model of the physical process in a simple but accurate way in the formula for the Monte Carlo Micro Step (MCS) time interval.
• Given the experimentally observed curve of drug release, we point out how Monte Carlo heuristics can be integrated into an evolutionary algorithmic approach to infer the form of the MCS best fitting the observed data, and thus the observed release kinetics.
• The software implementing the method is written in the R language, the free language most widely used in the bioinformatics community.
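A minimal Monte Carlo release sketch in the spirit of these heuristics: drug molecules random-walk inside a one-dimensional slab and count as released on reaching a face, with one sweep over the remaining molecules playing the role of the Monte Carlo Micro Step. Geometry and step rules are illustrative (the sketch below is in Python, whereas the cited software is in R).

```python
import random

# Drug molecules random-walk inside a 1D slab of half-thickness L and are released
# on reaching either face. Each sweep over all remaining molecules is one Monte
# Carlo Micro Step (MCS), the simulated time unit. Setup is illustrative.
L, n_molecules, n_mcs = 20, 5000, 2000
positions = [random.randint(-L + 1, L - 1) for _ in range(n_molecules)]
release_curve = []

for step in range(n_mcs):
    remaining = []
    for x in positions:
        x += random.choice((-1, 1))
        if -L < x < L:
            remaining.append(x)        # still inside the matrix
    positions = remaining
    release_curve.append(1.0 - len(positions) / n_molecules)

# Early-time release is roughly ~ sqrt(t) (Higuchi-like), the regime where the
# power-law models fit well; late-time release deviates, as discussed above.
for step in (10, 100, 1000, n_mcs - 1):
    print(f"MCS {step:5d}: released fraction = {release_curve[step]:.3f}")
```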
Ali, S. M.; Mehmood, C. A; Khan, B.; Jawad, M.; Farid, U; Jadoon, J. K.; Ali, M.; Tareen, N. K.; Usman, S.; Majid, M.; Anwar, S. M.
2016-01-01
In the smart grid paradigm, consumer demands are random and time-dependent, and are governed by stochastic probabilities. The stochastically varying consumer demands have put policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the consumer deterministic-stochastic demand models. Sudden drifts in weather parameters affect the living standards of the consumers, which in turn influence the power demands. Considering the above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of the electrical utilities. Our work presents a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent consumer random demands. Moreover, the Gaussian probability outcome of the utility revenues is based on the varying consumer demand data pattern. Furthermore, Standard Monte Carlo (SMC) simulations are performed that validated the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provided a relationship between the dependent variable (demand) and independent variables (weather data) for utility load management, generation control, and network expansion. PMID:27314229
Simulation-Based Model Checking for Nondeterministic Systems and Rare Events
2016-03-24
In the past year, we investigated AO* search and Monte Carlo Tree Search algorithms to complement and enhance CMU's SMCMDP. Monte Carlo Tree Search builds a search tree that can be used to find the probability of reachability for a property expressed in PRISM's Probabilistic LTL; finding the maximum probability in this way offers computational savings, particularly when handling very large models, although the Monte Carlo sampling process in SMCMDP can take a long time.
Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories
NASA Technical Reports Server (NTRS)
Olds, John; Way, David
2001-01-01
Recently, strong evidence of liquid water under the surface of Mars and a meteorite that might contain ancient microbes have renewed interest in Mars exploration. With this renewed interest, NASA plans to send spacecraft to Mars approximately every 26 months. These future spacecraft will return higher-resolution images, make precision landings, engage in longer-ranging surface maneuvers, and even return Martian soil and rock samples to Earth. Future robotic missions and any human missions to Mars will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Potential sources of water and other interesting geographic features are often located near hazards, such as within craters or along canyon walls. In order for more accurate landings to be made, spacecraft entering the Martian atmosphere need to use lift to actively control the entry. This active guidance results in much smaller landing footprints. Planning for these missions will depend heavily on Monte Carlo analysis. Monte Carlo trajectory simulations have been used with a high degree of success in recent planetary exploration missions. These analyses ascertain the impact of off-nominal conditions during a flight and account for uncertainty. Uncertainties generally stem from limitations in manufacturing tolerances, measurement capabilities, analysis accuracies, and environmental unknowns. Thousands of off-nominal trajectories are simulated by randomly dispersing uncertainty variables and collecting statistics on forecast variables. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem, beginning with the input uncertainties and proceeding to the forecast outputs. It lacks a mechanism to affect or alter the uncertainties based on the forecast results. If the results are unacceptable, the current practice is to use an iterative, trial-and-error approach to reconcile discrepancies. Therefore, an improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively.
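A toy dispersion analysis illustrating the forward-driven workflow just described: entry-state uncertainties are sampled, each sample is propagated through a deliberately simple vacuum ballistic-range model, and landing statistics are collected. The model and all numbers are illustrative stand-ins for a full entry-trajectory simulation.

```python
import math, random

# Toy Monte Carlo dispersion analysis: disperse entry-state uncertainties,
# propagate each sample through a simple vacuum ballistic-range model, and
# collect landing statistics. Not a Mars entry model; values are illustrative.
g = 3.71                                    # Mars surface gravity [m/s^2]

def landing_range(v, gamma):
    """Vacuum flat-planet ballistic range [m] for speed v and path angle gamma."""
    return v * v * math.sin(2.0 * gamma) / g

nominal_v, nominal_gamma = 350.0, math.radians(30.0)
samples = []
for _ in range(10000):
    v = random.gauss(nominal_v, 5.0)                        # velocity uncertainty
    gamma = random.gauss(nominal_gamma, math.radians(0.5))  # angle uncertainty
    samples.append(landing_range(v, gamma))

mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)) ** 0.5
print(f"footprint: mean {mean/1000.0:.2f} km, 3-sigma {3.0*std/1000.0:.2f} km")
```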
The QUELCE Method: Using Change Drivers to Estimate Program Costs
2016-08-01
QUELCE computes a distribution of program costs based on Monte Carlo analysis of program cost drivers, which are assessed via analyses of dependency structure across possible scenarios. These include a dependency structure matrix (DSM) to understand the interaction of change drivers for a specific project, performed by the SEI or by company analysts. From the workshop results, analysts create a DSM of the change drivers.
Heterogeneous network epidemics: real-time growth, variance and extinction of infection.
Ball, Frank; House, Thomas
2017-09-01
Recent years have seen a large amount of interest in epidemics on networks as a way of representing the complex structure of contacts capable of spreading infections through the modern human population. The configuration model is a popular choice in theoretical studies since it combines the ability to specify the distribution of the number of contacts (degree) with analytical tractability. Here we consider the early real-time behaviour of the Markovian SIR epidemic model on a configuration model network using a multitype branching process. We find closed-form analytic expressions for the mean and variance of the number of infectious individuals as a function of time and the degree of the initially infected individual(s), and write down a system of differential equations for the probability of extinction by time t that is numerically fast compared to Monte Carlo simulation. We show that these quantities are all sensitive to the degree distribution; in particular, we confirm that the mean prevalence of infection depends on the first two moments of the degree distribution and the variance in prevalence depends on the first three moments. In contrast to most existing analytic approaches, the accuracy of these results does not depend on having a large number of infectious individuals, meaning that in the large-population limit they would be asymptotically exact even for one initial infectious individual.
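A Monte Carlo counterpart to the analytic moments described: a Gillespie-style simulation of the Markovian SIR model on a configuration-model network, estimating the mean and variance of prevalence at a fixed time. The degree distribution, rates, and network size are illustrative, and networkx's configuration_model is used for the graph construction.

```python
import random
import networkx as nx

# Gillespie simulation of the Markovian SIR model on a configuration-model
# network: estimates the mean and variance of prevalence (number infectious)
# at time t_obs, the quantities given in closed form above. Parameters illustrative.
beta, gamma, t_obs, runs, n = 1.0, 1.0, 3.0, 100, 200

def prevalence_at(t_obs):
    degrees = [random.choice((2, 3, 8)) for _ in range(n)]   # assumed degree dist.
    if sum(degrees) % 2:
        degrees[0] += 1                                      # degree sum must be even
    g = nx.Graph(nx.configuration_model(degrees))            # collapse parallel edges
    status = {v: "S" for v in g}
    seed = random.choice(list(g))
    status[seed] = "I"
    infectious, t = {seed}, 0.0
    while infectious:
        si_edges = [(u, w) for u in infectious for w in g[u] if status[w] == "S"]
        rate = beta * len(si_edges) + gamma * len(infectious)
        t += random.expovariate(rate)
        if t >= t_obs:
            break
        if random.random() < beta * len(si_edges) / rate:    # infection event
            w = random.choice(si_edges)[1]
            status[w] = "I"
            infectious.add(w)
        else:                                                # recovery event
            u = random.choice(tuple(infectious))
            status[u] = "R"
            infectious.remove(u)
    return len(infectious)

samples = [prevalence_at(t_obs) for _ in range(runs)]
mean = sum(samples) / runs
var = sum((s - mean) ** 2 for s in samples) / (runs - 1)
print(f"prevalence at t = {t_obs}: mean {mean:.2f}, variance {var:.2f}")
```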
Monte Carlo simulation of nonadiabatic expansion in cometary atmospheres - Halley
NASA Astrophysics Data System (ADS)
Hodges, R. R.
1990-02-01
Monte Carlo methods developed for the characterization of velocity-dependent collision processes and ballistic transport in planetary exospheres form the basis of the present computer simulation of icy comet atmospheres, which iteratively undertakes the simultaneous determination of the velocity distributions of five neutral species (water, together with suprathermal OH, H2, O, and H) in a flow regime varying from the hydrodynamic to the ballistic. Experimental data from the neutral mass spectrometer carried by Giotto during its March 1986 encounter with Halley are compared with a model atmosphere.
Monte Carlo study of the effective Sherman function for electron polarimetry
NASA Astrophysics Data System (ADS)
Drągowski, M.; Włodarczyk, M.; Weber, G.; Ciborowski, J.; Enders, J.; Fritzsche, Y.; Poliszczuk, A.
2016-12-01
The PEBSI Monte Carlo simulation was upgraded to make it suitable for electron Mott polarimetry. The description of Mott scattering was improved, and polarisation transfer in Møller scattering was included in the code. Improved agreement was achieved between the simulation and available experimental data for a 100 keV polarised electron beam scattering off gold foils of various thicknesses. The dependence of the effective Sherman function on scattering angle and target thickness, as well as the method of finding optimal conditions for Mott polarimetry measurements, were analysed.
GPU accelerated Monte-Carlo simulation of SEM images for metrology
NASA Astrophysics Data System (ADS)
Verduin, T.; Lokhorst, S. R.; Hagen, C. W.
2016-03-01
In this work we address the computation times of numerical studies in dimensional metrology. In particular, full Monte-Carlo simulation programs for scanning electron microscopy (SEM) image acquisition are known to be notoriously slow. Our quest to reduce the computation time of SEM image simulation has led us to investigate the use of graphics processing units (GPUs) for metrology. We have succeeded in creating a full Monte-Carlo simulation program for SEM images, which runs entirely on a GPU. The physical scattering models of this GPU simulator are identical to those of a previous CPU-based simulator, which includes the dielectric function model for inelastic scattering and also refinements for low-voltage SEM applications. As a case study for the performance, we considered the simulated exposure of a complex feature: an isolated silicon line with rough sidewalls located on a flat silicon substrate. The surface of the rough feature is decomposed into 408 012 triangles. We have used an exposure dose of 6 mC/cm², which corresponds to 6 553 600 primary electrons on average (Poisson distributed). We repeat the simulation for various primary electron energies: 300 eV, 500 eV, 800 eV, 1 keV, 3 keV and 5 keV. At first we run the simulation on a GeForce GTX480 from NVIDIA. The very same simulation is duplicated on our CPU-based program, for which we have used an Intel Xeon X5650. Apart from statistical fluctuations, no difference is found between the CPU- and GPU-simulated results. The GTX480 generates the images (depending on the primary electron energy) 350 to 425 times faster than a single-threaded Intel X5650 CPU. Although this is a tremendous speedup, we have not actually reached the maximum throughput because of the limited amount of available memory on the GTX480. Nevertheless, the speedup enables the fast acquisition of simulated SEM images for metrology. We now have the potential to investigate case studies in CD-SEM metrology which would otherwise take unreasonable amounts of computation time.
Shore platform downwearing in eastern Canada; A 9-14 year micro-erosion meter record
NASA Astrophysics Data System (ADS)
Trenhaile, Alan S.; Porter, Neil J.
2018-06-01
Downwearing rates (erosion in the vertical plane) were measured with a micro-erosion meter (MEM) in eastern Canada, on an argillaceous, sub-horizontal shore platform at Mont Louis in eastern Québec, and on two sloping, basaltic and sandstone platforms at, respectively, Scots Bay and Burntcoat Head in the Bay of Fundy, Nova Scotia. The original data covered the period from 2002 to 2009. This dataset was extended by measurements repeated at surviving MEM stations in 2017, producing records ranging over 9-14 years, depending on when each station was installed. Because of rapid surface downwearing, many of the original MEM stations were inoperable in 2017, especially at Burntcoat Head. Nevertheless, data were obtained from 19 stations at Burntcoat (35% of the 2009 original), 25 at Mont Louis (83% of the original), and 38 at Scots Bay (75% of the original). For the stations at Mont Louis and Scots Bay that were still functioning in 2017, there were no significant differences in rates of downwearing over the shorter (from station installation up to 2009) and extended periods (from installation to 2017). Mean rates of downwearing calculated from all the stations in each area declined through time, however, due to the loss of the more rapidly eroding stations. A simple procedure, proposed to compensate for this decrease, produced mean downwearing rates that were broadly similar to those reported over the original measurement period. There were significant relationships between downwearing rates and elevation (R² = 0.32) and between downwearing rates and rock hardness (R² = 0.41) in the extended record at Scots Bay, and a small but significant relationship between downwearing rates and rock hardness at Mont Louis (R² = 0.17). Differences in downwearing rates across the platforms suggest that salt weathering and wetting and drying are dominant weathering mechanisms at Scots Bay and Mont Louis. Chemical weathering of the sandstone cementing agent and the premature removal of weathered grains by wave-generated bottom currents may, however, be more important at Burntcoat Head.
NASA Technical Reports Server (NTRS)
Lingenfelter, Richard E.
1989-01-01
Comparisons of Solar Maximum Mission (SMM) observations of gamma-ray line and neutron emission with theoretical calculations of their expected production by flare-accelerated ion interactions in the solar atmosphere have led to significant advances in the understanding of solar flare particle acceleration and interaction, as well as the flare process itself. These comparisons have enabled the determination of not only the total number and energy spectrum of accelerated ions trapped at the sun, but also the ion angular distribution as they interact in the solar atmosphere. The Monte Carlo program was modified to include in the calculations of ion trajectories the effects both of mirroring in converging magnetic fields and of pitch-angle scattering. By comparing the results of these calculations with the SMM observations, not only the angular distribution of the interacting ions but also the initial angular distribution of the ions at acceleration can be determined. The reliable determination of the solar photospheric He-3 abundance is of great importance for understanding nucleosynthesis in the early universe and its implications for cosmology, as well as for the study of the evolution of the sun. It is also essential for determinations of the spectrum and total number of flare-accelerated ions from the SMM/GRS gamma-ray line measurements. Systematic Monte Carlo calculations of the time dependence were made as a function of the He-3 abundance and other variables. A new series of calculations was compared for the time-dependent flux of 2.223 MeV neutron capture line emission and the ratio of the time-integrated flux in the 2.223 MeV line to that in the 4.1 to 6.4 MeV nuclear deexcitation band.
Ligand protons in a frozen solution of copper histidine relax via a T1e-driven three-spin mechanism
NASA Astrophysics Data System (ADS)
Stoll, S.; Epel, B.; Vega, S.; Goldfarb, D.
2007-10-01
Davies electron-nuclear double resonance spectra can exhibit strong asymmetries for long mixing times, short repetition times, and large thermal polarizations. These asymmetries can be used to determine nuclear relaxation rates in paramagnetic systems. Measurements of frozen solutions of copper(L-histidine)2 reveal a strong field dependence of the relaxation rates of the protons in the histidine ligand, increasing from low (g‖) to high (g⊥) field. It is shown that this can be attributed to a concentration-dependent T1e-driven relaxation process involving strongly mixed states of three spins: the histidine proton, the Cu(II) electron spin of the same complex, and another distant electron spin with a resonance frequency differing from the spectrometer frequency approximately by the proton Larmor frequency. The protons relax more efficiently in the g⊥ region, since the number of distant electrons able to participate in this relaxation mechanism is higher than in the g‖ region. Analytical expressions for the associated nuclear polarization decay rate T_een^(-1) are developed and Monte Carlo simulations are carried out, reproducing both the field and the concentration dependences of the nuclear relaxation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewkow, N. R.; Kharchenko, V.
2014-08-01
The precipitation of energetic neutral atoms, produced through charge exchange collisions between solar wind ions and thermal atmospheric gases, is investigated for the Martian atmosphere. Connections between parameters of precipitating fast ions and resulting escape fluxes, altitude-dependent energy distributions of fast atoms and their coefficients of reflection from the Mars atmosphere, are established using accurate cross sections in Monte Carlo (MC) simulations. Distributions of secondary hot (SH) atoms and molecules, induced by precipitating particles, have been obtained and applied for computations of the non-thermal escape fluxes. A new collisional database on accurate energy-angular-dependent cross sections, required for description of the energy-momentum transfer in collisions of precipitating particles and production of non-thermal atmospheric atoms and molecules, is reported with analytic fitting equations. Three-dimensional MC simulations with accurate energy-angular-dependent cross sections have been carried out to track large ensembles of energetic atoms in a time-dependent manner as they propagate into the Martian atmosphere and transfer their energy to the ambient atoms and molecules. Results of the MC simulations on the energy-deposition altitude profiles, reflection coefficients, and time-dependent atmospheric heating, obtained for the isotropic hard sphere and anisotropic quantum cross sections, are compared. Atmospheric heating rates, thermalization depths, altitude profiles of production rates, energy distributions of SH atoms and molecules, and induced escape fluxes have been determined.
NASA Astrophysics Data System (ADS)
Zoller, Christian; Hohmann, Ansgar; Ertl, Thomas; Kienle, Alwin
2017-07-01
The Monte Carlo method is often referred to as the gold standard for calculating light propagation in turbid media [1]. Especially for complex shaped geometries, where no analytical solutions are available, the Monte Carlo method becomes very important [1, 2]. In this work a Monte Carlo software package is presented that simulates light propagation in complex shaped geometries. To reduce the simulation time, the code is based on OpenCL, so that graphics cards as well as other computing devices can be used. Within the software an illumination concept is presented that makes it easy to realize all kinds of light sources, such as spatial frequency domain (SFD) illumination, optical fibers, or Gaussian beam profiles. Moreover, different objects that are not connected to each other can be considered simultaneously, without any additional preprocessing. This Monte Carlo software can be used for many applications. In this work the transmission spectrum of a tooth and the color reconstruction of a virtual object are shown, using results from the Monte Carlo software.
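To make the basic mechanics concrete, here is a minimal single-threaded sketch of photon transport in a homogeneous turbid slab with isotropic scattering, written in plain Python rather than the OpenCL code the abstract describes; the optical coefficients are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_transmittance(mu_a, mu_s, thickness, n_photons=100_000):
    """Minimal photon random walk in a homogeneous slab with
    isotropic scattering; returns the fraction transmitted."""
    mu_t = mu_a + mu_s
    transmitted = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0                            # start at surface, heading inward
        while True:
            z += uz * rng.exponential(1.0 / mu_t)   # free path to next interaction
            if z < 0.0:                             # escaped back through the surface
                break
            if z > thickness:                       # escaped through the far side
                transmitted += 1
                break
            if rng.random() < mu_a / mu_t:          # absorbed
                break
            uz = rng.uniform(-1.0, 1.0)             # isotropic rescatter (cos theta)
    return transmitted / n_photons

print(mc_transmittance(mu_a=0.1, mu_s=10.0, thickness=1.0))
```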
Response Matrix Monte Carlo for electron transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballinger, C.T.; Nielsen, D.E. Jr.; Rathkopf, J.A.
1990-11-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. The combined effect of many collisions is modeled, like condensed history, except it is precalculated via an analog Monte Carlo simulation. This avoids the scattering kernel assumptions associated with condensed history methods. Results show good agreement between the RMMC method and analog Monte Carlo. 11 refs., 7 figs., 1 tab.
Souris, Kevin; Lee, John Aldo; Sterpin, Edmond
2016-04-01
Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with gate/geant4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used for in vivo range verification as well.
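The hard/soft split at the heart of a class-II condensed history scheme can be illustrated with a toy sketch: discrete events are sampled only for losses above a threshold, while softer interactions are lumped into a continuous restricted stopping power. All rates, units, and functional forms below are hypothetical stand-ins, not MCsquare's actual physics.

```python
import numpy as np

rng = np.random.default_rng(1)

def class2_track(E0, w_cut=0.1, step=0.05):
    """Toy class-II condensed-history track (MeV, cm).
    Hard ionizations (loss > w_cut) are sampled individually;
    softer losses are condensed into a continuous stopping power."""
    E, depth = E0, 0.0
    soft_stopping = lambda E: 0.5 + 2.0 / E     # hypothetical restricted S(E), MeV/cm
    hard_rate = lambda E: 0.2 / E               # hypothetical Sigma_hard(E), 1/cm
    while E > w_cut:
        d_hard = rng.exponential(1.0 / hard_rate(E))   # distance to next hard event
        d = min(d_hard, step)
        E -= soft_stopping(E) * d               # condensed soft losses along the step
        depth += d
        if E <= w_cut:
            break
        if d_hard <= step:
            # hard ionization: sample a discrete loss between w_cut and E/2
            E -= rng.uniform(w_cut, max(w_cut, E / 2))
    return depth

print(np.mean([class2_track(20.0) for _ in range(1000)]))   # mean toy range
```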
A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall
NASA Astrophysics Data System (ADS)
Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.
2017-06-01
Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study the rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of the main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to rainfall time series from the real world is shown as a proof of concept.
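The consistency requirement, that the fine-scale sums reproduce the coarse totals, can be illustrated with the simplest such adjustment, proportional rescaling of each block; the authors' actual procedure may differ in detail.

```python
import numpy as np

def disaggregate_consistent(coarse, fine_draft):
    """Rescale each block of a draft fine-scale series so its sum matches
    the given coarse total (proportional adjustment; a common simple
    choice, not necessarily the paper's exact procedure)."""
    fine_draft = fine_draft.reshape(len(coarse), -1)     # one row per coarse step
    sums = fine_draft.sum(axis=1, keepdims=True)
    scale = np.divide(coarse[:, None], sums,
                      out=np.zeros_like(sums), where=sums > 0)
    return (fine_draft * scale).ravel()

coarse = np.array([12.0, 0.0, 3.0])                      # e.g. daily totals
draft = np.array([2.0, 4.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.5, 0.0])  # hourly draft
print(disaggregate_consistent(coarse, draft))            # block sums: 12, 0, 3
```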
Understanding quantum tunneling using diffusion Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.
2018-03-01
In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1/Δ², where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1/Δ, i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.
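A bare-bones, guiding-function-free DMC of the kind the abstract studies can be sketched for a 1D double well; the potential and all parameters are arbitrary and only illustrate the diffuse-and-branch structure.

```python
import numpy as np

rng = np.random.default_rng(2)

def dmc_double_well(n_walkers=2000, n_steps=4000, dt=0.01):
    """Diffusion Monte Carlo without a guiding function for
    V(x) = (x^2 - 1)^2: walkers diffuse freely, then branch with
    weight exp(-(V - E_ref) dt). Returns the E_ref estimate."""
    V = lambda x: (x**2 - 1.0)**2
    x = rng.normal(-1.0, 0.1, n_walkers)                 # start in the left well
    e_ref = np.mean(V(x))
    for _ in range(n_steps):
        x = x + rng.normal(0.0, np.sqrt(dt), x.size)     # free diffusion step
        w = np.exp(-(V(x) - e_ref) * dt)                 # branching weights
        copies = (w + rng.random(x.size)).astype(int)    # stochastic rounding
        x = np.repeat(x, copies)
        e_ref += 0.1 * np.log(n_walkers / max(x.size, 1))  # population control
    return e_ref

print(dmc_double_well())
```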
A linear stability analysis for nonlinear, grey, thermal radiative transfer problems
NASA Astrophysics Data System (ADS)
Wollaber, Allan B.; Larsen, Edward W.
2011-02-01
We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used “Implicit Monte Carlo” (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or “Semi-Analog Monte Carlo” (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ⩽ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.
The Performance of Local Dependence Measures with Psychological Data
ERIC Educational Resources Information Center
Houts, Carrie R.; Edwards, Michael C.
2013-01-01
The violation of the assumption of local independence when applying item response theory (IRT) models has been shown to have a negative impact on all estimates obtained from the given model. Numerous indices and statistics have been proposed to aid analysts in the detection of local dependence (LD). A Monte Carlo study was conducted to evaluate…
Impact of Tortuosity on Charge-Carrier Transport in Organic Bulk Heterojunction Blends
NASA Astrophysics Data System (ADS)
Heiber, Michael C.; Kister, Klaus; Baumann, Andreas; Dyakonov, Vladimir; Deibel, Carsten; Nguyen, Thuc-Quyen
2017-11-01
The impact of the tortuosity of the charge-transport pathways through a bulk heterojunction film on the charge-carrier mobility is theoretically investigated using model morphologies and kinetic Monte Carlo simulations. The tortuosity descriptor provides a quantitative metric to characterize the quality of the charge-transport pathways, and model morphologies with controlled domain size and tortuosity are created using an anisotropic domain growth procedure. The tortuosity is found to be dependent on the anisotropy of the domain structure and is highly tunable. Time-of-flight charge-transport simulations on morphologies with a range of tortuosity values reveal that tortuosity can significantly reduce the magnitude of the mobility and the electric-field dependence relative to a neat material. These reductions are found to be further controlled by the energetic disorder and temperature. Most significantly, the sensitivity of the electric-field dependence to the tortuosity can explain the different experimental relationships previously reported, and exploiting this sensitivity could lead to simpler methods for characterizing and optimizing charge transport in organic solar cells.
NASA Astrophysics Data System (ADS)
Boscolo, D.; Krämer, M.; Durante, M.; Fuss, M. C.; Scifoni, E.
2018-04-01
The production, diffusion, and interaction of particle-beam-induced water-derived radicals are studied with the pre-chemical and chemical modules of the Monte Carlo particle track structure code TRAX, based on a step-by-step approach. After a description of the implemented model, the chemical evolution of the most important products of water radiolysis is studied for electron, proton, helium, and carbon ion radiation at different energies. The validity of the model is verified by comparing the calculated time- and LET-dependent yields with experimental data from the literature and other simulation approaches.
Phase diagram of dilute cosmic matter
NASA Astrophysics Data System (ADS)
Iwata, Yoritaka
2011-10-01
Enhancement of nuclear pasta formation due to multi-nucleus simultaneous collision is presented based on time-dependent density functional calculations with periodic boundary conditions. This calculation corresponds to the situation with density lower than the known low-density existence limit of the nuclear pasta phase. In order to evaluate the contribution from three-nucleus simultaneous collisions inside the cosmic matter, the possibility of multi-nucleus simultaneous collisions is examined by a systematic Monte Carlo calculation, and the mean free path of a nucleus is obtained. Consequently, the low-density existence limit of the nuclear pasta phase is found to be lower than previously believed.
Cometary atmospheres: Modeling the spatial distribution of observed neutral radicals
NASA Technical Reports Server (NTRS)
Combi, M. R.
1985-01-01
Progress on modeling the spatial distributions of cometary radicals is described. The Monte Carlo particle-trajectory model was generalized to include the full time dependencies of initial comet expansion velocities, nucleus vaporization rates, photochemical lifetimes and photon emission rates, which enter the problem through the comet's changing heliocentric distance and velocity. The effects of multiple collisions in the transition zone from collisional coupling to true free flow were also included. Currently available observations of the spatial distributions of the neutral radicals, as well as the latest available photochemical data, were re-evaluated. Preliminary exploratory model results testing the effects of various processes on observable spatial distributions are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laboure, Vincent M., E-mail: vincent.laboure@tamu.edu; McClarren, Ryan G., E-mail: rgm@tamu.edu; Hauck, Cory D., E-mail: hauckc@ornl.gov
2016-09-15
In this work, we provide a fully-implicit implementation of the time-dependent, filtered spherical harmonics (FP_N) equations for non-linear, thermal radiative transfer. We investigate local filtering strategies and analyze the effect of the filter on the conditioning of the system, showing in particular that the filter improves the convergence properties of the iterative solver. We also investigate numerically the rigorous error estimates derived in the linear setting, to determine whether they hold also for the non-linear case. Finally, we simulate a standard test problem on an unstructured mesh and make comparisons with implicit Monte Carlo (IMC) calculations.
Self tuning system for industrial surveillance
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
2000-01-01
A method and system for automatically establishing operational parameters of a statistical surveillance system. The method and system perform a frequency domain transformation on time dependent data, a first Fourier composite is formed, serial correlation is removed, a series of Gaussian whiteness tests is performed along with an autocorrelation test, Fourier coefficients are stored, and a second Fourier composite is formed. Pseudorandom noise is added, a Monte Carlo simulation is performed to establish SPRT missed alarm probabilities, and the result is tested with a synthesized signal. A false alarm probability is then empirically evaluated and, if it is less than a desired target value, the SPRT probabilities are used for performing surveillance.
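The Monte Carlo estimation of SPRT missed-alarm probabilities can be sketched as follows, assuming a standard Wald SPRT on a Gaussian mean shift; the thresholds and signal parameters are illustrative, not those of the patented system.

```python
import numpy as np

rng = np.random.default_rng(3)

def sprt_decision(samples, mu0=0.0, mu1=1.0, sigma=1.0, a=0.01, b=0.01):
    """Wald SPRT between N(mu0, sigma) and N(mu1, sigma); a and b are the
    target false-alarm and missed-alarm probabilities. Returns 'H1', 'H0',
    or 'undecided' if the samples run out first."""
    upper, lower = np.log((1 - b) / a), np.log(b / (1 - a))
    llr = 0.0
    for xi in samples:
        llr += (mu1 - mu0) * (xi - (mu0 + mu1) / 2) / sigma**2
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "undecided"

# Monte Carlo estimate of the missed-alarm probability: the data are truly
# degraded (mean shifted to 1.0) yet the SPRT concludes H0.
trials = 20_000
missed = sum(sprt_decision(rng.normal(1.0, 1.0, 200)) == "H0" for _ in range(trials))
print("missed-alarm probability ~", missed / trials)
```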
Creation of problem-dependent Doppler-broadened cross sections in the KENO Monte Carlo code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Shane W. D.; Celik, Cihangir; Maldonado, G. Ivan
2015-11-06
In this paper, we introduce a quick method for improving the accuracy of Monte Carlo simulations by generating one- and two-dimensional cross sections at a user-defined temperature before performing transport calculations. A finite difference method is used to Doppler-broaden cross sections to the desired temperature, and unit-base interpolation is done to generate the probability distributions for double differential two-dimensional thermal moderator cross sections at any arbitrary user-defined temperature. The accuracy of these methods is tested using a variety of contrived problems. In addition, various benchmarks at elevated temperatures are modeled, and results are compared with benchmark results. Lastly, the problem-dependent cross sections are observed to produce eigenvalue estimates that are closer to the benchmark results than those without the problem-dependent cross sections.
NUEN-618 Class Project: Actually Implicit Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vega, R. M.; Brunner, T. A.
2017-12-14
This research describes a new method for the solution of the thermal radiative transfer (TRT) equations that is implicit in time, which will be called Actually Implicit Monte Carlo (AIMC). This section aims to introduce the TRT equations, as well as the current workhorse method, known as Implicit Monte Carlo (IMC). As the name of the method proposed here indicates, IMC is a misnomer in that it is only semi-implicit, as will be shown in this section as well.
Dynamic Monte Carlo description of thermal desorption processes
NASA Astrophysics Data System (ADS)
Weinketz, Sieghard
1994-07-01
The applicability of the dynamic Monte Carlo method of Fichthorn and Weinberg, in which the time evolution of a system is described in terms of the absolute number of different possible microscopic events and their associated transition rates, is discussed for the case of thermal desorption simulations. It is shown that the definition of the time increment at each successful event leads naturally to the macroscopic differential equation of desorption, in the case of simple first- and second-order processes in which the only possible events are desorption and diffusion. This equivalence is numerically demonstrated for a second-order case. Subsequently, the equivalence of this method with the Monte Carlo method of Sales and Zgrablich for more complex desorption processes, allowing for lateral interactions between adsorbates, is shown, even though the dynamic Monte Carlo method does not bear their limitation of a rapid surface diffusion condition. It is thus able to describe a more complex "kinetics" of surface reactive processes, and can therefore be applied to a wider class of phenomena, such as surface catalysis.
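The Fichthorn-Weinberg time increment, dt = -ln(u)/R_tot drawn after each successful event, is easy to demonstrate for first-order desorption, where the simulated coverage should reproduce the macroscopic solution θ(t) = exp(-kt). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

def kmc_first_order_desorption(n0=10_000, k_des=1.0, t_end=3.0):
    """Dynamic (kinetic) Monte Carlo for first-order desorption.
    After each event the clock advances by dt = -ln(u)/R_tot,
    where R_tot is the total rate of all possible events."""
    n, t = n0, 0.0
    times, coverage = [0.0], [1.0]
    while n > 0 and t < t_end:
        r_tot = k_des * n                  # every remaining adsorbate can desorb
        t += -np.log(rng.random()) / r_tot # Fichthorn-Weinberg time increment
        n -= 1                             # one desorption event occurs
        times.append(t)
        coverage.append(n / n0)
    return np.array(times), np.array(coverage)

t, theta = kmc_first_order_desorption()
# the kMC coverage should track the macroscopic solution theta = exp(-k t)
print(np.max(np.abs(theta - np.exp(-1.0 * t))))
```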
Metadynamics convergence law in a multidimensional system
NASA Astrophysics Data System (ADS)
Crespo, Yanier; Marinelli, Fabrizio; Pietrucci, Fabio; Laio, Alessandro
2010-05-01
Metadynamics is a powerful sampling technique that uses a nonequilibrium history-dependent process to reconstruct the free-energy surface as a function of the relevant collective variables s. In Bussi et al. [Phys. Rev. Lett. 96, 090601 (2006)] it is proved that, in a Langevin process, metadynamics provides an unbiased estimate of the free energy F(s). We here study the convergence properties of this approach in a multidimensional system, with a Hamiltonian depending on several variables. Specifically, we show that in a Monte Carlo metadynamics simulation of an Ising model the time average of the history-dependent potential converges to F(s) with the same law as an umbrella sampling performed in optimal conditions (i.e., with a bias exactly equal to the negative of the free energy). Remarkably, after a short transient, the error becomes approximately independent of the filling speed, showing that even in out-of-equilibrium conditions metadynamics allows recovering an accurate estimate of F(s). These results have been obtained by introducing a functional form of the history-dependent potential that avoids the onset of systematic errors near the boundaries of the free-energy landscape.
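A minimal one-dimensional illustration of the history-dependent potential (on a continuous double well rather than the paper's Ising model): Gaussians are deposited along the Monte Carlo trajectory, and the negative of the accumulated bias estimates F(s). All parameters here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def metadynamics_1d(n_steps=50_000, w=0.02, sigma=0.15, beta=4.0):
    """Metropolis Monte Carlo on U(x) = (x^2 - 1)^2 plus a history-dependent
    bias of deposited Gaussians; after the run, -bias(x) estimates the
    free-energy profile F(x)."""
    U = lambda x: (x**2 - 1.0)**2
    centers = np.empty(0)
    bias = lambda x: w * np.exp(-(x - centers)**2 / (2.0 * sigma**2)).sum()
    x = -1.0
    for step in range(n_steps):
        x_new = x + rng.normal(0.0, 0.2)
        dE = U(x_new) + bias(x_new) - U(x) - bias(x)
        if rng.random() < np.exp(-beta * dE):
            x = x_new
        if step % 100 == 0:
            centers = np.append(centers, x)   # deposit one Gaussian at x
    grid = np.linspace(-2.0, 2.0, 81)
    return grid, np.array([-bias(g) for g in grid])

grid, F_est = metadynamics_1d()
print(F_est[np.argmin(np.abs(grid))] - F_est.min())   # rough barrier height
```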
Monte Carlo Simulation of Sudden Death Bearing Testing
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life, and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30,960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not the total number of bearings tested. Variations in L10 life as a function of the number of bearings failed were similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
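The core of such a virtual experiment, drawing Weibull lives, grouping them, and stopping each group at its first failure, can be sketched as follows; the Weibull slope and characteristic life are invented, and the paper's exact estimators may differ.

```python
import numpy as np

rng = np.random.default_rng(6)

beta_w, eta = 1.5, 100.0                 # hypothetical Weibull slope / char. life
m, n_groups = 4, 1000                    # bearings per sudden-death group

lives = eta * rng.weibull(beta_w, (n_groups, m))
first_failures = lives.min(axis=1)       # sudden death: stop group at 1st failure

# Theory: the minimum of m Weibull lives is Weibull with the same slope and
# characteristic life eta * m**(-1/beta); compare L10 estimates.
l10_theory = eta * m**(-1 / beta_w) * (-np.log(0.9))**(1 / beta_w)
print(np.quantile(first_failures, 0.10), l10_theory)

# Rough test-time saving: sudden death accumulates only bearing-hours up to
# each group's first failure instead of running every bearing to failure.
print("test time ratio:", first_failures.sum() * m / lives.sum())
```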
Self-learning Monte Carlo method
Liu, Junwei; Qi, Yang; Meng, Zi Yang; ...
2017-01-04
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. Lastly, we demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.
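The SLMC loop can be sketched on a toy 1D spin chain: an effective nearest-neighbour model is fitted to trial data, used to generate composite proposals, and corrected with a Metropolis step on the true Hamiltonian. The couplings, system size, and fitting procedure here are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)

L, beta = 32, 0.5
J1, J2 = 1.0, 0.2          # true model: nearest + next-nearest couplings

def E_true(s):
    return -J1 * np.sum(s * np.roll(s, 1)) - J2 * np.sum(s * np.roll(s, 2))

def E_eff(s, J_eff):
    return -J_eff * np.sum(s * np.roll(s, 1))

# Step 1 (offline): learn J_eff by least squares on trial configurations,
# regressing the true energy on the nearest-neighbour feature.
trial = rng.choice([-1, 1], (200, L))
feat = np.array([-np.sum(s * np.roll(s, 1)) for s in trial])
J_eff = np.linalg.lstsq(feat[:, None],
                        np.array([E_true(s) for s in trial]), rcond=None)[0][0]

def slmc_step(s, n_inner=5 * L):
    """Propose via local Metropolis on the *effective* model, then accept
    the whole composite move with the SLMC correction ratio."""
    s_new = s.copy()
    for _ in range(n_inner):
        i = rng.integers(L)
        dE = 2 * J_eff * s_new[i] * (s_new[i - 1] + s_new[(i + 1) % L])
        if rng.random() < np.exp(-beta * dE):
            s_new[i] *= -1
    dE_true = E_true(s_new) - E_true(s)
    dE_eff = E_eff(s_new, J_eff) - E_eff(s, J_eff)
    if rng.random() < np.exp(-beta * (dE_true - dE_eff)):
        return s_new
    return s

s = rng.choice([-1, 1], L)
for _ in range(1000):
    s = slmc_step(s)
print("mean energy per site:", E_true(s) / L)
```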
Arridge, S R; Dehghani, H; Schweiger, M; Okada, E
2000-01-01
We present a method for handling nonscattering regions within diffusing domains. The method develops from an iterative radiosity-diffusion approach using Green's functions that was computationally slow. Here we present an improved implementation using a finite element method (FEM) that is direct. The fundamental idea is to introduce extra equations into the standard diffusion FEM to represent nondiffusive light propagation across a nonscattering region. By appropriate mesh node ordering the computational time is not much greater than for diffusion alone. We compare results from this method with those from a discrete ordinate transport code, and with Monte Carlo calculations. The agreement is very good, and, in addition, our scheme allows us to easily model time-dependent and frequency domain problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Çatlı, Serap, E-mail: serapcatli@hotmail.com; Tanır, Güneş
2013-10-01
The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS) at 6 and 18 MV photon energies. In the present study the pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and ionization chamber measurements. The present findings show that if high-Z material is used in a prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study was dependent on material type, density, and atomic number, as well as photon energy; as photon energy increased, backscattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method. The findings show that for accurate dose calculation a Monte Carlo-based TPS should be used in patients with hip prostheses.
A Modified Monte Carlo Method for Carrier Transport in Germanium, Free of Isotropic Rates
NASA Astrophysics Data System (ADS)
Sundqvist, Kyle
2010-03-01
We present a new method for carrier transport simulation, relevant for high-purity germanium <100> at a temperature of 40 mK. In this system, the scattering of electrons and holes is dominated by spontaneous phonon emission. Free carriers are always out of equilibrium with the lattice. We must also properly account for directional effects due to band structure, but there are many cautions in the literature about treating germanium in particular. These objections arise because the germanium electron system is anisotropic to an extreme degree, while standard Monte Carlo algorithms maintain a reliance on isotropic, integrated rates. We re-examine Fermi's Golden Rule to produce a Monte Carlo method free of isotropic rates. Traditional Monte Carlo codes implement particle scattering based on an isotropically averaged rate, followed by a separate selection of the particle's final state via a momentum-dependent probability. In our method, the kernel of Fermi's Golden Rule produces analytical, bivariate rates which allow for the simultaneous choice of scatter and final state selection. Energy and momentum are automatically conserved. We compare our results to experimental data.
Pattern Recognition for a Flight Dynamics Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; Hurtado, John E.
2011-01-01
The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amount of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
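A rough stand-in for this pipeline using scikit-learn (an assumption; the paper does not name its implementation) on synthetic dispersion data, where only two of eight design parameters actually drive the failures:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KernelDensity, KNeighborsClassifier

rng = np.random.default_rng(8)

# Stand-in for Monte Carlo dispersion data: 1000 runs, 8 design parameters,
# binary outcome (1 = failure); parameters 0 and 3 matter by construction.
X = rng.normal(size=(1000, 8))
y = ((X[:, 0] > 1.0) & (X[:, 3] < -0.5)).astype(int)

# Sequential feature selection wrapped around a k-nearest-neighbour
# classifier flags the design parameters that drive the failures.
knn = KNeighborsClassifier(n_neighbors=15)
sfs = SequentialFeatureSelector(knn, n_features_to_select=2).fit(X, y)
print("selected parameters:", np.flatnonzero(sfs.get_support()))

# Kernel density estimates of a selected parameter, conditioned on outcome,
# show where in the design space the failures concentrate.
kde_fail = KernelDensity(bandwidth=0.3).fit(X[y == 1][:, [0]])
kde_ok = KernelDensity(bandwidth=0.3).fit(X[y == 0][:, [0]])
grid = np.linspace(-3, 3, 7)[:, None]
print(np.exp(kde_fail.score_samples(grid)) / np.exp(kde_ok.score_samples(grid)))
```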
NASA Astrophysics Data System (ADS)
Gao, Fangfang; Zhang, Xiaokang; Pu, Yong; Zhu, Qingjun; Liu, Songlin
2016-08-01
Attaining tritium self-sufficiency is an important mission for the Chinese Fusion Engineering Testing Reactor (CFETR) operating on a Deuterium-Tritium (D-T) fuel cycle. It is necessary to study the tritium breeding ratio (TBR) and breeding tritium inventory variation with operation time so as to provide accurate data for dynamic modeling and analysis of the tritium fuel cycle. A water cooled ceramic breeder (WCCB) blanket is one candidate blanket concept for the CFETR. Based on the detailed 3D neutronics model of the CFETR with the WCCB blanket, the time-dependent TBR and tritium surplus were evaluated by a coupled calculation of the Monte Carlo N-Particle Transport Code (MCNP) and the fusion activation code FISPACT-2007. The results indicated that the TBR and tritium surplus of the WCCB blanket were a function of operation time and fusion power due to the Li consumption in the breeder and material activation. In addition, by comparison with the results calculated by using the 3D neutronics model and by employing a constant 1D-to-3D transfer factor, it is noted that 1D analysis leads to an over-estimation of the time-dependent tritium breeding capability when fusion power is larger than 1000 MW. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2013GB108004, 2015GB108002, and 2014GB119000), and by the National Natural Science Foundation of China (No. 11175207).
SU-F-T-281: Monte Carlo Investigation of Sources of Dosimetric Discrepancies with 2D Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afifi, M; Deiab, N; El-Farrash, A
2016-06-15
Purpose: Intensity modulated radiation therapy (IMRT) poses a number of challenges for properly measuring commissioning data and quality assurance (QA). Understanding the limitations and use of dosimeters to measure these dose distributions is critical to safe IMRT implementation. In this work, we used Monte Carlo simulations to investigate the possible sources of discrepancy between our measurements with a 2D array system and our dose calculations using our treatment planning system (TPS). Material and Methods: The MCBEAM and MCSIM Monte Carlo codes were used for treatment head simulation and phantom dose calculation. Accurate modeling of a 6 MV beam from a Varian Trilogy machine was verified by comparing simulated and measured percentage depth doses and profiles. The dose distribution inside the 2D array was calculated using Monte Carlo simulations and our TPS. Then cross profiles for different field sizes were compared with actual measurements for zero and 90° gantry angle setups. Through the analysis and comparison, we tried to determine the differences and quantify a possible angular calibration factor. Results: Minimal discrepancies were seen in the comparison between the simulated and the measured profiles for the zero gantry angle at all studied field sizes (4×4 cm², 10×10 cm², 15×15 cm², and 20×20 cm²). Discrepancies between our measurements and calculations increased dramatically for the cross beam profiles at the 90° gantry angle. This can be ascribed mainly to the different attenuation caused by the layer of electronics at the base behind the ion chambers in the 2D array. The degree of attenuation will vary depending on the angle of beam incidence. Correction factors were implemented to correct the errors. Conclusion: Monte Carlo modeling of the 2D arrays and the derivation of angular dependence correction factors will allow for improved accuracy of the device for IMRT QA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.
In this study, methods are addressed to reduce the computational time to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared, including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10^5 when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.
Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model
NASA Astrophysics Data System (ADS)
Zhang, Y.; Pohlmann, K.
2016-12-01
Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.
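The general pattern, sampling parameters, propagating them through the flow and transport model, and ranking their influence on travel times, can be sketched with a toy serial-path model standing in for the actual modeling toolchain; all hydraulic values below are invented.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(9)

# Toy stand-in for the flow model: travel time through three hydrogeologic
# units in series, t = sum(L_i * porosity_i / (K_i * gradient)).
L_units = np.array([500.0, 1200.0, 300.0])      # path lengths (m)
phi = np.array([0.15, 0.25, 0.10])              # porosities
grad = 0.005                                    # hydraulic gradient

n = 10_000
K = 10 ** rng.uniform(-7, -4, (n, 3))           # conductivities (m/s), log-uniform
t_travel = (L_units * phi / (K * grad)).sum(axis=1) / 3.15e7   # years

# Rank correlation between each sampled parameter and the travel time flags
# the units whose conductivity dominates the output uncertainty.
for i in range(3):
    rho, _ = spearmanr(K[:, i], t_travel)
    print(f"unit {i}: Spearman rho = {rho:.2f}")
```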
Coupled Monte Carlo neutronics and thermal hydraulics for power reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernnat, W.; Buck, M.; Mattes, M.
The availability of high performance computing resources enables more and more the use of detailed Monte Carlo models even for full core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures e.g. in Monte Carlo codes such as MCNP5 or MCNPX. For cores with mainly uniform material compositions, fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. The second problem arises with the preparation of corresponding temperature dependent cross sections and thermal scattering laws. Only if these problems can be solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described, based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g. water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications will be shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic Modular High Temperature reactor using THERMIX for thermal-hydraulics.
The First Order Correction to the Exit Distribution for Some Random Walks
NASA Astrophysics Data System (ADS)
Kennedy, Tom
2016-07-01
We study three different random walk models on several two-dimensional lattices by Monte Carlo simulations. One is the usual nearest neighbor random walk. Another is the nearest neighbor random walk which is not allowed to backtrack. The final model is the smart kinetic walk. For all three of these models the distribution of the point where the walk exits a simply connected domain D in the plane converges weakly to harmonic measure on ∂D as the lattice spacing δ → 0. Let ω(0,·;D) be harmonic measure for D, and let ω_δ(0,·;D) be the discrete harmonic measure for one of the random walk models. Our definition of the random walk models is unusual in that we average over the orientation of the lattice with respect to the domain. We are interested in the limit of (ω_δ(0,·;D) − ω(0,·;D))/δ. Our Monte Carlo simulations of the three models lead to the conjecture that this limit equals c_{M,L} ρ_D(z) times Lebesgue measure with respect to arc length along the boundary, where the function ρ_D(z) depends on the domain, but not on the model or lattice, and the constant c_{M,L} depends on the model and on the lattice, but not on the domain. So there is a form of universality for this first order correction. We also give an explicit formula for the conjectured density ρ_D.
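The zeroth-order baseline of such an experiment is easy to reproduce: for the plain nearest-neighbour walk started at the centre of a disk, the exit distribution should approach the uniform harmonic measure. A small sketch (coarse lattice and modest statistics, so only the leading behaviour is visible):

```python
import numpy as np

rng = np.random.default_rng(10)

def exit_angles(radius=30, n_walks=5_000):
    """Nearest-neighbour walk on Z^2 started at the origin; records the
    angle at which each walk first leaves the disk of the given radius.
    Seen from the centre of a disk, harmonic measure is uniform."""
    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    angles = np.empty(n_walks)
    for k in range(n_walks):
        pos = np.zeros(2)
        while pos @ pos <= radius**2:
            pos += steps[rng.integers(4)]
        angles[k] = np.arctan2(pos[1], pos[0])
    return angles

hist, _ = np.histogram(exit_angles(), bins=8, range=(-np.pi, np.pi))
print(hist / hist.sum())    # each angular bin should be close to 1/8
```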
Mobit, P
2002-01-01
The energy responses of LiF-TLDs irradiated in megavoltage electron and photon beams have been determined experimentally by many investigators over the past 35 years, but the results vary considerably. General cavity theory has been used to model some of the experimental findings, but the predictions of these cavity theories differ from each other and from measurements by more than 13%. Recently, two groups of investigators using Monte Carlo simulations and careful experimental techniques showed that the energy response of 1 mm or 2 mm thick LiF-TLDs irradiated by megavoltage photon and electron beams is not more than 5% less than unity for low-Z phantom materials like water or Perspex. However, when the depth of irradiation is significantly different from d_max and the TLD size is more than 5 mm, the energy response is up to 12% less than unity for incident electron beams. Monte Carlo simulations of some of the experiments reported in the literature showed that some of the contradictory experimental results are reproducible with Monte Carlo simulations. Monte Carlo simulations show that the energy response of LiF-TLDs in electron beams depends on the size of the detector used, the depth of irradiation and the incident electron energy. Other differences can be attributed to absolute dose determination and the precision of the TL technique. Monte Carlo simulations have also been used to evaluate some of the published general cavity theories. The results show that some of the parameters used to evaluate Burlin's general cavity theory are wrong by a factor of 3. Despite this, the estimation of the energy response for most clinical situations using Burlin's cavity equation agrees with Monte Carlo simulations within 1%.
TASEP of interacting particles of arbitrary size
NASA Astrophysics Data System (ADS)
Narasimhan, S. L.; Baumgaertner, A.
2017-10-01
A mean-field description of the stationary state behaviour of interacting k-mers performing totally asymmetric exclusion processes (TASEP) on an open lattice segment is presented, employing the discrete Takahashi formalism. It is shown how the maximal current and the phase diagram, including triple-points, depend on the strength of repulsive and attractive interactions. We compare the mean-field results with Monte Carlo simulations of three types of interacting k-mers: monomers, dimers and trimers. (a) We find that the Takahashi estimates of the maximal current agree quantitatively with those of the Monte Carlo simulation in the absence of interaction as well as in both the attractive and the strongly repulsive regimes. However, theory and Monte Carlo results disagree in the range of weak repulsion, where the Takahashi estimates of the maximal current show a monotonic behaviour, whereas the Monte Carlo data show a peaking behaviour. It is argued that the peaking of the maximal current is due to a correlated motion of the particles. In the limit of very strong repulsion the theory predicts a universal behaviour: the maximal currents of k-mers correspond to those of non-interacting (k+1)-mers; (b) Monte Carlo estimates of the triple-points for monomers, dimers and trimers show an interesting general behaviour: (i) the phase boundaries α* and β* for entry and exit current, respectively, as functions of interaction strength show maxima for α* whereas β* exhibits minima at the same strength; (ii) in the attractive regime, however, the trend is reversed (β* > α*). The Takahashi estimates of the triple-point for monomers show a similar trend as the Monte Carlo data except for the peaking of α*; for dimers and trimers, however, the Takahashi estimates show an opposite trend compared to the Monte Carlo data.
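A random-sequential-update TASEP for the simplest case (non-interacting monomers) can be sketched as follows, measuring the stationary current across the middle bond; in the maximal-current phase (α, β > 1/2) the current should approach 1/4. Lattice size and sweep counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)

def tasep_current(alpha, beta, L=200, n_sweeps=20_000):
    """Random-sequential-update TASEP of monomers on an open lattice;
    returns the stationary current across the middle bond, in hops per
    sweep (one sweep = L + 1 random site picks)."""
    tau = np.zeros(L, dtype=int)
    hops, measured = 0, 0
    for sweep in range(n_sweeps):
        for _ in range(L + 1):
            i = rng.integers(-1, L)                # -1 plays the entry reservoir
            if i == -1:
                if tau[0] == 0 and rng.random() < alpha:
                    tau[0] = 1                     # particle enters site 1
            elif i == L - 1:
                if tau[-1] == 1 and rng.random() < beta:
                    tau[-1] = 0                    # particle exits the lattice
            elif tau[i] == 1 and tau[i + 1] == 0:
                tau[i], tau[i + 1] = 0, 1          # bulk hop to the right
                if sweep > n_sweeps // 2 and i == L // 2:
                    hops += 1
        if sweep > n_sweeps // 2:
            measured += 1
    return hops / measured

print(tasep_current(alpha=0.8, beta=0.8))   # maximal-current phase: J -> 1/4
```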
Martelli, Fabrizio; Sassaroli, Angelo; Pifferi, Antonio; Torricelli, Alessandro; Spinelli, Lorenzo; Zaccanti, Giovanni
2007-12-24
The Green's function of the time dependent radiative transfer equation for the semi-infinite medium is derived for the first time by a heuristic approach based on the extrapolated boundary condition and on an almost exact solution for the infinite medium. Monte Carlo simulations performed both in the simple case of isotropic scattering and of an isotropic point-like source, and in the more realistic case of anisotropic scattering and pencil beam source, are used to validate the heuristic Green's function. Except for the very early times, the proposed solution has an excellent accuracy (> 98 % for the isotropic case, and > 97 % for the anisotropic case) significantly better than the diffusion equation. The use of this solution could be extremely useful in the biomedical optics field where it can be directly employed in conditions where the use of the diffusion equation is limited, e.g. small volume samples, high absorption and/or low scattering media, short source-receiver distances and early times. Also it represents a first step to derive tools for other geometries (e.g. slab and slab with inhomogeneities inside) of practical interest for noninvasive spectroscopy and diffuse optical imaging. Moreover the proposed solution can be useful to several research fields where the study of a transport process is fundamental.
Monte Carlo simulation of chemistry following radiolysis with TOPAS-nBio.
Ramos-Méndez, J; Perl, J; Schuemann, J; McNamara, A; Paganetti, H; Faddegon, B
2018-05-17
Simulation of water radiolysis and the subsequent chemistry provides important information on the effect of ionizing radiation on biological material. The Geant4 Monte Carlo toolkit has added chemical processes via the Geant4-DNA project. The TOPAS tool simplifies the modeling of complex radiotherapy applications with Geant4 without requiring advanced computational skills, extending the pool of users. Thus, a new extension to TOPAS, TOPAS-nBio, is under development to facilitate the configuration of track-structure simulations as well as water radiolysis simulations with Geant4-DNA for radiobiological studies. In this work, radiolysis simulations were implemented in TOPAS-nBio. Users may now easily add chemical species and their reactions, and set parameters including branching ratios, dissociation schemes, diffusion coefficients, and reaction rates. In addition, parameters for the chemical stage were re-evaluated and updated from those used by default in Geant4-DNA to improve the accuracy of chemical yields. Simulation results of time-dependent and LET-dependent primary yields G_x (chemical species per 100 eV deposited) produced at neutral pH and 25 °C by short track-segments of charged particles were compared to published measurements. The LET range was 0.05-230 keV µm^-1. The calculated G_x values for electrons satisfied the material balance equation within 0.3%, similarly for protons albeit with long calculation time. A smaller geometry was used to speed up proton and alpha simulations, with an acceptable difference in the balance equation of 1.3%. Available experimental data of time-dependent G-values for [Formula: see text] agreed with simulated results within 7% ± 8% over the entire time range; for [Formula: see text] over the full time range within 3% ± 4%; for H2O2 from 49% ± 7% at earliest stages to 3% ± 12% at saturation. For the LET-dependent G_x, the mean ratios to the experimental data were 1.11 ± 0.98, 1.21 ± 1.11, 1.05 ± 0.52, 1.23 ± 0.59 and 1.49 ± 0.63 (1 standard deviation) for [Formula: see text], [Formula: see text], H2, H2O2 and [Formula: see text], respectively. In conclusion, radiolysis and subsequent chemistry with Geant4-DNA has been successfully incorporated into TOPAS-nBio. Results are in reasonable agreement with published measured and simulated data.
Effects of changing HOV lane occupancy requirements : El Monte busway case study
DOT National Transportation Integrated Search
2002-06-01
In 1999, the California Legislature passed Senate Bill 63, which lowered the vehicle-occupancy requirement on the El Monte Busway on the San Bernardino (I-10) Freeway from three persons per vehicle (3+) to two persons per vehicle (2+) full time. The ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kieselmann, J; Bartzsch, S; Oelfke, U
Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed within 4%, the proposed approach surpassed a simple convolution algorithm in accuracy by a factor of up to 3 in the scatter dose. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve the dose calculation based on the CA method with respect to accuracy, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant calculation time reductions.
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.
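A generic version of the bootstrap procedure, resampling per-history scores and taking the shortest 95% interval over the replicate distribution, can be sketched as follows; the data here are synthetic heavy-tailed stand-ins, and equal run times are assumed in the gain, so this is the shape of the method rather than the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(12)

def shortest_ci(samples, level=0.95):
    """Shortest interval containing `level` of the bootstrap replicates."""
    s = np.sort(samples)
    k = int(np.ceil(level * len(s)))
    widths = s[k - 1:] - s[:len(s) - k + 1]
    i = np.argmin(widths)
    return s[i], s[i + k - 1]

# Stand-in per-history scores from a correlated-sampling and a conventional
# run (heavy-tailed on purpose, mimicking a few large-weight photons).
corr = rng.lognormal(0.0, 0.4, 5000)
conv = rng.lognormal(0.0, 1.5, 5000)

def gain(c, v):
    # efficiency ~ 1/(relative variance), assuming equal run times;
    # gain = eff_correlated / eff_conventional
    return (np.var(v) / np.mean(v)**2) / (np.var(c) / np.mean(c)**2)

boots = np.array([gain(rng.choice(corr, corr.size), rng.choice(conv, conv.size))
                  for _ in range(2000)])
print(gain(corr, conv), shortest_ci(boots))
```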
Use of Fluka to Create Dose Calculations
NASA Technical Reports Server (NTRS)
Lee, Kerry T.; Barzilla, Janet; Townsend, Lawrence; Brittingham, John
2012-01-01
Monte Carlo codes provide an effective means of modeling three dimensional radiation transport; however, their use is both time- and resource-intensive. The creation of a lookup table or parameterization from Monte Carlo simulation allows users to work with Monte Carlo results without replicating lengthy calculations. The FLUKA Monte Carlo transport code was used to develop lookup tables and parameterizations for data resulting from the penetration of layers of aluminum, polyethylene, and water with areal densities ranging from 0 to 100 g/cm^2. Heavy charged ion radiation, including ions from Z=1 to Z=26 with energies from 0.1 to 10 GeV/nucleon, was simulated. Dose, dose equivalent, and fluence as functions of particle identity, energy, and scattering angle were examined at various depths. Calculations were compared against well-known results and against the results of other deterministic and Monte Carlo codes. Results will be presented.
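The lookup-table idea reduces to tabulating Monte Carlo output on a grid and interpolating at query time; a sketch using scipy's RegularGridInterpolator, with an invented dose trend in place of actual FLUKA output:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Stand-in for Monte Carlo output: dose per unit fluence tabulated on a grid
# of shield areal density (g/cm^2) and ion energy (GeV/n); values are fake.
depths = np.linspace(0.0, 100.0, 21)
energies = np.geomspace(0.1, 10.0, 16)
D, E = np.meshgrid(depths, energies, indexing="ij")
dose_table = np.exp(-D / 40.0) * E**0.7          # hypothetical trend only

# Interpolating in log-energy is often the better behaved choice for
# spectra spanning decades.
interp = RegularGridInterpolator((depths, np.log(energies)), dose_table)

def dose_lookup(depth, energy):
    """Dose estimate from the lookup table instead of a fresh MC run."""
    return float(interp([[depth, np.log(energy)]]))

print(dose_lookup(25.0, 2.0))
```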
NASA Astrophysics Data System (ADS)
Liu, Chen; Chen, Jun-Feng; Li, Yun; Chen, Rong-Chang; Asaoka, Sachio; Yuan, Guo-Li
2012-12-01
As inland waterway transportation has developed rapidly in China, the frequency of hazardous chemical leakage accidents is increasing every year. Such pollution of the inland river environment has become a world-wide issue. Montmorillonite (Mont) is a typical 2:1 layer-type silicate clay and, due to its special structure, it has been used in organic pollution removal processes. In order to improve its pollutant adsorption capability, pillared Mont was made in this work. Since the common toxic structure in most chemical pollutants is the halogen atom-benzene ring moiety, we selected the typical compound monochlorobenzene (MCB) as the target contaminant. In this research, the original Mont, Na-Mont, TiO2 and TiO2-Mont were prepared and used as catalysts in MCB degradation experiments. The influences of catalyst amount, promoter (H2O2) amount, MCB concentration and reaction time on the MCB removal rate were studied in detail.
Simple and Accurate Method for Central Spin Problems
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Manolopoulos, David E.
2018-06-01
We describe a simple quantum mechanical method that can be used to obtain accurate numerical results over long timescales for the spin correlation tensor of an electron spin that is hyperfine coupled to a large number of nuclear spins. This method does not suffer from the statistical errors that accompany a Monte Carlo sampling of the exact eigenstates of the central spin Hamiltonian obtained from the algebraic Bethe ansatz, or from the growth of the truncation error with time in the time-dependent density matrix renormalization group (TDMRG) approach. As a result, it can be applied to larger central spin problems than the algebraic Bethe ansatz, and for longer times than the TDMRG algorithm. It is therefore an ideal method to use to solve central spin problems, and we expect that it will also prove useful for a variety of related problems that arise in a number of different research fields.
NASA Astrophysics Data System (ADS)
Lipan, Ovidiu; Ferwerda, Cameron
2018-02-01
The deterministic Hill function depends only on the average values of molecule numbers. To account for fluctuations in the molecule numbers, the argument of the Hill function needs to contain the means, the standard deviations, and the correlations. Here we present a method that allows stochastic Hill functions to be constructed from the dynamical evolution of stochastic biocircuits with specific topologies. These stochastic Hill functions are presented in closed analytical form so that they can easily be incorporated into models of large genetic regulatory networks. Using a repressive biocircuit as an example, we show by Monte Carlo simulations that the traditional deterministic Hill function mispredicts the time of repression by two orders of magnitude. The stochastic Hill function, however, captures the fluctuations and thus accurately predicts the time of repression.
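The underlying point, that a Hill function evaluated at the mean copy number differs from the mean of the Hill function over fluctuating copy numbers, can be checked with a few lines of Monte Carlo. The lognormal noise model and all parameter values below are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def hill_repression(n, K=50.0, h=4.0):
    """Deterministic repressive Hill function of repressor copy number n."""
    return 1.0 / (1.0 + (n / K) ** h)

# Fluctuating repressor copy number: hypothetical lognormal noise around
# the mean; only the mean enters the deterministic description.
mean_n, cv = 50.0, 0.5
sigma = np.sqrt(np.log(1.0 + cv ** 2))
n_samples = mean_n * rng.lognormal(-0.5 * sigma ** 2, sigma, 100000)

print("Hill at mean copy number:", hill_repression(mean_n))
print("mean of Hill over noise :", hill_repression(n_samples).mean())
```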
Photo-generated carriers lose energy during extraction from polymer-fullerene solar cells
Melianas, Armantas; Etzold, Fabian; Savenije, Tom J.; Laquai, Frédéric; Inganäs, Olle; Kemerink, Martijn
2015-01-01
In photovoltaic devices, the photo-generated charge carriers are typically assumed to be in thermal equilibrium with the lattice. In conventional materials, this assumption is experimentally justified as carrier thermalization completes before any significant carrier transport has occurred. Here, we demonstrate by unifying time-resolved optical and electrical experiments and Monte Carlo simulations over an exceptionally wide dynamic range that in the case of organic photovoltaic devices, this assumption is invalid. As the photo-generated carriers are transported to the electrodes, a substantial amount of their energy is lost by continuous thermalization in the disorder broadened density of states. Since thermalization occurs downward in energy, carrier motion is boosted by this process, leading to a time-dependent carrier mobility as confirmed by direct experiments. We identify the time and distance scales relevant for carrier extraction and show that the photo-generated carriers are extracted from the operating device before reaching thermal equilibrium. PMID:26537357
Huang, Qiang; Herrmann, Andreas
2012-03-01
Protein folding, stability, and function are usually influenced by pH, and free energy plays a fundamental role in the analysis of such pH-dependent properties. An electrostatics-based theoretical framework that uses a dielectric continuum solvent model and numerically solves the Poisson-Boltzmann equation has been shown to be very successful in understanding pH-dependent properties. However, in this approach the exact computation of the pH-dependent free energy becomes impractical for proteins possessing more than a few tens of ionizable sites (e.g. > 30), because exact evaluation of the partition function requires a summation over a vast number of possible protonation microstates. Here we present a method which computes the free energy using the average energy and the protonation probabilities of ionizable sites obtained by the well-established Monte Carlo sampling procedure. The key feature is to calculate the entropy from the protonation probabilities. We used this method to examine a well-studied protein (lysozyme) and produced results which agree very well with the exact calculations. Applications to the optimum pH of maximal stability of proteins and to protein-DNA interactions have also resulted in good agreement with experimental data. These examples recommend our method for application to the elucidation of the pH-dependent properties of proteins.
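A minimal sketch of the free-energy assembly step described above, assuming the Monte Carlo run has already produced the average energy and the per-site protonation probabilities; the independent-site entropy formula used here is an assumption standing in for the paper's actual estimator, and all numbers are hypothetical.

```python
import numpy as np

kB = 0.0019872  # Boltzmann constant in kcal/(mol K)

def free_energy(avg_energy, site_probs, T=300.0):
    """Free energy F = <E> - T*S from the MC average energy and per-site
    protonation probabilities, using an independent-site entropy
    (assumed form, not necessarily the paper's exact expression)."""
    p = np.clip(np.asarray(site_probs), 1e-12, 1.0 - 1e-12)
    S = -kB * np.sum(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    return avg_energy - T * S

# Hypothetical MC output for a protein with 5 ionizable sites.
print(free_energy(-120.0, [0.9, 0.5, 0.2, 0.99, 0.35]))
```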
Executive report : effects of changing HOV lane occupancy requirements : El Monte busway case study.
DOT National Transportation Integrated Search
2002-09-01
In 1999, the California Legislature passed Senate Bill 63, which lowered the vehicle-occupancy requirement on the El Monte Busway on the San Bernardino (I-10) Freeway from three persons per vehicle (3+) to two persons per vehicle (2+) full time. The ...
Analytical model of coincidence resolving time in TOF-PET
NASA Astrophysics Data System (ADS)
Wieczorek, H.; Thon, A.; Dey, T.; Khanin, V.; Rodnyi, P.
2016-06-01
The coincidence resolving time (CRT) of scintillation detectors is the parameter determining noise reduction in time-of-flight PET. We derive an analytical CRT model based on the statistical distribution of photons for two different prototype scintillators. For the first one, characterized by single exponential decay, CRT is proportional to the decay time and inversely proportional to the number of photons, with a square root dependence on the trigger level. For the second scintillator prototype, characterized by exponential rise and decay, CRT is proportional to the square root of the product of rise time and decay time divided by the doubled number of photons, and it is nearly independent of the trigger level. This theory is verified by measurements of scintillation time constants, light yield and CRT on scintillator sticks. Trapping effects are taken into account by defining an effective decay time. We show that in terms of signal-to-noise ratio, CRT is as important as patient dose, imaging time or PET system sensitivity. The noise reduction effect of better timing resolution is verified and visualized by Monte Carlo simulation of a NEMA image quality phantom.
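The two limiting expressions stated above translate directly into code. The proportionality constants are set to unity and the photon number and time constants are assumed values, so the outputs indicate scaling behavior rather than calibrated CRT predictions.

```python
import numpy as np

def crt_single_exp(tau_d, n_photons, trigger_level):
    """CRT for a single-exponential-decay scintillator: proportional to
    the decay time over the photon number, with a square-root dependence
    on the trigger level (proportionality constant set to 1)."""
    return tau_d / n_photons * np.sqrt(trigger_level)

def crt_rise_decay(tau_r, tau_d, n_photons):
    """CRT for a scintillator with exponential rise and decay:
    sqrt(rise * decay / (2 N)), nearly independent of the trigger level."""
    return np.sqrt(tau_r * tau_d / (2.0 * n_photons))

# LYSO-like numbers (assumed): 40 ns decay, 70 ps rise, 5000 photons.
print(crt_single_exp(40e-9, 5000, trigger_level=1))   # ~8 ps
print(crt_rise_decay(70e-12, 40e-9, 5000))            # ~17 ps
```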
Quantum Monte Carlo: Faster, More Reliable, And More Accurate
NASA Astrophysics Data System (ADS)
Anderson, Amos Gerald
2010-06-01
The Schrodinger Equation has been available for about 83 years, but today we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature but practical, since we are held back by a lack of sufficient computing power. Consequently, effort is applied to finding acceptable approximations that facilitate timely solutions. In the meantime, computer technology has begun advancing rapidly, changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes, and thereby lift some approximations, incredible new opportunities await. Over the last decade, we have seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by favoring processor quantity over quality, graphics cards have become capable enough to be useful to some scientists. In this thesis, we explore the first known application of a graphics card to computational chemistry by rewriting our Quantum Monte Carlo software in the requisite "data parallel" formalism. We find that, notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two-particle correlation functions, designed with both flexibility and simplicity in mind, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation by manipulating configuration weights, thus facilitating efficient and robust calculations. Our combination of Generalized Valence Bond wavefunctions, improved correlation functions, and stabilized weighting techniques for calculations run on graphics cards represents a new way of using Quantum Monte Carlo to study arbitrarily sized molecules.
Vector Mesons in Cold Nuclear Matter
NASA Astrophysics Data System (ADS)
Rodrigues, Tulio E.; Dias de Toledo Arruda-Neto, João
2013-03-01
The attenuation of vector mesons in cold nuclear matter is studied through the mechanism of incoherent photoproduction off complex nuclei. The latter is described via the time-dependent multi-collisional Monte Carlo (MCMC) intranuclear cascade model. The results for the transparency ratios of ω mesons reproduce previous measurements of CB-ELSA/TAPS with an inelastic ωN cross section around 40 mb for pω ~ 1.1 GeV/c. The corresponding in-medium width (nuclear rest frame) is extracted dynamically from the algorithm and depends on the average nuclear density ρN and the target nucleus: ~49.2 MeV/c² for carbon (ρN ≈ 0.114 fm⁻³) and ~77.3 MeV/c² for lead (ρN ≈ 0.137 fm⁻³). The calculations fail to reproduce the much stronger absorption observed at JLab assuming the same inelastic cross section, and the discrepancy between the two experiments remains a challenge.
On the effective point of measurement in megavoltage photon beams.
Kawrakow, Iwan
2006-06-01
This paper presents a numerical investigation of the effective point of measurement of thimble ionization chambers in megavoltage photon beams using Monte Carlo simulations with the EGSNRC system. It is shown that the effective point of measurement for relative photon beam dosimetry depends on every detail of the chamber design, including the cavity length, the mass density of the wall material, and the size of the central electrode, in addition to the cavity radius. Moreover, the effective point of measurement also depends on the beam quality and the field size. The paper therefore argues that the upstream shift of 0.6 times the cavity radius, recommended in current dosimetry protocols, is inadequate for accurate relative photon beam dosimetry, particularly in the build-up region. On the other hand, once the effective point of measurement is selected appropriately, measured depth-ionization curves can be equated to measured depth-dose curves for all depths within +/- 0.5%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yanhui, E-mail: huangy12@rpi.edu; Schadler, Linda S.
The high field charge injection and transport properties in reinforced silicone dielectrics were investigated by measuring the time-dependent space charge distribution and the current under dc conditions up to the breakdown field and were compared with the properties of other dielectric polymers. It is argued that the energy and spatial distribution of localized electronic states are crucial in determining these properties for polymer dielectrics. Tunneling to localized states likely dominates the charge injection process. A transient transport regime arises due to the relaxation of charge carriers into deep traps at the energy band tails and is successfully verified by a Monte Carlo simulation using the multiple-hopping model. The charge carrier mobility is found to be highly heterogeneous due to the non-uniform trapping. The slow moving electron packet exhibits a negative field dependent drift velocity possibly due to the spatial disorder of traps.
NASA Astrophysics Data System (ADS)
Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli
2018-04-01
Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating the posterior parameter distribution with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capabilities and efficiency of the sampler depend strongly on the efficiency of the move step. In this paper we present a new SMC sampler, the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handling the unknown static parameters of hydrological models. The PEM-SMC sampler is inspired by the work of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, the differential evolution algorithm and the Metropolis-Hastings algorithm into the SMC framework. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrological model considering first parameter uncertainty only and then parameter and input uncertainty simultaneously, show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high-dimensional problems. The study also indicates that it may be important to account for model structural uncertainty by using multiple different hydrological models in the SMC framework in future work.
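For orientation, here is a stripped-down likelihood-tempered SMC sampler with the reweight/resample/move structure that PEM-SMC builds on; the bimodal toy target, the tempering schedule, and the plain random-walk Metropolis move are illustrative assumptions (PEM-SMC itself adds genetic and differential-evolution moves, which are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(2)

def log_prior(theta):
    return -0.5 * (theta / 3.0) ** 2

def log_like(theta):
    """Bimodal toy likelihood standing in for a hydrologic-model posterior."""
    return np.logaddexp(-0.5 * ((theta - 2.0) / 0.3) ** 2,
                        -0.5 * ((theta + 2.0) / 0.3) ** 2)

n = 2000
betas = np.linspace(0.0, 1.0, 30)          # likelihood tempering schedule
theta = rng.normal(0.0, 3.0, n)            # particles drawn from the prior
logw = np.zeros(n)

for b0, b1 in zip(betas[:-1], betas[1:]):
    logw += (b1 - b0) * log_like(theta)    # reweight
    w = np.exp(logw - logw.max()); w /= w.sum()
    theta = theta[rng.choice(n, n, p=w)]   # multinomial resampling
    logw[:] = 0.0
    for _ in range(5):                     # Metropolis-Hastings move step
        prop = theta + rng.normal(0.0, 0.5, n)
        dlog = (b1 * (log_like(prop) - log_like(theta))
                + log_prior(prop) - log_prior(theta))
        accept = np.log(rng.uniform(size=n)) < dlog
        theta = np.where(accept, prop, theta)

print("fraction of particles in the +2 mode:", np.mean(theta > 0))
```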
Monte Carlo simulation of the back-diffusion of electrons in nitrogen
NASA Astrophysics Data System (ADS)
Radmilović-Radjenović, M.; Nina, A.; Nikitović, Ž.
2009-01-01
In this paper, the process of back-diffusion in nitrogen is studied by means of Monte Carlo simulations. In particular we analyze the influence of different aspects of back-diffusion in order to simplify the models of plasma displays, low pressure gas breakdown and detectors of high energy particles. The obtained simulation results show that the escape coefficient depends strongly on the reflection coefficient and the initial energy of electrons. It was also found that the back-diffusion range and number of collisions before returning to the cathode in nitrogen are smaller than those in argon for similar conditions.
Stochastic evaluation of second-order many-body perturbation energies.
Willow, Soohaeng Yoo; Kim, Kwang S; Hirata, So
2012-11-28
With the aid of the Laplace transform, the canonical expression of the second-order many-body perturbation correction to an electronic energy is converted into the sum of two 13-dimensional integrals, the 12-dimensional parts of which are evaluated by Monte Carlo integration. Weight functions are identified that are analytically normalizable, are finite and non-negative everywhere, and share the same singularities as the integrands. They thus generate appropriate distributions of four-electron walkers via the Metropolis algorithm, yielding correlation energies of small molecules within a few mE_h of the correct values after 10^8 Monte Carlo steps. This algorithm does away with the integral transformation as the hotspot of the usual algorithms, has a far superior size dependence of cost, does not suffer from the sign problem of some quantum Monte Carlo methods, and is potentially easily parallelizable and extensible to other, more complex electron-correlation theories.
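The weight-function idea above is easy to demonstrate in one dimension: sample Metropolis walkers from a normalized weight w that shares the integrand's singularity, and average f/w. The integrand, weight, walker count, and step count below are all toy choices, not the 12-dimensional MP2 integrals.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy integral with an integrable singularity at x = 0:
#   I = int_0^1 x^(-1/2) e^(-x) dx  (exact value ~ 1.4937).
# Weight shares the singularity and is normalized on (0, 1):
#   w(x) = x^(-1/2) / 2,  so  f/w = 2 e^(-x).
w = lambda x: 0.5 / np.sqrt(x)

x = np.full(1000, 0.5)       # Metropolis walkers distributed per w
est = []
for step in range(4000):
    prop = rng.uniform(0.0, 1.0, x.size)            # independence proposal
    accept = rng.uniform(size=x.size) < w(prop) / w(x)
    x = np.where(accept, prop, x)
    if step > 500:                                   # discard burn-in
        # f/w simplifies analytically to 2 exp(-x); its average over
        # walkers distributed as w estimates the integral.
        est.append(np.mean(2.0 * np.exp(-x)))

print("MC estimate:", np.mean(est))
```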
Tringe, J. W.; Ileri, N.; Levie, H. W.; ...
2015-08-01
We use Molecular Dynamics and Monte Carlo simulations to examine molecular transport phenomena in nanochannels, explaining the four orders of magnitude difference in wheat germ agglutinin (WGA) protein diffusion rates observed by fluorescence correlation spectroscopy (FCS) and by direct imaging of fluorescently-labeled proteins. We first use the ESPResSo Molecular Dynamics code to estimate the surface transport distance for neutral and charged proteins. We then employ a Monte Carlo model to calculate the paths of protein molecules on surfaces and in the bulk liquid transport medium. Our results show that the transport characteristics depend strongly on the degree of molecular surface coverage. Atomic force microscope characterization of surfaces exposed to WGA proteins for 1000 s shows large protein aggregates consistent with the predicted coverage. These calculations and experiments provide useful insight into the details of molecular motion in confined geometries.
Wada, Takao; Ueda, Noriaki
2013-01-01
The process of low pressure organic vapor phase deposition (LP-OVPD) controls the growth of amorphous organic thin films, where the source gases (Alq3 molecules, etc.) are introduced into a hot wall reactor via an injection barrel using an inert carrier gas (N2). The method gives good control of substrate-level properties such as dopant concentration, deposition rate, and thin-film thickness uniformity. In this paper, we present LP-OVPD simulation results using direct simulation Monte Carlo-Neutrals (Particle-PLUS neutral module), commercial software adopting the direct simulation Monte Carlo method. When the evaporation rate is estimated properly from experimental vaporization enthalpies, the calculated deposition rates on the substrate agree well with the experimental dependence on carrier gas flow rate and source cell temperature. PMID:23674843
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth
2007-05-21
The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
Prokhorov, Alexander; Prokhorova, Nina I
2012-11-20
We applied the bidirectional reflectance distribution function (BRDF) model consisting of diffuse, quasi-specular, and glossy components to the Monte Carlo modeling of spectral effective emissivities for nonisothermal cavities. A method for extending a monochromatic three-component (3C) BRDF model to a continuous spectral range is proposed. The initial data for this method are the BRDFs measured in the plane of incidence at a single wavelength and several incidence angles, and the directional-hemispherical reflectance measured at one incidence angle within a finite spectral range. We propose a Monte Carlo algorithm for calculating spectral effective emissivities of nonisothermal cavities whose internal surface is described by the wavelength-dependent 3C BRDF model. The results obtained for a cylindroconical nonisothermal cavity are discussed and compared with results obtained using the conventional specular-diffuse model.
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the Graphics Processing Unit (GPU) to accelerate the rescaling of single Monte Carlo runs so that diffuse reflectance values can be calculated rapidly for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code, and developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches; specifically, it generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor for GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
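The rescaling trick itself is independent of the GPU: a single baseline run stores each detected photon's path length, and new absorption coefficients are scored by Beer-Lambert reweighting. The sketch below assumes such stored path lengths (generated here from a made-up gamma distribution rather than an actual photon transport run).

```python
import numpy as np

rng = np.random.default_rng(5)

# One baseline "white" Monte Carlo run: pretend we stored, for each photon
# that reached the detector, its total path length in tissue (mm).
# Hypothetical values; a real run would come from the transport code.
path_lengths = rng.gamma(shape=3.0, scale=2.0, size=200000)

def diffuse_reflectance(mu_a, paths=path_lengths):
    """Re-score the stored run for a new absorption coefficient mu_a
    (1/mm) using Beer-Lambert weighting; no new transport needed."""
    return np.mean(np.exp(-mu_a * paths))

# Sweep absorption values at negligible cost compared with re-simulation.
for mu_a in (0.001, 0.01, 0.1):
    print(mu_a, diffuse_reflectance(mu_a))
```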
Numerical integration of detector response functions via Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.
2017-06-13
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
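The ~1000× speedup comes from replacing a fresh transport run with a fold of the tabulated response against a source spectrum, i.e., a matrix-vector product. The sketch below uses a Gaussian-smearing response matrix and a Maxwellian neutron spectrum as stand-ins for the simulated Chi-Nu response and a real trial spectrum.

```python
import numpy as np

# Energy grid and a hypothetical detector response matrix R[i, j]:
# probability that a particle emitted in source bin j is recorded in
# measured bin i (Gaussian smearing stands in for the simulated response).
E = np.linspace(0.1, 10.0, 200)                       # MeV
sigma = 0.05 * E                                       # resolution model
R = np.exp(-0.5 * ((E[:, None] - E[None, :]) / sigma[None, :]) ** 2)
R /= R.sum(axis=0, keepdims=True)                      # column-normalize

# A trial prompt-fission-neutron spectrum (Maxwellian, T = 1.42 MeV).
phi = np.sqrt(E) * np.exp(-E / 1.42)

measured = R @ phi    # folding: near-instant, vs. a fresh Monte Carlo run
print(measured[:5])
```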
Tafen, De Nyago
2015-02-14
The diffusion of dilute hydrogen in fcc Ni–Al and Ni–Fe binary alloys was examined using the kinetic Monte Carlo method with input kinetic parameters obtained from first-principles density functional theory. The simulation implements a computationally efficient energy barrier model that describes the configuration dependence of the hydrogen hopping. The predicted hydrogen diffusion coefficients in Ni and Ni89.4Fe10.6 compare well with the available experimental data. In Ni–Al, the model predicts lower hydrogen diffusivity than in Ni. Overall, the diffusion prefactors and effective activation energies of H in Ni–Fe and Ni–Al depend on the concentration of the alloying element, and the changes in their values result from the short-range-order (nearest-neighbor) effect on the interstitial diffusion of hydrogen in fcc Ni-based alloys.
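A rejection-free kinetic Monte Carlo step of the kind used here picks each hop with probability proportional to its Arrhenius rate and advances the clock by an exponential waiting time. The sketch below assumes a fixed set of hop barriers (made-up values; in the paper these come from DFT and depend on the local solute configuration).

```python
import numpy as np

rng = np.random.default_rng(6)

kB, T = 8.617e-5, 300.0    # Boltzmann constant (eV/K), temperature (K)
nu0 = 1e13                 # attempt frequency in 1/s (assumed)

def kmc_step(barriers_eV):
    """One rejection-free KMC step: choose a hop with probability
    proportional to its Arrhenius rate, then advance the clock by an
    exponentially distributed waiting time."""
    rates = nu0 * np.exp(-barriers_eV / (kB * T))
    total = rates.sum()
    hop = rng.choice(len(rates), p=rates / total)
    dt = rng.exponential(1.0 / total)
    return hop, dt

# Hypothetical barriers for the hops available to one interstitial H atom,
# shifted by nearby solute atoms (made-up values, not the DFT barriers).
barriers = 0.42 + rng.normal(0.0, 0.03, 12)

t = 0.0
for _ in range(10000):
    hop, dt = kmc_step(barriers)
    t += dt
print(f"simulated time for 10000 hops: {t:.3e} s")
```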
NASA Astrophysics Data System (ADS)
Ram, Farangis; De Graef, Marc
2018-04-01
In an electron backscatter diffraction pattern (EBSP), the angular distribution of backscattered electrons (BSEs) depends on their energy. Monte Carlo modeling of their depth and energy distributions suggests that the highest energy BSEs are more likely to hit the bottom of the detector than the top. In this paper, we examine experimental EBSPs to validate the modeled angular BSE distribution. To that end, the Kikuchi bandlet method is employed to measure the width of Kikuchi bands in both modeled and measured EBSPs. The results show that in an EBSP obtained with a 15 keV primary probe, the width of a Kikuchi band varies by about 0.4° from the bottom of the EBSD detector to its top. The same is true for a simulated pattern that is composed of BSEs with 5 keV to 15 keV energies, which validates the Monte Carlo simulations.
Reducing statistical uncertainties in simulated organ doses of phantoms immersed in water
Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.; ...
2016-08-13
In this study, methods are addressed to reduce the computational time needed to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared, including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10^5 when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.
Fisicaro, G; Pelaz, L; Lopez, P; La Magna, A
2012-09-01
Pulsed laser irradiation of damaged solids promotes ultrafast nonequilibrium kinetics, on the submicrosecond scale, leading to microscopic modifications of the material state. Reliable theoretical predictions of this evolution can be achieved only by simulating particle interactions in the presence of large and transient gradients of the thermal field. We propose a kinetic Monte Carlo (KMC) method for the simulation of damaged systems in the extremely far-from-equilibrium conditions caused by laser irradiation. The reference systems are nonideal crystals containing point defect excesses an order of magnitude larger than the equilibrium density, due to a preirradiation ion implantation process. The thermal and, where relevant, melting problem is solved within the phase-field methodology, and the numerical solutions for the space- and time-dependent thermal field are dynamically coupled to the KMC code. The formalism, implementation, and related tests of our computational code are discussed in detail. As an application example we analyze the evolution of the defect system caused by P ion implantation in Si under nanosecond pulsed irradiation. The simulation results suggest a significant annihilation of the implantation damage, which can be well controlled by the laser fluence.
Monte Carlo simulations of polyelectrolytes inside viral capsids.
Angelescu, Daniel George; Bruinsma, Robijn; Linse, Per
2006-04-01
Structural features of polyelectrolytes such as single-stranded RNA or double-stranded DNA confined inside viral capsids, and the thermodynamics of the encapsidation of the polyelectrolyte into the viral capsid, have been examined for various polyelectrolyte lengths using a coarse-grained model solved by Monte Carlo simulations. The capsid was modeled as a spherical shell with embedded charges and the genome as a linear jointed chain of oppositely charged beads, with sizes corresponding to those of a scaled-down T=3 virus. Counterions were explicitly included, but no salt was added. The encapsidated chain was found to be predominantly located at the inner capsid surface, in a disordered manner for flexible chains and in a spool-like structure for stiff chains. The distribution of the small ions was strongly dependent on the polyelectrolyte-capsid charge ratio. The encapsidation enthalpy was negative and its magnitude decreased with increasing polyelectrolyte length, whereas the encapsidation entropy displayed a maximum when the capsid and polyelectrolyte had equal absolute charge. The encapsidation process remained thermodynamically favorable for genome charges up to ca. 3.5 times the capsid charge. The chain stiffness had only a relatively weak effect on the thermodynamics of the encapsidation.
Quantum Monte Carlo with very large multideterminant wavefunctions.
Scemama, Anthony; Applencourt, Thomas; Giner, Emmanuel; Caffarel, Michel
2016-07-01
An algorithm to compute efficiently the first two derivatives of (very) large multideterminant wavefunctions for quantum Monte Carlo calculations is presented. The calculation of determinants and their derivatives is performed using the Sherman-Morrison formula for updating the inverse Slater matrix. An improved implementation based on the reduction of the number of column substitutions and on a very efficient implementation of the calculation of the scalar products involved is presented. It is emphasized that multideterminant expansions contain in general a large number of identical spin-specific determinants: for typical configuration interaction-type wavefunctions the number of unique spin-specific determinants N_det^σ (σ = ↑, ↓) with a non-negligible weight in the expansion is of order O(√N_det). We show that a careful implementation of the calculation of the N_det-dependent contributions can make this step negligible enough that in practice the algorithm scales as the total number of unique spin-specific determinants, N_det^↑ + N_det^↓, over a wide range of total numbers of determinants (here, N_det up to about one million), thus greatly reducing the total computational cost. Finally, a new truncation scheme for the multideterminant expansion is proposed so that larger expansions can be considered without increasing the computational time. The algorithm is illustrated with all-electron fixed-node diffusion Monte Carlo calculations of the total energy of the chlorine atom. Calculations using a trial wavefunction including about 750,000 determinants, with a computational cost increase of only ∼400× compared to a single-determinant calculation, are shown to be feasible.
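A one-column Sherman-Morrison update, the kernel of the scheme above, replaces an O(n^3) determinant and inverse recomputation with O(n^2) work. The sketch below verifies the update against direct recomputation on a random matrix; it is generic linear algebra, not the authors' optimized implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

def replace_column(A_inv, det_A, v, j):
    """Sherman-Morrison update for replacing column j of a Slater matrix
    with vector v: returns the new inverse and determinant in O(n^2)."""
    w = A_inv @ v                 # O(n^2) work
    ratio = w[j]                  # det(A_new) / det(A)
    row_j = A_inv[j].copy()       # row j of the old inverse
    w[j] -= 1.0                   # w is now A_inv @ (v - old column j)
    A_inv_new = A_inv - np.outer(w, row_j) / ratio
    return A_inv_new, det_A * ratio

# Verify against a direct O(n^3) recomputation.
n, j = 6, 2
A = rng.normal(size=(n, n))
v = rng.normal(size=n)
A_inv_new, det_new = replace_column(np.linalg.inv(A), np.linalg.det(A), v, j)

A_direct = A.copy(); A_direct[:, j] = v
print(np.allclose(det_new, np.linalg.det(A_direct)),
      np.allclose(A_inv_new, np.linalg.inv(A_direct)))
```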
The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-03-01
Given sparse or low-quality radial velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and Markov chain Monte Carlo (MCMC) posterior sampling over the orbital parameters. Here we create a custom Monte Carlo sampler for sparse or noisy radial velocity measurements of two-body systems that can produce posterior samples for orbital parameters even when the likelihood function is poorly behaved. The six standard orbital parameters for a binary system can be split into four nonlinear parameters (period, eccentricity, argument of pericenter, phase) and two linear parameters (velocity amplitude, barycenter velocity). We capitalize on this by building a sampling method in which we densely sample the prior probability density function (pdf) in the nonlinear parameters and perform rejection sampling using a likelihood function marginalized over the linear parameters. With sparse or uninformative data, the sampling obtained by this rejection sampling is generally multimodal and dense. With informative data, the sampling becomes effectively unimodal but too sparse: in these cases we follow the rejection sampling with standard MCMC. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still informative and can be used in hierarchical (population) modeling. We give some examples that show how the posterior pdf depends sensitively on the number and time coverage of the observations and their uncertainties.
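In the spirit of this scheme, the sketch below densely samples the prior over two nonlinear parameters (period and phase) of a toy radial-velocity model, marginalizes the likelihood analytically over the two linear parameters (amplitude and barycenter velocity) under Gaussian priors, and then rejection-samples. The data, priors, and widths are all invented; this is not The Joker's implementation.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy radial-velocity data (hypothetical): v0 + K sin(2 pi t / P + phi).
t_obs = np.sort(rng.uniform(0.0, 200.0, 5))
y = 3.0 + 8.0 * np.sin(2 * np.pi * t_obs / 37.0 + 1.1)
y += rng.normal(0.0, 1.0, t_obs.size)

def log_marginal(P, phi, sigma=1.0, tau=10.0):
    """Likelihood marginalized analytically over the linear parameters
    (amplitude K, barycenter velocity v0), each with a zero-mean
    Gaussian prior of width tau."""
    M = np.column_stack([np.sin(2 * np.pi * t_obs / P + phi),
                         np.ones_like(t_obs)])
    C = sigma ** 2 * np.eye(t_obs.size) + tau ** 2 * (M @ M.T)
    _, logdet = np.linalg.slogdet(2 * np.pi * C)
    return -0.5 * (y @ np.linalg.solve(C, y) + logdet)

# Densely sample the prior pdf in the nonlinear parameters ...
n = 20000
P = np.exp(rng.uniform(np.log(5.0), np.log(500.0), n))  # log-uniform period
phi = rng.uniform(0.0, 2.0 * np.pi, n)
ll = np.array([log_marginal(p, f) for p, f in zip(P, phi)])

# ... then rejection-sample against the marginalized likelihood.
keep = np.log(rng.uniform(size=n)) < ll - ll.max()
print(keep.sum(), "samples; surviving periods:", np.sort(P[keep])[:10].round(1))
```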
Monte Carlo methods and their analysis for Coulomb collisions in multicomponent plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobylev, A.V., E-mail: alexander.bobylev@kau.se; Potapenko, I.F., E-mail: firena@yandex.ru
2013-08-01
Highlights:
• A general approach to Monte Carlo methods for multicomponent plasmas is proposed.
• We show numerical tests for the two-component (electrons and ions) case.
• An optimal choice of parameters for speeding up the computations is discussed.
• A rigorous estimate of the error of approximation is proved.
Abstract: A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is the approximation of Landau–Fokker–Planck equations by Boltzmann equations of quasi-Maxwellian kind, meaning that the total collision frequency for the corresponding Boltzmann equation does not depend on the velocities. This makes the simulation process very simple, since the collision pairs can be chosen arbitrarily, without restriction. It is shown that this approach includes the well-known methods of Takizuka and Abe (1977) [12] and of Nanbu (1997) as particular cases, and generalizes the approach of Bobylev and Nanbu (2000). The numerical scheme of this paper is simpler than the schemes of Takizuka and Abe [12] and of Nanbu. We derive it for the general case of multicomponent plasmas and show numerical tests for the two-component (electrons and ions) case. An optimal choice of parameters for speeding up the computations is also discussed. It is also proved that the order of approximation is not worse than O(√ε), where ε is a parameter of the approximation, equivalent to the time step Δt in earlier methods. A similar estimate is obtained for the methods of Takizuka and Abe and of Nanbu.
Little, Mark P; Kwon, Deukwoo; Zablotska, Lydia B; Brenner, Alina V; Cahoon, Elizabeth K; Rozhko, Alexander V; Polyanskaya, Olga N; Minenko, Victor F; Golovanov, Ivan; Bouville, André; Drozdovitch, Vladimir
2015-01-01
The excess incidence of thyroid cancer in Ukraine and Belarus observed a few years after the Chernobyl accident is considered to be largely the result of 131I released from the reactor. Although the Belarus thyroid cancer prevalence data have been previously analyzed, no account was taken of dose measurement error. We examined dose-response patterns in a thyroid screening prevalence cohort of 11,732 persons aged under 18 at the time of the accident, diagnosed during 1996-2004, who had direct thyroid 131I activity measurement, and were resident in the most radioactively contaminated regions of Belarus. Three methods of dose-error correction (regression calibration, Monte Carlo maximum likelihood, Bayesian Markov chain Monte Carlo) were applied. There was a statistically significant (p<0.001) increasing dose-response for prevalent thyroid cancer, irrespective of the regression-adjustment method used. Without adjustment for dose errors the excess odds ratio was 1.51 Gy^-1 (95% CI 0.53, 3.86), which was reduced by 13% when regression-calibration adjustment was used, to 1.31 Gy^-1 (95% CI 0.47, 3.31). The Monte Carlo maximum likelihood method yielded an excess odds ratio of 1.48 Gy^-1 (95% CI 0.53, 3.87), about 2% lower than the unadjusted analysis. The Bayesian method yielded a maximum posterior excess odds ratio of 1.16 Gy^-1 (95% BCI 0.20, 4.32), 23% lower than the unadjusted analysis. There were borderline-significant (p = 0.053-0.078) indications of downward curvature in the dose response, depending on the adjustment method used. There were also borderline-significant (p = 0.102) modifying effects of gender on the radiation dose trend, but no significant modifying effects of age at the time of the accident or of age at screening (p>0.2). In summary, the relatively small contribution of unshared classical dose error in the current study results in comparatively modest effects on the regression parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Liu, B; Liang, B
Purpose: The current CyberKnife treatment planning system (TPS) provides two dose calculation algorithms: Ray-tracing and Monte Carlo. The Ray-tracing algorithm is fast but less accurate, and it cannot handle the irregular fields introduced with the multi-leaf collimator on the CyberKnife M6 system. The Monte Carlo method has well-known accuracy, but the current version still takes a long time to finish dose calculations. The purpose of this paper is to develop a fast GPU-based collapsed-cone convolution/superposition (C/S) dose engine for the CyberKnife system that achieves both accuracy and efficiency. Methods: The TERMA distribution from a poly-energetic source was calculated in a beam's-eye-view coordinate system, which is GPU friendly and has linear complexity. The dose distribution was then computed by inversely collecting the energy depositions from all TERMA points along 192 collapsed-cone directions. EGSnrc user code was used to pre-calculate energy deposition kernels (EDKs) for a series of mono-energetic photons. The energy spectrum was reconstructed from the measured tissue maximum ratio (TMR) curve, and the TERMA-averaged cumulative kernels were then calculated. Beam hardening parameters and intensity profiles were optimized based on measurement data from the CyberKnife system. Results: The difference between measured and calculated TMR is less than 1% for all collimators except in the build-up regions. The calculated profiles also show good agreement with the measured doses, within 1% except in the penumbra regions. The developed C/S dose engine was also used to evaluate four clinical CyberKnife treatment plans; for heterogeneous cases, it showed better dose calculation accuracy than the Ray-tracing algorithm when compared against the Monte Carlo method. One beam takes several seconds to calculate, depending on collimator size and dose calculation grid. Conclusion: A GPU-based C/S dose engine has been developed for the CyberKnife system. It was proven to be efficient and accurate for clinical purposes and can easily be implemented in a TPS.
Gill, Samuel C; Lim, Nathan M; Grinaway, Patrick B; Rustenburg, Ariën S; Fass, Josh; Ross, Gregory A; Chodera, John D; Mobley, David L
2018-05-31
Accurately predicting protein-ligand binding affinities and binding modes is a major goal in computational chemistry, but even the prediction of ligand binding modes in proteins poses major challenges. Here, we focus on solving the binding mode prediction problem for rigid fragments. That is, we focus on computing the dominant placement, conformation, and orientations of a relatively rigid, fragment-like ligand in a receptor, and the populations of the multiple binding modes which may be relevant. This problem is important in its own right, but is even more timely given the recent success of alchemical free energy calculations. Alchemical calculations are increasingly used to predict binding free energies of ligands to receptors. However, the accuracy of these calculations is dependent on proper sampling of the relevant ligand binding modes. Unfortunately, ligand binding modes may often be uncertain, hard to predict, and/or slow to interconvert on simulation time scales, so proper sampling with current techniques can require prohibitively long simulations. We need new methods which dramatically improve sampling of ligand binding modes. Here, we develop and apply a nonequilibrium candidate Monte Carlo (NCMC) method to improve sampling of ligand binding modes. In this technique, the ligand is rotated and subsequently allowed to relax in its new position through alchemical perturbation before accepting or rejecting the rotation and relaxation as a nonequilibrium Monte Carlo move. When applied to a T4 lysozyme model binding system, this NCMC method shows over 2 orders of magnitude improvement in binding mode sampling efficiency compared to a brute force molecular dynamics simulation. This is a first step toward applying this methodology to pharmaceutically relevant binding of fragments and, eventually, drug-like molecules. We are making this approach available via our new Binding modes of ligands using enhanced sampling (BLUES) package which is freely available on GitHub.
Kis, Zoltán; Eged, Katalin; Voigt, Gabriele; Meckbach, Reinhard; Müller, Heinz
2004-02-01
External gamma exposures from radionuclides deposited on surfaces usually make the major contribution to the total dose to the public living in urban-industrial environments. The aim of the paper is to give an example of a calculation of the collective dose, and of the averted collective dose, due to the contamination and decontamination of deposition surfaces in a complex environment, based on the results of Monte Carlo simulations. The shielding effects of the structures in complex and realistic industrial environments (where productive and/or commercial activity is carried out) were computed by the Monte Carlo method. Several types of deposition areas (walls, roofs, windows, streets, lawn) were considered. Moreover, this paper summarizes the time dependence of the source strengths relative to a reference surface and gives a short overview of the mechanical and chemical intervention techniques applicable in this setting. An exposure scenario was designed based on a survey of average German and Hungarian supermarkets. In the first part of the paper the air kermas per photon per unit area due to each specific deposition area contaminated by 137Cs were determined at several arbitrary locations in the whole environment, relative to a reference value of 8.39 × 10^-4 pGy per γ m^-2. The calculations make it possible to assess separately the contribution of each specific deposition area to the collective dose. According to the current results, the roof and the paved area contribute the largest part (approximately 92%) of the total dose in the first year, taking into account the relative contamination of the deposition areas. When integrating over 10 or 50 y, these two surfaces remain the most important contributors, but the ratio shifts increasingly in favor of the roof. The decontamination of the roof and the paved area yields about 80-90% of the total averted collective dose in each calculated time period (1, 10, 50 y).
NASA Astrophysics Data System (ADS)
Laffaille, P.; Feunteun, E.; Lefeuvre, J.-C.
2000-10-01
At least 100 fish species are known to be present in the intertidal areas (estuaries, mudflats and salt marshes) of Mont Saint-Michel Bay. These and other comparable shallow marine coastal waters, such as estuaries and lagoons, play a nursery role for many fish species. In Europe, however, little attention has been paid to the value of tidal salt marshes for fishes. Between March 1996 and April 1999, 120 tides were sampled in a tidal creek. A total of 31 species were caught. This community was largely dominated by mullets (Liza ramada, 87% of the total biomass) and sand gobies (Pomatoschistus minutus and P. lozanoi, 82% of the total numbers). These species, together with Gasterosteus aculeatus, Syngnathus rostellatus, Dicentrarchus labrax, Mugil spp., Liza aurata and Sprattus sprattus, were the most frequent (>50% monthly frequency of occurrence). In Europe, salt marshes and their creeks are flooded only during high spring tides, so fishes invade this environment only during short immersion periods, and no species can be considered a marsh resident. Nevertheless, the salt marsh was colonized by fish every time the tide reached the creek, and during the short flood period the dominant fishes fed actively and exploited the high productivity. This study shows that there is little interannual variation in the fish community and that there are three 'seasons' in the fish fauna of the marsh. Marine straggler and marine estuarine-dependent species colonize the marshes between spring (the recruitment period in the bay) and autumn before returning to deeper adjacent waters. Estuarine fishes are present all year round, with maximum abundances at the end of summer. The presence of fishes confirms that this kind of wetland plays an important trophic and nursery role for these species. Differences in densities and stage distributions of these species across the Mont Saint-Michel systems (tidal mudflats, estuaries and tidal salt marshes) can reduce trophic competition.
Bayesian Analysis of Non-Gaussian Long-Range Dependent Processes
NASA Astrophysics Data System (ADS)
Graves, T.; Franzke, C.; Gramacy, R. B.; Watkins, N. W.
2012-12-01
Recent studies have strongly suggested that surface temperatures exhibit long-range dependence (LRD). The presence of LRD would hamper the identification of deterministic trends and the quantification of their significance. It is well established that LRD processes exhibit stochastic trends over rather long periods of time. Thus, accurate methods for discriminating between physical processes that possess long memory and those that do not are an important adjunct to climate modeling. We have used Markov Chain Monte Carlo algorithms to perform a Bayesian analysis of Auto-Regressive Fractionally-Integrated Moving-Average (ARFIMA) processes, which are capable of modeling LRD. Our principal aim is to obtain inference about the long memory parameter, d, with secondary interest in the scale and location parameters. We have developed a reversible-jump method enabling us to integrate over different model forms for the short memory component. We initially assume Gaussianity, and have tested the method on both synthetic and physical time series such as the Central England Temperature. Many physical processes, for example the Faraday time series from Antarctica, are highly non-Gaussian. We have therefore extended this work by weakening the Gaussianity assumption. Specifically, we assume a symmetric α-stable distribution for the innovations. Such processes provide good, flexible, initial models for non-Gaussian processes with long memory. We will present a study of the dependence of the posterior variance σ_d of the memory parameter d on the length of the time series considered. This will be compared with equivalent error diagnostics for other measures of d.
An at-site flood estimation method in the context of nonstationarity I. A simulation study
NASA Astrophysics Data System (ADS)
Gado, Tamer A.; Nguyen, Van-Thanh-Van
2016-04-01
The stationarity of annual flood peak records is the traditional assumption of flood frequency analysis. In some cases, however, as a result of land-use and/or climate change, this assumption is no longer valid. Therefore, new statistical models are needed to capture dynamically the change of probability density functions over time, in order to obtain reliable flood estimates. In this study, an innovative method for nonstationary flood frequency analysis is presented. The new method is based on detrending the flood series and applying the L-moments along with the GEV distribution to the transformed 'stationary' series (hereafter called the LM-NS method). The LM-NS method was assessed through a comparative study with the maximum likelihood (ML) method for the nonstationary GEV model, as well as with the stationary (S) GEV model. The comparative study, based on Monte Carlo simulations, was carried out for three nonstationary GEV models: a linear dependence of the mean on time (GEV1), a quadratic dependence of the mean on time (GEV2), and a linear dependence of both the mean and the log standard deviation on time (GEV11). The simulation results indicated that the LM-NS method performs better than the ML method for most of the cases studied, whereas the stationary method provides the least accurate results. An additional advantage of the LM-NS method is that it avoids the numerical problems (e.g., convergence problems) that may occur with the ML method when estimating parameters from small data samples.
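A minimal sketch of the detrend-then-fit idea for the GEV1 case on synthetic data; scipy's maximum-likelihood `genextreme.fit` stands in for the L-moment estimation used in the paper, and all trend and distribution parameters are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Synthetic annual flood peaks with a linear trend in the mean (GEV1-type).
years = np.arange(1960, 2020)
trend = 2.0 * (years - years[0])                 # assumed mean shift
peaks = trend + stats.genextreme.rvs(c=-0.1, loc=500, scale=80,
                                     size=years.size, random_state=rng)

# LM-NS idea: detrend to a quasi-stationary series, fit a GEV, then
# reattach the trend (ML fit here stands in for L-moments).
slope, intercept = np.polyfit(years, peaks, 1)
detrended = peaks - slope * (years - years.mean())
c, loc, scale = stats.genextreme.fit(detrended)

# 100-year flood estimate referenced to a chosen year (e.g. 2030).
q100_2030 = (stats.genextreme.ppf(0.99, c, loc, scale)
             + slope * (2030 - years.mean()))
print(f"100-year quantile in 2030: {q100_2030:.0f}")
```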
Gray: a ray tracing-based Monte Carlo simulator for PET
NASA Astrophysics Data System (ADS)
Freese, David L.; Olcott, Peter D.; Buss, Samuel R.; Levin, Craig S.
2018-05-01
Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within % when accounting for differences in peak NECR. We also estimate the peak NECR to be kcps, or within % of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
NASA Technical Reports Server (NTRS)
Richardson, Erin; Hays, M. J.; Blackwood, J. M.; Skinner, T.
2014-01-01
The Liquid Propellant Fragment Overpressure Acceleration Model (L-FOAM) is a tool developed by Bangham Engineering Incorporated (BEi) that produces a representative debris cloud from an exploding liquid-propellant launch vehicle. Here it is applied to the Core Stage (CS) of the National Aeronautics and Space Administration (NASA) Space Launch System (SLS) launch vehicle. A combination of probability density functions (PDFs) based on empirical data from rocket accidents and applicable tests, together with SLS-specific geometry, is combined in a MATLAB script to create a unique fragment catalogue each time L-FOAM is run, tailored for a Monte Carlo approach to risk analysis. By accelerating the debris catalogue with the BEi blast model for liquid hydrogen / liquid oxygen explosions, the result is a fully integrated code that models the destruction of the CS at a given point in its trajectory and generates hundreds of individual fragment catalogues with initial imparted velocities. The BEi blast model provides the blast size (radius) and strength (overpressure) as probabilities based on empirical data and anchored with analytical work. The coupling of the L-FOAM catalogue with the BEi blast model is validated with a simulation of the Project PYRO S-IV destruct test. When running a Monte Carlo simulation, L-FOAM can accelerate all catalogues with the same blast (mean blast, 2σ blast, etc.), or vary the blast size and strength based on their respective probabilities. L-FOAM then propagates these fragments until impact with the earth. Results from L-FOAM include a description of each fragment (dimensions, weight, ballistic coefficient, type and initial location on the rocket), imparted velocity from the blast, and impact data depending on the user's desired application. L-FOAM applies to both near-field (fragment impact on an escaping crew capsule) and far-field (fragment ground impact footprint) safety considerations. The user is thus able to use statistics from a Monte Carlo set of L-FOAM catalogues to quantify risk for a multitude of potential CS destruct scenarios. Examples include the effect of warning time on the survivability of an escaping crew capsule, or the maximum fragment velocities generated by the ignition of leaking propellants in internal cavities.
Persistent random walk of cells involving anomalous effects and random death
NASA Astrophysics Data System (ADS)
Fedotov, Sergei; Tan, Abby; Zubarev, Andrey
2015-04-01
The purpose of this paper is to incorporate a random death process into a persistent random walk model that produces sub-ballistic superdiffusion (a Lévy walk). We develop a stochastic two-velocity jump model of cell motility in which the switching rate depends on the time the cell has spent moving in one direction. It is assumed that the switching rate is a decreasing function of residence (running) time. This assumption leads to a power law for the velocity switching time distribution, and describes the anomalous persistence of cell motility: the longer the cell moves in one direction, the smaller the probability of switching to another direction becomes. We derive master equations for the cell densities with generalized switching terms involving tempered fractional material derivatives. We show that the random death of cells has an important implication for the transport process through tempering of the superdiffusive process. In the long-time limit we write stationary master equations in terms of exponentially truncated fractional derivatives, in which the death rate plays the role of tempering of the Lévy jump distribution. We find upper and lower bounds for the stationary profiles corresponding to ballistic transport and to diffusion with a death-rate-dependent diffusion coefficient. Monte Carlo simulations confirm these bounds.
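A toy Monte Carlo of such a two-velocity walk with random death is easy to set up: run times drawn from a Lomax (power-law) distribution mimic a switching rate that decays with running time, and an exponential death clock truncates each trajectory. All parameter values below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(10)

# Two-velocity jump model: a cell moves with speed v and flips direction
# at the end of each run.  A switching rate alpha/(tau0 + t_run) that
# decays with running time gives Lomax (power-law) run times, i.e. a
# Levy walk; cells also die at a constant rate `death`.
v, alpha, tau0, death = 1.0, 1.5, 1.0, 0.01

def final_position(t_max):
    x, t = 0.0, 0.0
    direction = rng.choice([-1.0, 1.0])
    t_death = rng.exponential(1.0 / death)        # random death time
    bound = min(t_max, t_death)
    while t < bound:
        run = tau0 * rng.pareto(alpha)            # power-law run time
        run = min(run, bound - t)                 # truncate at the end
        x += direction * v * run
        t += run
        direction *= -1.0                         # switch direction
    return x

positions = np.array([final_position(200.0) for _ in range(5000)])
print("RMS displacement:", np.sqrt(np.mean(positions ** 2)))
```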
Thermalization time scales for WIMP capture by the Sun in effective theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widmark, A., E-mail: axel.widmark@fysik.su.se
I study the process of dark matter capture by the Sun, under the assumption of a Weakly Interacting Massive Particle (WIMP), in the framework of non-relativistic effective field theory. Hypothetically, WIMPs from the galactic halo can scatter against atomic nuclei in the solar interior, settle to thermal equilibrium with the solar core and annihilate to produce an observable flux of neutrinos. In particular, I examine the thermalization process using Monte-Carlo integration of WIMP trajectories. I consider WIMPs in a mass range of 10–1000 GeV and WIMP-nucleon interaction operators with different dependence on spin and transferred momentum. I find that the density profiles of captured WIMPs are in accordance with a thermal profile described by the Sun's gravitational potential and core temperature. Depending on the operator that governs the interaction, the majority of the thermalization time is spent in either the solar interior or exterior. If normalizing the WIMP-nuclei interaction strength to a specific capture rate, I find that the thermalization time differs at most by 3 orders of magnitude between operators. In most cases of interest, the thermalization time is many orders of magnitude shorter than the age of the solar system.
Evaluation of the dosimetric properties of a diode detector for small field proton radiosurgery.
McAuley, Grant A; Teran, Anthony V; Slater, Jerry D; Slater, James M; Wroe, Andrew J
2015-11-08
The small fields and sharp gradients typically encountered in proton radiosurgery require high spatial resolution dosimetric measurements, especially below 1-2 cm diameters. Radiochromic film provides high resolution, but requires postprocessing and special handling. Promising alternatives are diode detectors with small sensitive volumes (SV) that are capable of high resolution and real-time dose acquisition. In this study we evaluated the PTW PR60020 proton dosimetry diode using radiation fields and beam energies relevant to radiosurgery applications. Energies of 127 and 157 MeV (9.7 to 15 cm range) and initial diameters of 8, 10, 12, and 20 mm were delivered using single-stage scattering and four modulations (0, 15, 30, and 60 mm) to a water tank in our treatment room. Depth dose and beam profile data were compared with the PTW Markus N23343 ionization chamber, EBT2 Gafchromic film, and Monte Carlo simulations. Transverse dose profiles were measured using the diode in "edge-on" orientation or EBT2 film. Diode response was linear with respect to dose, uniform with dose rate, and showed an orientation-dependent (i.e., beam parallel to, or perpendicular to, detector axis) response of less than 1%. Diode vs. Markus depth-dose profiles, as well as Markus relative dose ratio vs. simulated dose-weighted average lineal energy plots, suggest that any LET-dependent diode response is negligible from particle entrance up to the very distal portion of the SOBP for the energies tested. Finally, while not possible with the ionization chamber due to partial volume effects, accurate diode depth-dose measurements of 8, 10, and 12 mm diameter beams were obtained compared to Monte Carlo simulations. Because of the small SV that allows measurements without partial volume effects, the capability of submillimeter resolution (in edge-on orientation) that is crucial for small fields and high-dose gradients (e.g., penumbra, distal edge), and negligible LET dependence over nearly the full SOBP, the PTW proton diode proved to be a useful high-resolution, real-time metrology device for small proton field radiation measurements such as would be encountered in radiosurgery applications.
NASA Technical Reports Server (NTRS)
da Silva, Arlindo M.; Norris, Peter M.
2013-01-01
Part I presented a Monte Carlo Bayesian method for constraining a complex statistical model of GCM sub-gridcolumn moisture variability using high-resolution MODIS cloud data, thereby permitting large-scale model parameter estimation and cloud data assimilation. This part performs some basic testing of this new approach, verifying that it does indeed significantly reduce mean and standard deviation biases with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud top pressure, and that it also improves the simulated rotational-Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the OMI instrument. Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows finite jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast where the background state has a clear swath. This paper also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in the cloud observables on cloud vertical structure, beyond cloud top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification due to Riishojgaard (1998) provides some help in this respect, by better honoring inversion structures in the background state.
SU-E-T-455: Characterization of 3D Printed Materials for Proton Beam Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, W; Siderits, R; McKenna, M
2014-06-01
Purpose: The widespread availability of low cost 3D printing technologies provides an alternative fabrication method for customized proton range modifying accessories such as compensators and boluses. However, the material properties of the printed object depend on the printing technology used. In order to facilitate the application of 3D printing in proton therapy, this study investigated the stopping power of several printed materials using both proton pencil beam measurements and Monte Carlo simulations. Methods: Five 3–4 cm cubes fabricated using three 3D printing technologies (selective laser sintering, fused-deposition modeling and stereolithography) from five printers were investigated. The cubes were scanned on a CT scanner and the depth dose curves for a mono-energetic pencil beam passing through the material were measured using a large parallel plate ion chamber in a water tank. Each cube was measured from two directions (perpendicular and parallel to the printing plane) to evaluate the effects of the anisotropic material layout. The results were compared with GEANT4 Monte Carlo simulations using the manufacturer-specified material density and chemical composition data. Results: Compared with water, the range pull-back produced by the printed blocks varied among materials and corresponded well with the material CT Hounsfield units. The measurement results were in agreement with the Monte Carlo simulations. However, depending on the technology, inhomogeneity existed in the printed cubes, as evidenced by the CT images. The effect of such inhomogeneity on the proton beam remains to be investigated. Conclusion: Blocks printed by three different 3D printing technologies were characterized for proton beams with measurements and Monte Carlo simulations. The effects of the printing technologies on proton range and stopping power were studied. The derived results can be applied when such devices are used in proton radiotherapy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granville, DA; Sawakuchi, GO
2014-08-15
In this work, we demonstrate inconsistencies in commonly used Monte Carlo methods of scoring linear energy transfer (LET) in proton therapy beams. In particle therapy beams, the LET is an important parameter because the relative biological effectiveness (RBE) depends on it. LET is often determined using Monte Carlo techniques. We used a realistic Monte Carlo model of a proton therapy nozzle to score proton LET in spread-out Bragg peak (SOBP) depth-dose distributions. We used three different scoring and calculation techniques to determine average LET at varying depths within a 140 MeV beam with a 4 cm SOBP and a 250 MeV beam with a 10 cm SOBP. These techniques included fluence-weighted (Φ-LET) and dose-weighted average (D-LET) LET calculations from: 1) scored energy spectra converted to LET spectra through a lookup table, 2) directly scored LET spectra and 3) accumulated LET scored 'on-the-fly' during simulations. All protons (primary and secondary) were included in the scoring. Φ-LET was found to be less sensitive to changes in scoring technique than D-LET. In addition, the spectral scoring methods were sensitive to low-energy (high-LET) cutoff values in the averaging. Using cutoff parameters chosen carefully for consistency between techniques, we found variations in Φ-LET values of up to 1.6% and variations in D-LET values of up to 11.2% for the same irradiation conditions, depending on the method used to score LET. Variations were largest near the end of the SOBP, where the LET and energy spectra are broader.
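For reference, the two averages compared above follow the standard definitions: for a fluence spectrum Φ_i binned in LET values L_i, the fluence-weighted average is Φ-LET = Σ_i Φ_i L_i / Σ_i Φ_i and the dose-weighted average is D-LET = Σ_i Φ_i L_i² / Σ_i Φ_i L_i. The sketch below (a generic illustration, not the authors' scoring code) computes both and exposes the high-LET cutoff whose influence the abstract highlights.

```python
import numpy as np

def average_let(fluence, let, let_cutoff=None):
    """Fluence- and dose-weighted average LET from a binned spectrum.

    fluence    : per-bin proton fluence (arbitrary units)
    let        : per-bin LET values (e.g. keV/um)
    let_cutoff : optional high-LET cutoff, illustrating the sensitivity
                 to averaging limits noted in the abstract
    """
    fluence = np.asarray(fluence, dtype=float)
    let = np.asarray(let, dtype=float)
    if let_cutoff is not None:
        keep = let <= let_cutoff
        fluence, let = fluence[keep], let[keep]
    phi_let = np.sum(fluence * let) / np.sum(fluence)            # fluence-weighted
    d_let = np.sum(fluence * let ** 2) / np.sum(fluence * let)   # dose-weighted
    return phi_let, d_let

# A toy spectrum with a small high-LET tail: D-LET reacts to the cutoff
# much more strongly than Phi-LET, mirroring the sensitivity reported above.
let_bins = np.array([0.5, 1.0, 2.0, 5.0, 20.0, 80.0])
fluence = np.array([50.0, 30.0, 15.0, 4.0, 0.9, 0.1])
print(average_let(fluence, let_bins))
print(average_let(fluence, let_bins, let_cutoff=50.0))
```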
Multilevel ensemble Kalman filtering
Hoel, Hakon; Law, Kody J. H.; Tempone, Raul
2016-06-14
This study embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. Finally, the resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
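The multilevel idea can be sketched in a few lines. In the snippet below (an illustration under simplified assumptions, not the authors' filter), the expectation of a functional of the SDE solution on the finest grid is written as the telescoping sum E[P_L] = E[P_0] + Σ_{l=1}^{L} E[P_l - P_{l-1}], and each correction is estimated from Euler-Maruyama paths driven by the same Brownian increments on the fine and coarse grids; the drift, volatility, and sample counts are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def coupled_level(l, n_samples, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """One MLMC level for dX = mu*X dt + sigma*X dW (Euler-Maruyama).

    Returns samples of P_l - P_{l-1}, or of P_0 at the coarsest level."""
    n_fine = 2 ** l
    dt = T / n_fine
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_fine))
    xf = np.full(n_samples, x0)
    for k in range(n_fine):                       # fine-grid path
        xf = xf + mu * xf * dt + sigma * xf * dW[:, k]
    if l == 0:
        return xf
    xc = np.full(n_samples, x0)
    dWc = dW[:, 0::2] + dW[:, 1::2]               # same noise, coarser grid
    for k in range(n_fine // 2):                  # coupled coarse-grid path
        xc = xc + mu * xc * (2 * dt) + sigma * xc * dWc[:, k]
    return xf - xc

L, N = 5, 20000
estimate = sum(coupled_level(l, N).mean() for l in range(L + 1))
print(f"MLMC estimate of E[X_T]: {estimate:.4f}")
```

In a full MLMC implementation the number of samples per level is chosen from the estimated level variances to minimize total cost; the fixed sample count used here is only for brevity.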
Scaling properties of multiscale equilibration
NASA Astrophysics Data System (ADS)
Detmold, W.; Endres, M. G.
2018-04-01
We investigate the lattice spacing dependence of the equilibration time for a recently proposed multiscale thermalization algorithm for Markov chain Monte Carlo simulations. The algorithm uses a renormalization-group matched coarse lattice action and a prolongation operation to rapidly thermalize decorrelated initial configurations for evolution using a corresponding target lattice action defined at a finer scale. Focusing on nontopological long-distance observables in pure SU(3) gauge theory, we provide quantitative evidence that the slow modes of the Markov process, which provide the dominant contribution to the rethermalization time, have a suppressed contribution toward the continuum limit, despite their associated timescales increasing. Based on these numerical investigations, we conjecture that the prolongation operation used herein will produce ensembles that are indistinguishable from the target fine-action distribution for a sufficiently fine coupling at a given level of statistical precision, thereby eliminating the cost of rethermalization.
Promotion of initiated cells by radiation-induced cell inactivation.
Heidenreich, W F; Paretzke, H G
2008-11-01
Cells on the way to carcinogenesis can have a growth advantage relative to normal cells. It has been hypothesized that a radiation-induced growth advantage of these initiated cells might be induced by an increased cell replacement probability of initiated cells after inactivation of neighboring cells by radiation. Here Monte Carlo simulations extend this hypothesis for larger clones: The effective clonal expansion rate decreases with clone size. This effect is stronger for the two-dimensional than for the three-dimensional situation. The clones are irregular, far from a circular shape. An exposure-rate dependence of the effective clonal expansion rate could come in part from a minimal recovery time of the initiated cells for symmetric cell division.
Stability and phase transition of skyrmion crystals generated by Dzyaloshinskii-Moriya interaction
NASA Astrophysics Data System (ADS)
El Hog, Sahbi; Bailly-Reyre, Aurélien; Diep, H. T.
2018-06-01
We generate a crystal of skyrmions in two dimensions using a Heisenberg Hamiltonian including the ferromagnetic interaction J, the Dzyaloshinskii-Moriya interaction D, and an applied magnetic field H. The ground state (GS) is determined by minimizing the interaction energy. We show that the GS is a skyrmion crystal in a region of (D, H). The stability of this skyrmion crystalline phase at finite temperatures is shown by a study of the time-dependence of the order parameter using Monte Carlo simulations. We observe that the relaxation is very slow and follows a stretched exponential law. The skyrmion crystal phase is shown to undergo a transition to the paramagnetic state at a finite temperature.
The Transport Equation in Optically Thick Media: Discussion of IMC and its Diffusion Limit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szoke, A.; Brooks, E. D.
2016-07-12
We discuss the limits of validity of the Implicit Monte Carlo (IMC) method for the transport of thermally emitted radiation. The weakened coupling between the radiation and material energy of the IMC method causes defects in handling problems with strong transients. We introduce an approach to asymptotic analysis for the transport equation that emphasizes the fact that the radiation and material temperatures are always different in time-dependent problems, and we use it to show that IMC does not produce the correct diffusion limit. As this is a defect of IMC in the continuous equations, no improvement to its discretization canmore » remedy it.« less
NASA Astrophysics Data System (ADS)
Alekseev, V. A.; Krylova, D. D.
1996-02-01
The analytical investigation of Bloch equations is used to describe the main features of the 1D velocity selective coherent population trapping cooling scheme. For the initial stage of cooling the fraction of cooled atoms is derived in the case of a Gaussian initial velocity distribution. At very long times of interaction the fraction of cooled atoms and the velocity distribution function are described by simple analytical formulae and do not depend on the initial distribution. These results are in good agreement with those of Bardou, Bouchaud, Emile, Aspect and Cohen-Tannoudji based on statistical analysis in terms of Lévy flights and with Monte-Carlo simulations of the process.
NASA Astrophysics Data System (ADS)
Saverskiy, Aleksandr Y.; Dinca, Dan-Cristian; Rommel, J. Martin
The Intra-Pulse Multi-Energy (IPME) method of material discrimination mitigates the main disadvantages of the traditional "interlaced" approach: the ambiguity caused by sampling different regions of the cargo and the reduction of effective scanning speed. A novel concept of creating multi-energy probing pulses using a standing-wave structure allows a constant energy spectrum to be maintained while changing the time duration of each sub-pulse, and thus enables adaptive cargo inspection. Depending on the cargo density, the dose delivered to the inspected object is optimized for best material discrimination, maximum material penetration, or lowest dose to cargo. A model based on Monte-Carlo simulation, together with experimental reference points, was developed for the optimization of inspection conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo; Sterpin, Edmond
2016-04-15
Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%–1 mm. In spite of the limited memory bandwidth of the coprocessor, the simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used for in vivo range verification as well.
Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy.
Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe
2015-07-07
The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implant in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm(3) calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than a hour). bGPUMCD is a promising code that lets envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
Population Synthesis of Radio and γ-ray Millisecond Pulsars Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Gonthier, Peter L.; Billman, C.; Harding, A. K.
2013-04-01
We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and γ-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of ten radio surveys and by Fermi, predicting the MSP birth rate in the Galaxy. We follow a similar set of assumptions that we have used in previous, more constrained Monte Carlo simulations. The parameters associated with the birth distributions such as those for the accretion rate, magnetic field and period distributions are also free to vary. With the large set of free parameters, we employ Markov Chain Monte Carlo simulations to explore the large and small worlds of the parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and γ-ray pulsar characteristics. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.
Self-Learning Monte Carlo Method
NASA Astrophysics Data System (ADS)
Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large systems close to a phase transition or with strong frustration, for which local updates perform badly. In this work, we propose a new general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup. This work is supported by the DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-SC0010526.
Fixed forced detection for fast SPECT Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Cajgfinger, T.; Rit, S.; Létang, J. M.; Halty, A.; Sarrut, D.
2018-03-01
Monte-Carlo simulations of SPECT images are notoriously slow to converge due to the large ratio between the number of photons emitted and detected in the collimator. This work proposes a method to accelerate the simulations based on fixed forced detection (FFD) combined with an analytical response of the detector. FFD is based on a Monte-Carlo simulation but forces the detection of a photon in each detector pixel weighted by the probability of emission (or scattering) and transmission to this pixel. The method was evaluated with numerical phantoms and on patient images. We obtained differences with analog Monte Carlo lower than the statistical uncertainty. The overall computing time gain can reach up to five orders of magnitude. Source code and examples are available in the Gate V8.0 release.
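A toy version of the forcing step may make the idea concrete. In the sketch below (2D geometry, uniform attenuation, no collimator or detector response model, so purely an illustration of the weighting and not of the GATE implementation), every simulated decay deposits in every detector pixel a weight equal to the geometric emission probability toward that pixel multiplied by the transmission along the line of sight, instead of waiting for the rare analog detection.

```python
import numpy as np

rng = np.random.default_rng(2)

def ffd_projection(source_pts, activities, pixels, mu=0.15):
    """Toy fixed forced detection: each decay contributes a deterministic
    weight to every pixel (emission probability x transmission)."""
    image = np.zeros(len(pixels))
    for p, a in zip(source_pts, activities):
        for j, q in enumerate(pixels):
            d = np.linalg.norm(q - p)
            geom = 1.0 / (2.0 * np.pi * d)          # 2D geometric factor
            image[j] += a * geom * np.exp(-mu * d)  # attenuated contribution
    return image

pixels = np.stack([np.linspace(-10.0, 10.0, 64), np.full(64, 15.0)], axis=1)
sources = rng.uniform(-5.0, 5.0, size=(100, 2))     # decay positions (cm)
image = ffd_projection(sources, np.ones(100), pixels)
print(f"total forced-detection weight: {image.sum():.4f}")
```

Because every history contributes to every pixel, the variance per history drops dramatically, which is the source of the orders-of-magnitude gain in computing time reported above.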
NASA Technical Reports Server (NTRS)
Pinckney, John
2010-01-01
With the advent of high speed computing, Monte Carlo ray tracing techniques have become the preferred method for evaluating spacecraft orbital heating. Monte Carlo has its greatest advantage where there are many interacting surfaces. However, Monte Carlo programs are specialized programs that suffer from some inaccuracy, long calculation times and high purchase cost. A general orbital heating integral is presented here that is accurate, fast and runs on MathCad, a generally available engineering mathematics program. The integral is easy to read, understand and alter. The integral can be applied to unshaded primitive surfaces at any orientation. The method is limited to direct heating calculations. This integral formulation can be used for quick orbit evaluations and for spot checking Monte Carlo results.
NASA Astrophysics Data System (ADS)
Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim
2018-01-01
The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
Residence time of symmetric random walkers in a strip with large reflective obstacles
NASA Astrophysics Data System (ADS)
Ciallella, Alessandro; Cirillo, Emilio N. M.; Sohier, Julien
2018-05-01
We study the effect of a large obstacle on the so-called residence time, i.e., the time that a particle performing a symmetric random walk in a rectangular (two-dimensional, 2D) domain needs to cross the strip. We observe complex behavior: the residence time does not depend monotonically on the geometric properties of the obstacle, such as its width, length, and position. In some cases, due to the presence of the obstacle, the mean residence time is shorter than the one measured for the obstacle-free strip. We explain the residence time behavior by developing a one-dimensional (1D) analog of the 2D model in which the role of the obstacle is played by two defect sites having a smaller probability to be crossed than all the other regular sites. The 1D and 2D models behave similarly, but in the 1D case we are able to compute the residence time exactly, finding a perfect match with the Monte Carlo simulations.
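The 1D analog is easy to reproduce by Monte Carlo. In this sketch (illustrative parameters, not the exact defect model of the paper) a symmetric walker starts at the left end of a chain, reflects at the left wall, and the two defect sites can only be entered with a reduced probability eps; the residence time is the number of steps needed to reach the right end.

```python
import numpy as np

rng = np.random.default_rng(3)

def residence_time(n_sites=50, defects=(20, 30), eps=0.1):
    """Steps for a symmetric walker to cross a 1D chain with two defect sites."""
    x, t = 0, 0
    while x < n_sites:
        t += 1
        nxt = x + (1 if rng.random() < 0.5 else -1)
        if nxt < 0:
            nxt = 0                                  # reflecting left wall
        if nxt in defects and rng.random() > eps:
            nxt = x                                  # defect rejects the move
        x = nxt
    return t

times = [residence_time() for _ in range(2000)]
print(f"mean residence time: {np.mean(times):.0f} steps")
```

Sweeping eps and the defect positions lets one explore how the mean crossing time depends on the "obstacle" strength and placement in this simplified setting.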
Stochastic resonance in the majority vote model on regular and small-world lattices
NASA Astrophysics Data System (ADS)
Krawiecki, A.
2017-11-01
The majority vote model with two states on regular and small-world networks is considered under the influence of periodic driving. Monte Carlo simulations show that the time-dependent magnetization, playing the role of the output signal, exhibits maximum periodicity at nonzero values of the internal noise parameter q, which is manifested as the occurrence of a maximum of the spectral power amplification; the location of the maximum depends in a nontrivial way on the amplitude and frequency of the periodic driving as well as on the network topology. This indicates the appearance of stochastic resonance in the system as a function of the intensity of the internal noise. Besides, for low frequencies and for certain narrow ranges of the amplitudes of the periodic driving, double maxima of the spectral power amplification as a function of q occur, i.e., stochastic multiresonance appears. The above-mentioned results quantitatively agree with those obtained from numerical simulations of the mean-field equations for the time-dependent magnetization. In contrast, analytic solutions for the spectral power amplification obtained from the latter equations using the linear response approximation deviate significantly from the numerical results, since the effect of the periodic driving on the system is not small even for vanishing amplitude.
CloudMC: a cloud computing application for Monte Carlo simulation.
Miras, H; Jiménez, R; Miras, C; Gomà, C
2013-04-21
This work presents CloudMC, a cloud computing application, developed in Windows Azure®, the platform of the Microsoft® cloud, for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based; the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed for different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
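The reported figures pin down the non-parallelizable fraction through Amdahl's law, S(n) = 1 / ((1 - p) + p/n): a 30 h job finishing in 48.6 min on 64 instances is a speedup of about 37x, which implies a parallelizable fraction p of roughly 0.99. A short check (a worked example, not part of CloudMC):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n instances with parallelizable fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

n = 64
s = 30.0 * 60.0 / 48.6                      # observed speedup, ~37x
p = (1.0 - 1.0 / s) / (1.0 - 1.0 / n)       # invert Amdahl's law for p
print(f"parallelizable fraction p = {p:.4f}")             # ~0.988
print(f"predicted speedup on 128 instances: {amdahl_speedup(p, 128):.1f}")
```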
MC3: Multi-core Markov-chain Monte Carlo code
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan
2016-10-01
MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can tie multiple parameters to a single shared value or fix parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
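The Gelman-Rubin test mentioned above compares within-chain and between-chain variances across parallel chains. A minimal stand-alone version of the diagnostic (an illustration, not MC3's actual implementation) looks like this:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for an (m, n) array of m chains."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    w = chains.var(axis=1, ddof=1).mean()       # mean within-chain variance
    b = n * chains.mean(axis=1).var(ddof=1)     # between-chain variance
    var_post = (n - 1) / n * w + b / n          # pooled posterior variance
    return np.sqrt(var_post / w)

rng = np.random.default_rng(4)
chains = rng.normal(size=(4, 5000))             # four well-mixed chains
print(f"R-hat = {gelman_rubin(chains):.3f}")    # close to 1 when converged
```

Values close to 1 indicate that the chains have mixed; a common rule of thumb is to require R-hat below roughly 1.01-1.1 before trusting the posterior sample.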
An example of complex modelling in dentistry using Markov chain Monte Carlo (MCMC) simulation.
Helfenstein, Ulrich; Menghini, Giorgio; Steiner, Marcel; Murati, Francesca
2002-09-01
In the usual regression setting one regression line is computed for a whole data set. In a more complex situation, each person may be observed for example at several points in time and thus a regression line might be calculated for each person. Additional complexities, such as various forms of errors in covariables, may make a straightforward statistical evaluation difficult or even impossible. During recent years methods have been developed allowing the convenient analysis of problems where the data and the corresponding models show these and many other forms of complexity. The methodology makes use of a Bayesian approach and Markov chain Monte Carlo (MCMC) simulations. The methods allow the construction of increasingly elaborate models by building them up from local sub-models. The essential structure of the models can be represented visually by directed acyclic graphs (DAG). This attractive property allows communication and discussion of the essential structure and the substantial meaning of a complex model without needing algebra. After presentation of the statistical methods, an example from dentistry is presented in order to demonstrate their application and use. The dataset of the example had a complex structure; each of a set of children was followed up over several years. The number of new fillings in permanent teeth had been recorded at several ages. The dependent variables were markedly different from the normal distribution and could not be transformed to normality. In addition, explanatory variables were assumed to be measured with different forms of error. An illustration of how the corresponding models can be estimated conveniently via MCMC simulation, in particular 'Gibbs sampling', using the freely available software BUGS is presented. In addition, it is explored how the measurement error may influence the estimates of the corresponding coefficients. It is demonstrated that the effect of the independent variable on the dependent variable may be markedly underestimated if the measurement error is not taken into account ('regression dilution bias'). Markov chain Monte Carlo methods may be of great value to dentists in allowing the analysis of data sets which exhibit a wide range of different forms of complexity.
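The regression dilution bias mentioned at the end is easy to reproduce by simulation: classical measurement error of variance sigma_e^2 on a covariate of variance sigma_x^2 attenuates the estimated slope by the factor lambda = sigma_x^2 / (sigma_x^2 + sigma_e^2). A minimal demonstration with illustrative values (both variances equal to 1, so lambda = 0.5):

```python
import numpy as np

rng = np.random.default_rng(5)

n, true_slope = 100_000, 2.0
x = rng.normal(0.0, 1.0, n)                  # true covariate, variance 1
y = true_slope * x + rng.normal(0.0, 1.0, n)
x_obs = x + rng.normal(0.0, 1.0, n)          # covariate observed with error

naive_slope = np.polyfit(x_obs, y, 1)[0]     # attenuated towards zero
lam = 1.0 / (1.0 + 1.0)                      # attenuation factor lambda
print(f"naive slope:     {naive_slope:.3f}")        # ~1.0 instead of 2.0
print(f"corrected slope: {naive_slope / lam:.3f}")  # ~2.0
```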
Lin, Yuting; McMahon, Stephen J; Scarpelli, Matthew; Paganetti, Harald; Schuemann, Jan
2014-12-21
Gold nanoparticles (GNPs) have shown potential to be used as a radiosensitizer for radiation therapy. Despite extensive research activity to study GNP radiosensitization using photon beams, only a few studies have been carried out using proton beams. In this work Monte Carlo simulations were used to assess the dose enhancement of GNPs for proton therapy. The enhancement effect was compared between a clinical proton spectrum, a clinical 6 MV photon spectrum, and a kilovoltage photon source similar to those used in many radiobiology lab settings. We showed that the mechanism by which GNPs can lead to dose enhancements in radiation therapy differs when comparing photon and proton radiation. The GNP dose enhancement using protons can be up to a factor of 14 and is independent of proton energy, while the dose enhancement is highly dependent on the photon energy used. For the same amount of energy absorbed in the GNP, interactions with protons, kVp photons and MV photons produce similar doses within several nanometers of the GNP surface, and differences are below 15% for the first 10 nm. However, secondary electrons produced by kilovoltage photons have the longest range in water as compared to protons and MV photons, e.g., they cause a dose enhancement 20 times higher than the one caused by protons 10 μm away from the GNP surface. We conclude that GNPs have the potential to enhance radiation therapy depending on the type of radiation source. Proton therapy can be enhanced significantly only if the GNPs are in close proximity to the biological target.
Forward neutron production at the Fermilab Main Injector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nigmanov, T. S.; Rajaram, D.; Longo, M. J.
2011-01-01
We have measured cross sections for forward neutron production from a variety of targets using proton beams from the Fermilab Main Injector. Measurements were performed for proton beam momenta of 58, 84, and 120 GeV/c. The cross section dependence on the atomic weight (A) of the targets was found to vary as A^α, where α is 0.46 ± 0.06 for a beam momentum of 58 GeV/c and 0.54 ± 0.05 for 120 GeV/c. The cross sections show reasonable agreement with the FLUKA and DPMJET Monte Carlo codes. Comparisons have also been made with the LAQGSM Monte Carlo code.
Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models
NASA Astrophysics Data System (ADS)
Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido
2016-06-01
We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit as kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem into a set of two-level subsystems, we find a natural variable step size that is well defined by the normalization condition of the transition probabilities between the levels. We successfully test the algorithm against known exact solutions for non-equilibrium dynamics and equilibrium thermodynamical properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markov chains, and particularly to short-time situations where the exact evolution is relevant.
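For contrast with the standard approach the authors benchmark against, a rejection-free kinetic Monte Carlo step (in the Bortz-Kalos-Lebowitz style) selects an event in proportion to its rate and advances time by an exponential waiting time drawn from the total rate. It is exactly this distributional assumption that the event-driven algorithm above replaces with a variable step size fixed by the normalization of the transition probabilities. A sketch of the standard step:

```python
import numpy as np

rng = np.random.default_rng(6)

def kmc_step(rates, t):
    """One rejection-free kinetic Monte Carlo step.

    rates : array of transition rates for all currently possible events
    Returns the chosen event index and the advanced time."""
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)   # pick event by rate
    dt = rng.exponential(1.0 / total)                 # Poisson waiting time
    return event, t + dt

rates = np.array([0.2, 1.0, 3.0])    # three competing events (illustrative)
t = 0.0
for _ in range(5):
    event, t = kmc_step(rates, t)
    print(f"t = {t:.3f}, event = {event}")
```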
Monte Carlo verification of radiotherapy treatments with CloudMC.
Miras, Hector; Jiménez, Rubén; Perales, Álvaro; Terrón, José Antonio; Bertolet, Alejandro; Ortiz, Antonio; Macías, José
2018-06-27
A new implementation has been made on CloudMC, a cloud-based platform presented in a previous work, in order to provide services for radiotherapy treatment verification by means of Monte Carlo in a fast, easy and economical way. A description of the architecture of the application and the new developments implemented is presented together with the results of the tests carried out to validate its performance. CloudMC has been developed on the Microsoft Azure cloud. It is based on a map/reduce implementation for distributing Monte Carlo calculations over a dynamic cluster of virtual machines in order to reduce calculation time. CloudMC has been updated with new methods to read and process the information related to radiotherapy treatment verification: the CT image set, treatment plan, structures and dose distribution files in DICOM format. Some tests have been designed in order to determine, for the different tasks, the most suitable type of virtual machine from those available in Azure. Finally, the performance of Monte Carlo verification in CloudMC is studied through three real cases that involve different treatment techniques, linac models and Monte Carlo codes. Considering computational and economic factors, D1_v2 and G1 virtual machines were selected as the default type for the Worker Roles and the Reducer Role, respectively. Calculation times up to 33 min and costs of 16 € were achieved for the verification cases presented when a statistical uncertainty below 2% (2σ) was required. The costs were reduced to 3-6 € when the uncertainty requirement was relaxed to 4%. Advantages like high computational power, scalability, easy access and a pay-per-usage model make Monte Carlo cloud-based solutions, like the one presented in this work, an important step toward solving the long-standing problem of truly introducing Monte Carlo algorithms into the daily routine of the radiotherapy planning process.
NASA Astrophysics Data System (ADS)
Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo
2017-03-01
We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
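For reference, the TEW estimate used in reconstruction (2) interpolates the scatter under the photopeak window from two narrow flanking windows, commonly in the trapezoidal form S = (C_low/w_low + C_up/w_up) × w_peak / 2. The sketch below uses illustrative window widths; actual widths are protocol and camera dependent.

```python
def tew_scatter(counts_low, counts_up, w_low=3.0, w_up=3.0, w_peak=57.0):
    """Triple-energy-window scatter estimate for the photopeak window.

    counts_low/counts_up : counts in the narrow windows flanking the peak
    w_low, w_up, w_peak  : window widths in keV (illustrative values)
    """
    return (counts_low / w_low + counts_up / w_up) * w_peak / 2.0

photopeak_counts = 120_000.0
scatter = tew_scatter(4_000.0, 2_500.0)
print(f"scatter estimate: {scatter:.0f}")
print(f"corrected primary counts: {photopeak_counts - scatter:.0f}")
```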
Finite element model updating using the shadow hybrid Monte Carlo technique
NASA Astrophysics Data System (ADS)
Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.
2015-02-01
Recent research in the field of finite element model updating (FEM) advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers a very important MCMC approach to dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC, designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures: an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
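A minimal HMC transition, as described above (leapfrog molecular dynamics guided by the gradient of the log-density, followed by a Metropolis accept on the Hamiltonian), can be sketched as follows. The SHMC variant additionally evaluates a shadow Hamiltonian and reweights the samples, which is not shown; the step size and trajectory length here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def hmc_step(q, logp, grad_logp, eps=0.1, n_leap=20):
    """One Hybrid Monte Carlo step for a log-density logp with gradient grad_logp."""
    p = rng.normal(size=q.shape)                      # fresh momenta
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_logp(q_new)             # leapfrog half step
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new += eps * grad_logp(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_logp(q_new)             # closing half step
    h_old = -logp(q) + 0.5 * p @ p                    # Hamiltonians
    h_new = -logp(q_new) + 0.5 * p_new @ p_new
    if rng.random() < np.exp(h_old - h_new):          # Metropolis accept
        return q_new
    return q

# Sample a 2D standard Gaussian as a smoke test.
logp = lambda q: -0.5 * q @ q
grad = lambda q: -q
q = np.zeros(2)
samples = [q := hmc_step(q, logp, grad) for _ in range(1000)]
print(f"sample mean: {np.mean(samples, axis=0)}")
```

The sensitivity to eps and n_leap visible even in this toy version is precisely the limitation that motivates the shadow-Hamiltonian modification.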
NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media
NASA Astrophysics Data System (ADS)
Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique
2017-08-01
NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA, so that the code can be run on GPUs, leads to a speed-up of up to two orders of magnitude.
Comparative survey of PAHs incidence in Portuguese traditional meat and blood sausages.
Roseiro, L C; Gomes, A; Patarata, L; Santos, C
2012-06-01
Sixteen polycyclic aromatic hydrocarbons (PAHs) were determined in representative traditional sausages produced in "Trás-os-Montes" and "Alentejo". Light PAHs represented similar overall contents in both regions and showed close decreasing-order patterns (ACY, PHE, FLR and NAP), irrespective of the product type considered. Amongst the carcinogenic/mutagenic PAHs analyzed (PAH8), both regions also had greater contents associated with BaA and CHR, with slightly higher values for the former compound in "Alentejo" and, conversely, for the latter in "Trás-os-Montes". However, their quantitative comparison showed that the general mean total PAH content found in "Trás-os-Montes" was almost 3-fold higher than in similar products from "Alentejo", and this factor was about 8-fold when the PAH8 and PAH4 indicators were compared, with benzo[a]pyrene toxic equivalencies (BaPE) 15 times (total mean toxicity), 34 times (PAH8) and 9 times (PAH4) higher. In general terms, the mean BaP content of all analyzed samples from "Alentejo" was 0.41 μg kg(-1). In contrast, that value in "Trás-os-Montes" reached 3.57 μg kg(-1), with concerning average contents of 5.35, 5.87 and 4.51 μg kg(-1) in Chouriço de Carne, Moura and Salpicão sausages, respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.
Kilinc, Deniz; Demir, Alper
2017-08-01
The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits both in time and frequency domain. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.
NASA Astrophysics Data System (ADS)
Iovine, Raffaella Silvia; Fedele, Lorenzo; Mazzeo, Fabio Carmine; Arienzo, Ilenia; Cavallo, Andrea; Wörner, Gerhard; Orsi, Giovanni; Civetta, Lucia; D'Antonio, Massimo
2017-02-01
Barium diffusion chronometry applied to sanidine phenocrysts from the trachytic Agnano-Monte Spina eruption (˜4.7 ka) constrains the time between reactivation and eruption of magma batches in the Campi Flegrei caldera. Backscattered electron imaging and quantitative electron microprobe measurements on 50 sanidine phenocrysts from representative pumice samples document core-to-rim compositional zoning. We focus on compositional breaks near the crystal rims that record magma mixing processes just prior to eruption. Diffusion times were modeled at a magmatic temperature of 930 °C using profiles based on quantitative BaO point analyses, X-ray scans, and grayscale swath profiles, yielding times ≤60 years between mixing and eruption. Such short timescales are consistent with volcanological and geochronological data that indicate that at least six eruptions occurred in the Agnano-San Vito area during few centuries before the Agnano-Monte Spina eruption. Thus, the short diffusion timescales are similar to time intervals between eruptions. Therefore, the rejuvenation time of magma residing in a shallow reservoir after influx of a new magma batch that triggered the eruption, and thus pre-eruption warning times, may be as short as years to a few decades at Campi Flegrei caldera.
Adaptive time-stepping Monte Carlo integration of Coulomb collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkimaki, Konsta; Hirvijoki, E.; Terava, J.
2017-10-12
Here, we report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell–Jüttner statistics. The implementation is based on the Beliaev–Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space.
Accelerating Monte Carlo simulations with an NVIDIA® graphics processor
NASA Astrophysics Data System (ADS)
Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert
2009-10-01
Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer.
Program summary
Program title: Phoogle-C/Phoogle-G
Catalogue identifier: AEEB_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 51 264
No. of bytes in distributed program, including test data, etc.: 2 238 805
Distribution format: tar.gz
Programming language: C++
Computer: Designed for Intel PCs. Phoogle-G requires an NVIDIA graphics card with support for CUDA 1.1
Operating system: Windows XP
Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures
RAM: 1 GB
Classification: 21.1
External routines: Charles Karney random number library; Microsoft Foundation Class library; NVIDIA CUDA library [1]
Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. Generally, parallel computing can be expensive, but recent advances in consumer-grade graphics cards have opened the possibility of high-performance desktop parallel computing.
Solution method: In this pair of programs we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer-grade graphics card from NVIDIA.
Restrictions: The graphics card implementation uses single-precision floating-point numbers for all calculations. Only photon transport from an isotropic point source is supported. The graphics-card version has no user interface; the simulation parameters must be set in the source code. The desktop version has a simple user interface, but some properties can only be accessed through an ActiveX client (such as Matlab).
Additional comments: The random number library used has an LGPL (http://www.gnu.org/copyleft/lesser.html) licence.
Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium.
References: [1] http://www.nvidia.com/object/cuda_home.html. [2] S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
Nedea, S V; van Steenhoven, A A; Markvoort, A J; Spijker, P; Giordano, D
2014-05-01
The influence of the gas-surface interactions of a dilute gas confined between two parallel walls on heat flux predictions is investigated using a combined Monte Carlo (MC) and molecular dynamics (MD) approach. The accommodation coefficients are computed from the temperatures of incident and reflected molecules in molecular dynamics and used as effective coefficients in Maxwell-like boundary conditions in Monte Carlo simulations. Hydrophobic and hydrophilic wall interactions are studied, and the effect of the gas-surface interaction potential on the heat flux and other characteristic parameters, such as density and temperature, is shown. The heat flux dependence on the accommodation coefficient is shown for different fluid-wall mass ratios. We find that the accommodation coefficient increases considerably as the mass ratio decreases. An effective map of the heat flux as a function of the accommodation coefficient is given, and we show that MC heat flux predictions using Maxwell boundary conditions based on the accommodation coefficient agree well with pure molecular dynamics heat flux predictions. The accommodation coefficients computed for a dilute gas for different gas-wall interaction parameters and mass ratios are transferred to compute heat flux predictions for a dense gas. Comparisons of the heat fluxes derived using explicit MD, MC with Maxwell-like boundary conditions based on the accommodation coefficients, and pure Maxwell boundary conditions are discussed. A map of the heat flux dependence on the accommodation coefficients for a dense gas, and the effective accommodation coefficients for different gas-wall interactions, are given. Finally, this approach is applied to study the gas-surface interactions of argon and xenon molecules on a platinum surface. The derived accommodation coefficients are compared with experimental values.
Slope stability effects of fuel management strategies – inferences from Monte Carlo simulations
R. M. Rice; R. R. Ziemer; S. C. Hankin
1982-01-01
A simple Monte Carlo simulation evaluated the effect of several fire management strategies on soil slip erosion and wildfires. The current condition was compared to (1) a very intensive fuelbreak system without prescribed fires, and (2) prescribed fire at four time intervals with (a) current fuelbreaks and (b) intensive fuelbreaks. The intensive fuelbreak system...
NASA Astrophysics Data System (ADS)
Deperas-Standylo, Joanna; Gudowska-Nowak, Ewa; Ritter, Sylvia
2014-07-01
Cytogenetic data accumulated from experiments with peripheral blood lymphocytes exposed to densely ionizing radiation clearly demonstrate that for particles with linear energy transfer (LET) > 100 keV/μm the derived relative biological effectiveness (RBE) will strongly depend on the time point chosen for the analysis. A reasonable prediction of radiation-induced chromosome damage and its distribution among cells can be achieved by exploiting Monte Carlo methodology along with information about the radius of the penetrating ion track and the LET of the ion beam. In order to examine the relationship between the track structure and the distribution of aberrations induced in human lymphocytes, and to clarify the correlation between delays in cell cycle progression and the aberration burden visible at the first post-irradiation mitosis, we have analyzed chromosome aberrations in lymphocytes exposed to Fe ions with an LET value of 335 keV/μm and formulated a Monte Carlo model which reflects the time delay in mitosis of aberrant cells. Within the model, the frequency distributions of aberrations among cells follow the pattern of the local energy distribution and are well approximated by time-dependent compound Poisson statistics. The cell-division cycle of undamaged and aberrant cells and chromosome aberrations are modelled as a renewal process represented by a random sum of independent and identically distributed random elements, $S_N = \sum_{i=0}^{N} X_i$. Here N stands for the number of particle traversals of the cell nucleus, each leading to a statistically independent formation of $X_i$ aberrations. The parameter N is itself a random variable and reflects the cell cycle delay of heavily damaged cells. The probability distribution of $S_N$ follows a general law whose moment generating function satisfies the relation $\Phi_{S_N} = \Phi_N(\Phi_{X_i})$. Formulation of the Monte Carlo model, which allows one to predict the expected fluxes of aberrant and non-aberrant cells, is based on several inputs: (i) the experimentally measured mitotic index in the population of irradiated cells; (ii) the scored fraction of cells in the first cell cycle; (iii) the estimated average number of particle traversals per cell nucleus. By reconstructing the local dose distribution in the biological target, the relevant amount of lesions induced by ions is estimated from the biological effect induced by photons at the same dose level. Moreover, the total amount of aberrations induced within the entire population has been determined. For each subgroup of intact (non-hit) and aberrant cells the cell-division cycle has been analyzed, correctly reproducing the expected correlation between mitotic delay and the number of aberrations carried by a cell. This observation is of particular importance for the proper estimation of the biological efficiency of ions and for the estimation of health risks associated with radiation exposure.
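The random-sum structure $S_N = \sum_i X_i$ is easy to sample directly. The sketch below draws a compound Poisson distribution of aberrations per cell and checks it against the standard moment identities for random sums; the parameter values are hypothetical, chosen only to illustrate the construction, not fitted to the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)

def aberrations_per_cell(mean_traversals, mean_per_hit, n_cells):
    """Sample S_N = sum_{i=1}^{N} X_i with N ~ Poisson (particle traversals
    per nucleus) and X_i ~ Poisson (aberrations per traversal): a compound
    Poisson distribution, as in the abstract's renewal-process picture."""
    n_hits = rng.poisson(mean_traversals, size=n_cells)
    return np.array([rng.poisson(mean_per_hit, size=n).sum() for n in n_hits])

s = aberrations_per_cell(mean_traversals=2.0, mean_per_hit=1.5, n_cells=50000)
# Random-sum moments: E[S] = E[N]E[X]; Var[S] = E[N]Var[X] + Var[N]E[X]^2.
print(s.mean(), 2.0*1.5)               # ~3.0
print(s.var(), 2.0*1.5 + 2.0*1.5**2)   # ~7.5 for Poisson N and X
```

The over-dispersion (variance larger than the mean) is the hallmark of the compound Poisson pattern that the abstract reports for aberration frequencies.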
Gray: a ray tracing-based Monte Carlo simulator for PET.
Freese, David L; Olcott, Peter D; Buss, Samuel R; Levin, Craig S
2018-05-21
Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a [Formula: see text] speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within [Formula: see text]% when accounting for differences in peak NECR. We also estimate the peak NECR to be [Formula: see text] kcps, or within [Formula: see text]% of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
NASA Astrophysics Data System (ADS)
Chapoutier, Nicolas; Mollier, François; Nolin, Guillaume; Culioli, Matthieu; Mace, Jean-Reynald
2017-09-01
In the context of the rise of Monte Carlo transport calculations for a wide range of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for a large range of radiation transport problems. However, the inherent drawbacks of these codes (laborious input-file creation and long computation times) contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach an efficiency similar to that of other mature engineering disciplines, such as finite element analysis (e.g. structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been achieved. Computation times are drastically reduced compared to a few years ago thanks to the use of massively parallel runs and, above all, the implementation of hybrid variance-reduction techniques. Engineering teams are now able to deliver much more prompt support to nuclear projects dealing with reactors or fuel-cycle facilities, from the conceptual phase to decommissioning.
Sequential Monte Carlo for inference of latent ARMA time-series with innovations correlated in time
NASA Astrophysics Data System (ADS)
Urteaga, Iñigo; Bugallo, Mónica F.; Djurić, Petar M.
2017-12-01
We consider the problem of sequential inference of latent time-series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We characterize such time-series statistically by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next time instant t+1 is used for the implementation of novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time-series with innovations correlated in time, under different assumptions about knowledge of the parameters. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach by comprehensive simulations of the challenging stochastic volatility model.
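As a baseline for the SMC machinery referenced above, the following sketch runs a bootstrap particle filter on the standard stochastic volatility model. It does not reproduce the paper's latent ARMA structure with correlated innovations; the model form and the parameter values ALPHA and SIGMA are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
ALPHA, SIGMA = 0.98, 0.2   # assumed state-transition parameters
T, N = 200, 1000           # time steps, particles

# Simulate stochastic-volatility data: x_t is the latent log-variance,
# y_t = exp(x_t/2) * noise are the observed returns.
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = ALPHA*x[t-1] + SIGMA*rng.normal()
    y[t] = np.exp(x[t]/2.0)*rng.normal()

# Bootstrap particle filter: propagate with the transition density,
# weight with the observation likelihood, resample every step.
particles = rng.normal(0.0, 1.0, N)
est = np.zeros(T)
for t in range(1, T):
    particles = ALPHA*particles + SIGMA*rng.normal(size=N)
    var = np.exp(particles)
    logw = -0.5*(np.log(2*np.pi*var) + y[t]**2/var)
    w = np.exp(logw - logw.max()); w /= w.sum()
    est[t] = np.dot(w, particles)
    particles = particles[rng.choice(N, size=N, p=w)]  # multinomial resampling

print("RMSE of filtered log-volatility:", np.sqrt(np.mean((est - x)**2)))
```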
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, D; O’Connell, D; Lamb, J
Purpose: To demonstrate real-time dose calculation of free-breathing MRI-guided Co-60 treatments, using a motion model and Monte Carlo dose calculation to accurately account for the interplay between irregular breathing motion and an IMRT delivery. Methods: ViewRay Co-60 dose distributions were optimized on ITVs contoured from free-breathing CT images of lung cancer patients. Each treatment plan was separated into 0.25 s segments, accounting for the MLC positions and beam angles at each time point. A voxel-specific motion model derived from multiple fast-helical free-breathing CTs and deformable registration was calculated for each patient. 3D images for every 0.25 s of a simulated treatment were generated in real time, here using a bellows signal as a surrogate to accurately account for breathing irregularities. Monte Carlo dose calculation was performed every 0.25 s of the treatment, with the number of histories in each calculation scaled to give an overall 1% statistical uncertainty. Each dose calculation was deformed back to the reference image using the motion model and accumulated. The static and real-time dose calculations were compared. Results: Image generation was performed in real time at 4 frames per second (GPU). Monte Carlo dose calculation was performed at approximately 1 frame per second (CPU), giving a total calculation time of approximately 30 minutes per treatment. Results show both cold and hot spots in and around the ITV, and increased dose to the contralateral lung as the tumor moves in and out of the beam during treatment. Conclusion: An accurate motion model combined with a fast Monte Carlo dose calculation allows almost real-time dose calculation of a free-breathing treatment. When combined with sagittal 2D-cine-mode MRI during treatment to update the motion model in real time, this will allow the true delivered dose of a treatment to be calculated, providing a useful tool for adaptive planning and for assessing the effectiveness of gated treatments.
Laedermann, Jean-Pascal; Valley, Jean-François; Bulling, Shelley; Bochud, François O
2004-06-01
The detection process used in a commercial dose calibrator was modeled using the GEANT 3 Monte Carlo code. Dose calibrator efficiency for gamma and beta emitters, and the response to monoenergetic photons and electrons was calculated. The model shows that beta emitters below 2.5 MeV deposit energy indirectly in the detector through bremsstrahlung produced in the chamber wall or in the source itself. Higher energy beta emitters (E > 2.5 MeV) deposit energy directly in the chamber sensitive volume, and dose calibrator sensitivity increases abruptly for these radionuclides. The Monte Carlo calculations were compared with gamma and beta emitter measurements. The calculations show that the variation in dose calibrator efficiency with measuring conditions (source volume, container diameter, container wall thickness and material, position of the source within the calibrator) is relatively small and can be considered insignificant for routine measurement applications. However, dose calibrator efficiency depends strongly on the inner-wall thickness of the detector.
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1992-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid-dependent Monte Carlo scheme following J. Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely $\alpha_j + \gamma_j = \alpha_{j-1} + \gamma_{j+1}$. A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although absolute conservation seems impossible for non-uniform flows, the present scheme reduces the error considerably.
High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin
2014-06-01
Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. Advances in computer technology allow the calculation of detailed flux distributions in both space and energy. In most cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous-energy nuclear data has been investigated.
Monte Carlo simulation study of positron generation in ultra-intense laser-solid interactions
NASA Astrophysics Data System (ADS)
Yan, Yonghong; Wu, Yuchi; Zhao, Zongqing; Teng, Jian; Yu, Jinqing; Liu, Dongxiao; Dong, Kegong; Wei, Lai; Fan, Wei; Cao, Leifeng; Yao, Zeen; Gu, Yuqiu
2012-02-01
The Monte Carlo transport code Geant4 has been used to study positron production during the transport of laser-produced hot electrons in solid targets. The dependence of the positron yield on target parameters and hot-electron temperature has been investigated in thick (mm-scale) targets, where only the Bethe-Heitler process is considered. The results show that Au is the best target material, and that an optimal target thickness exists for generating abundant positrons at a given hot-electron temperature. The positron angular distributions and energy spectra for different hot-electron temperatures were studied without considering the sheath field on the back of the target. The effect of the target rear sheath field on positron acceleration was then studied by numerical simulation, including an electrostatic field in the Monte Carlo model. The results show that the positron energy can be enhanced, and quasi-monoenergetic positrons are observed, owing to the effect of the sheath field.
Quantum Monte Carlo calculations of neutron matter with chiral three-body forces
Tews, I.; Gandolfi, Stefano; Gezerlis, A.; ...
2016-02-02
Chiral effective field theory (EFT) enables a systematic description of low-energy hadronic interactions with controlled theoretical uncertainties. For strongly interacting systems, quantum Monte Carlo (QMC) methods provide some of the most accurate solutions, but they require local potentials as input. We have recently constructed local chiral nucleon-nucleon (NN) interactions up to next-to-next-to-leading order (N²LO). Chiral EFT naturally predicts consistent many-body forces. In this paper, we consider the leading chiral three-nucleon (3N) interactions in local form. These are included in auxiliary field diffusion Monte Carlo (AFDMC) simulations. We present results for the equation of state of neutron matter and for the energies and radii of neutron drops. Specifically, we study the regulator dependence at the Hartree-Fock level and in AFDMC and find that present local regulators lead to less repulsion from 3N forces compared to the usual nonlocal regulators.
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.; Rajagopal, R.
2014-12-01
Hydrogeological models that represent flow and transport in subsurface domains are usually large scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing the hydrogeological characteristics of the site. The physical resolution (e.g. the grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either an excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: the time associated with computational costs, the statistical convergence of the model predictions, and the physical errors corresponding to the numerical grid resolution. We optimally allocate computational resources by developing a model for the overall error based on a joint statistical and numerical analysis, and by optimizing this error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified by applying it to several computationally extensive examples. This framework helps hydrogeologists choose the optimum physical and statistical resolutions that minimize the error for a given computational budget. Moreover, the influence of the available computational resources and of the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
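The trade-off described above can be made tangible with a toy error model. In the sketch below, the discretization error scales as A·h^p, the statistical error as B/sqrt(M), and the cost of one realization on grid spacing h as C·h^-3; all constants and exponents are assumptions for illustration, not the paper's calibrated error model.

```python
import numpy as np

# Hypothetical error model (assumed, not the paper's calibrated one).
A, p, Bstat, C = 1.0, 2.0, 5.0, 1e-6
BUDGET = 100.0  # total CPU budget (arbitrary units)

best = None
for h in np.geomspace(0.01, 1.0, 200):
    M = int(BUDGET / (C*h**-3))        # realizations affordable at spacing h
    if M < 2:
        continue
    # Combined error: discretization and statistical parts in quadrature.
    err = np.hypot(A*h**p, Bstat/np.sqrt(M))
    if best is None or err < best[0]:
        best = (err, h, M)

err, h, M = best
print(f"optimal grid spacing h={h:.3g}, realizations M={M}, error={err:.3g}")
```

Refining the grid beyond the optimum starves the Monte Carlo ensemble of realizations, and vice versa, which is exactly the joint dependence the abstract emphasizes.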
Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peplow, Douglas E.; Miller, Thomas Martin; Patton, Bruce W
2013-01-01
The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and due to the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for the design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Zhiming; Radboud University/NIKHEF, NL-6525 ED Nijmegen
We report on an entropy analysis using Ma's coincidence method on π+p and K+p collisions at √s = 22 GeV. A scaling law and additivity properties of Rényi entropies and their charged-particle multiplicity dependence are investigated. The results are compared with those from the PYTHIA Monte Carlo model.
Comparison of structural and least-squares lines for estimating geologic relations
Williams, G.P.; Troutman, B.M.
1990-01-01
Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. The number of data points, the slope and intercept of the true relation, and the variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as the error in X becomes small relative to the error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
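A miniature version of this Monte Carlo comparison is easy to reproduce. The sketch below contrasts OLS with Deming regression, a common structural-analysis estimator for errors-in-variables data with a known error-variance ratio; the true slope, intercept, and error levels are assumptions for illustration, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(4)
TRUE_SLOPE, TRUE_INTERCEPT = 2.0, 1.0
SX, SY = 0.5, 0.5            # error standard deviations in X and Y (assumed)
LAMBDA = (SY/SX)**2          # error-variance ratio used by the structural fit

def one_trial(n=100):
    xt = rng.uniform(0, 10, n)                 # true X values
    x = xt + rng.normal(0, SX, n)              # observed X with error
    y = TRUE_INTERCEPT + TRUE_SLOPE*xt + rng.normal(0, SY, n)
    sxx = np.var(x); syy = np.var(y)
    sxy = np.cov(x, y, bias=True)[0, 1]
    ols = sxy / sxx
    # Deming (structural) slope for a known error-variance ratio LAMBDA:
    deming = (syy - LAMBDA*sxx + np.sqrt((syy - LAMBDA*sxx)**2
              + 4*LAMBDA*sxy**2)) / (2*sxy)
    return ols, deming

trials = np.array([one_trial() for _ in range(2000)])
print("mean OLS slope:   ", trials[:, 0].mean())  # attenuated toward zero
print("mean Deming slope:", trials[:, 1].mean())  # ~= 2, the 'true' relation
```

The run reproduces the abstract's first finding in miniature: with error in X, OLS underestimates the "true" slope while the structural estimator remains essentially unbiased.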
Studies of Transverse Momentum Dependent Parton Distributions and Bessel Weighting
NASA Astrophysics Data System (ADS)
Gamberg, Leonard
2015-04-01
We present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. Advantages of employing Bessel weighting are that transverse momentum weighted asymmetries provide a means to disentangle the convolutions in the cross section in a model independent way. The resulting compact expressions immediately connect to work on evolution equations for transverse momentum dependent parton distribution and fragmentation functions. As a test case, we apply the procedure to studies of the double longitudinal spin asymmetry in SIDIS using a dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations. Bessel weighting provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs. Work is supported by the U.S. Department of Energy under Contract No. DE-FG02-07ER41460.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chyzh, A.; Jaffke, P.; Wu, C. Y.
Prompt γ-ray spectra were measured for the spontaneous fission of 240,242Pu and the neutron-induced fission of 239,241Pu with incident neutron energies ranging from thermal to about 100 keV. Measurements were made using the Detector for Advanced Neutron Capture Experiments (DANCE) array in coincidence with the detection of fission fragments using a parallel-plate avalanche counter. The unfolded prompt fission γ-ray energy spectra can be reproduced reasonably well by a Monte Carlo Hauser–Feshbach statistical model for the neutron-induced fission channel but not for the spontaneous fission channel. However, this entrance-channel dependence of the prompt fission γ-ray emission can be described qualitatively by the model, owing to the very different fission-fragment mass distributions and a lower average fragment spin for spontaneous fission. The description of the measurements and the discussion of the results within the framework of a Monte Carlo Hauser–Feshbach statistical approach are presented.
Monte Carlo modeling the phase diagram of magnets with the Dzyaloshinskii - Moriya interaction
NASA Astrophysics Data System (ADS)
Belemuk, A. M.; Stishov, S. M.
2017-11-01
We use classical Monte Carlo calculations to model the high-pressure behavior of the phase transition in helical magnets. We vary the values of the exchange interaction constant J and the Dzyaloshinskii-Moriya interaction constant D, which is equivalent to changing the spin-spin distances, as occurs in real systems under pressure. The system under study is self-similar at constant D/J, and its properties are defined by the single variable J/T, where T is the temperature. The existence of the first-order phase transition depends critically on the ratio D/J. A variation of J strongly affects the phase transition temperature and the width of the fluctuation region (the "hump"), as follows from the system's self-similarity. The high-pressure behavior of the spin system depends on the evolution of the interaction constants J and D on compression. Our calculations are relevant to the high-pressure phase diagrams of the helical magnets MnSi and Cu2OSeO3.
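The competition between J and D is easy to see in a toy Metropolis simulation. The sketch below anneals a one-dimensional classical Heisenberg chain with a Dzyaloshinskii-Moriya term along x̂ and tracks the average chirality that signals helical winding; the chain geometry, lattice size, and coupling values are assumptions, far simpler than the paper's three-dimensional model.

```python
import numpy as np

rng = np.random.default_rng(5)
N, J, D = 200, 1.0, 0.5   # chain length and couplings (assumed toy values)

def random_spin():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def bond_energy(s1, s2):
    # E = -J S1.S2 - D xhat.(S1 x S2): exchange plus Dzyaloshinskii-Moriya.
    return -J*np.dot(s1, s2) - D*np.cross(s1, s2)[0]

def sweep(spins, T):
    for i in rng.integers(0, N, N):
        old, new = spins[i], random_spin()
        l, r = spins[(i-1) % N], spins[(i+1) % N]
        dE = (bond_energy(l, new) + bond_energy(new, r)
              - bond_energy(l, old) - bond_energy(old, r))
        if dE <= 0 or rng.random() < np.exp(-dE/T):  # Metropolis acceptance
            spins[i] = new

spins = np.array([random_spin() for _ in range(N)])
for T in (2.0, 1.0, 0.5):
    for _ in range(300):
        sweep(spins, T)
    # Chirality <xhat.(S_i x S_{i+1})> grows as the D term wins at low J/T.
    chi = np.mean([np.cross(spins[i], spins[(i+1) % N])[0] for i in range(N)])
    print(f"T={T}: average chirality = {chi:+.3f}")
```

Because the energy only involves the combinations D/J and J/T, rescaling J at fixed D/J merely shifts the temperature scale, which is the self-similarity the abstract invokes.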
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Stueber, Thomas J.; Norris, Mary Jo
1998-01-01
A Monte Carlo computational model has been developed which simulates atomic oxygen attack of protected polymers at defect sites in the protective coatings. The parameters defining how atomic oxygen interacts with polymers and protective coatings, as well as the scattering processes which occur, have been optimized to replicate experimental results observed for protected polyimide Kapton on the Long Duration Exposure Facility (LDEF) mission. Computational prediction of atomic oxygen undercutting at defect sites in protective coatings was investigated for various arrival energies. The predicted energy dependence of atomic oxygen undercutting enables one to estimate the mass loss that would occur in low Earth orbit from lower-energy ground-laboratory atomic oxygen beam systems. Results of the computational model's predictions of undercut cavity size as a function of energy and defect size are presented to provide insight into the expected in-space mass loss of protected polymers with protective coating defects, based on lower-energy ground-laboratory testing.
Dose control for noncontact laser coagulation of tissue
NASA Astrophysics Data System (ADS)
Roggan, Andre; Albrecht, Hansjoerg; Bocher, Thomas; Rygiel, Reiner; Winter, Harald; Mueller, Gerhard J.
1995-01-01
Nd:YAG lasers emitting at 1064 nm are often used for the coagulation of tissue in a non-contact mode, e.g. in the treatment of verrucae, endometriosis, tumor coagulation and hemostasis. During this process an uncontrolled temperature rise of the irradiated area leads to vaporization and, finally, to carbonization of the tissue surface. To prevent this, a dose-controlled system was developed using on-line regulation of the output laser power. The change in the backscattered intensity (remission) of the primary beam was used as a dose-dependent feedback parameter. Its temperature dependence was determined with a double integrating sphere system and Monte Carlo simulations. The remission of the tissue was calculated in slab geometry from diffusion theory and Monte Carlo simulations. The laser control was realized with a PD circuit and an A/D converter, enabling direct connection to the internal bus of the laser system. Preliminary studies with various tissues demonstrated the practicability of the system.
Computing thermal Wigner densities with the phase integration method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beutier, J.; Borgis, D.; Vuilleumier, R.
2014-08-28
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
A model for simulating random atmospheres as a function of latitude, season, and time
NASA Technical Reports Server (NTRS)
Campbell, J. W.
1977-01-01
An empirical stochastic computer model was developed with the capability of generating random thermodynamic profiles of the atmosphere below an altitude of 99 km which are characteristic of any given season, latitude, and time of day. Samples of temperature, density, and pressure profiles generated by the model are statistically similar to measured profiles in a data base of over 6000 rocket and high-altitude atmospheric soundings; that is, means and standard deviations of modeled profiles and their vertical gradients are in close agreement with data. Model-generated samples can be used for Monte Carlo simulations of aircraft or spacecraft trajectories to predict or account for the effects on a vehicle's performance of atmospheric variability. Other potential uses for the model are in simulating pollutant dispersion patterns, variations in sound propagation, and other phenomena which are dependent on atmospheric properties, and in developing data-reduction software for satellite monitoring systems.
Proteins at the air-water interface in a lattice model
NASA Astrophysics Data System (ADS)
Zhao, Yani; Cieplak, Marek
2018-03-01
We construct a lattice protein version of the hydrophobic-polar model to study the effects of the air-water interface on the protein and on an interfacial layer formed through aggregation of many proteins. The basic unit of the model is a 14-mer that is known to have a unique ground state in three dimensions. The equilibrium and kinetic properties of the systems with and without the interface are studied through a Monte Carlo process. We find that the proteins at high dilution can be pinned and depinned many times from the air-water interface. When pinned, the proteins undergo deformation. The staying time depends on the strength of the coupling to the interface. For dense protein systems, we observe glassy effects. Thus, the lattice model yields results which are similar to those obtained through molecular dynamics in off-lattice models. In addition, we study dynamical effects induced by local temperature gradients in protein films.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tobin, Stephen J.; Lundkvist, Niklas; Goodsell, Alison V.
In this study, Monte Carlo simulations were performed for the differential die-away (DDA) technique to analyse the time-dependent behaviour of the neutron population in fresh and spent nuclear fuel assemblies as part of the Next Generation Safeguards Initiative Spent Fuel (NGSI-SF) Project. Simulations were performed to investigate both a possibly portable as well as a permanent DDA instrument. Taking advantage of a custom-made modification to the MCNPX code, the variation in the neutron population, simultaneously in time and space, was examined. The motivation for this research was to improve the design of the DDA instrument, as it is being considered for possible deployment at the Central Storage of Spent Nuclear Fuel and Encapsulation Plant in Sweden (Clab), as well as to assist in the interpretation of both the simulated and measured signals.
Simulation of 'hitch-hiking' genealogies.
Slade, P F
2001-01-01
An ancestral influence graph is derived, an analogue of the coalescent and a composite of Griffiths' (1991) two-locus ancestral graph and Krone and Neuhauser's (1997) ancestral selection graph. This generalizes their use of branching-coalescing random graphs so as to incorporate both selection and recombination into gene genealogies. Qualitative understanding of a 'hitch-hiking' effect on genealogies is pursued via diagrammatic representation of the genealogical process in a two-locus, two-allele haploid model. Extending the simulation technique of Griffiths and Tavaré (1996), computational estimates of the expected times to the most recent common ancestor of samples of n genes under recombination and selection in two-locus, two-allele haploid and diploid models are presented. Such times are conditional on the sample configuration. Monte Carlo simulations show that 'hitch-hiking' is a subtle effect that alters the conditional expected depth of the genealogy at the linked neutral locus depending on a mutation-selection-recombination balance.
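For orientation, the neutral baseline against which such hitch-hiking effects are measured is the Kingman coalescent, which is straightforward to simulate. The sketch below estimates the expected time to the most recent common ancestor of a sample of n genes without selection or recombination, so it omits everything that makes the paper's conditional genealogies difficult.

```python
import numpy as np

rng = np.random.default_rng(6)

def tmrca(n):
    """Time to the most recent common ancestor of n genes under the neutral
    Kingman coalescent (time in units of N_e generations): while k lineages
    remain, the waiting time to the next coalescence is Exp with rate
    k*(k-1)/2, after which k decreases by one."""
    t, k = 0.0, n
    while k > 1:
        t += rng.exponential(2.0 / (k*(k - 1)))
        k -= 1
    return t

samples = [tmrca(10) for _ in range(100000)]
print(np.mean(samples), 2*(1 - 1/10))  # analytic E[T_MRCA] = 2(1 - 1/n)
```

Selection and recombination perturb this baseline depth, which is precisely the subtle shift the abstract's simulations quantify.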
Simulation of selected genealogies.
Slade, P F
2000-02-01
Algorithms for generating genealogies with selection, conditional on the sample configuration of n genes in one-locus, two-allele haploid and diploid models, are presented. Enhanced integro-recursions using the ancestral selection graph, introduced by S. M. Krone and C. Neuhauser (1997, Theor. Popul. Biol. 51, 210-237), which is the non-neutral analogue of the coalescent, enable accessible simulation of the embedded genealogy. A Monte Carlo simulation scheme based on that of R. C. Griffiths and S. Tavaré (1996, Math. Comput. Modelling 23, 141-158) is adopted to consider the estimation of ancestral times under selection. Simulations show that selection alters the expected depth of the conditional ancestral trees, depending on a mutation-selection balance. As a consequence, branch lengths are shown to be an ineffective criterion for detecting the presence of selection. Several examples are given which quantify the effects of selection on the conditional expected time to the most recent common ancestor. Copyright 2000 Academic Press.
Extinction phase transitions in a model of ecological and evolutionary dynamics
NASA Astrophysics Data System (ADS)
Barghathi, Hatem; Tackkett, Skye; Vojta, Thomas
2017-07-01
We study the non-equilibrium phase transition between survival and extinction of spatially extended biological populations using an agent-based model. We especially focus on the effects of global temporal fluctuations of the environmental conditions, i.e., temporal disorder. Using large-scale Monte Carlo simulations of up to 3 × 10⁷ organisms and 10⁵ generations, we find the extinction transition in time-independent environments to be in the well-known directed percolation universality class. In contrast, temporal disorder leads to a highly unusual extinction transition characterized by logarithmically slow population decay and enormous fluctuations even for large populations. The simulations provide strong evidence for this transition being of the exotic infinite-noise type, as recently predicted by a renormalization group theory. The transition is accompanied by temporal Griffiths phases featuring a power-law dependence of the lifetime on the population size.
Theory of time-averaged neutral dynamics with environmental stochasticity
NASA Astrophysics Data System (ADS)
Danino, Matan; Shnerb, Nadav M.
2018-04-01
Competition is the main driver of population dynamics, which shapes the genetic composition of populations and the assembly of ecological communities. Neutral models assume that all the individuals are equivalent and that the dynamics is governed by demographic (shot) noise, with a steady state species abundance distribution (SAD) that reflects a mutation-extinction equilibrium. Recently, many empirical and theoretical studies emphasized the importance of environmental variations that affect coherently the relative fitness of entire populations. Here we consider two generic time-averaged neutral models; in both the relative fitness of each species fluctuates independently in time but its mean is zero. The first (model A) describes a system with local competition and linear fitness dependence of the birth-death rates, while in the second (model B) the competition is global and the fitness dependence is nonlinear. Due to this nonlinearity, model B admits a noise-induced stabilization mechanism that facilitates the invasion of new mutants. A self-consistent mean-field approach is used to reduce the multispecies problem to two-species dynamics, and the large-N asymptotics of the emerging set of Fokker-Planck equations is presented and solved. Our analytic expressions are shown to fit the SADs obtained from extensive Monte Carlo simulations and from numerical solutions of the corresponding master equations.
NASA Astrophysics Data System (ADS)
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
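The GLUE procedure mentioned above follows a simple recipe: sample parameters from a broad prior, score each sample with an informal likelihood, and keep the "behavioural" sets above a threshold. The sketch below applies it to a synthetic first-order inactivation data set; the rate constant, noise level, likelihood measure, and threshold are assumptions for illustration, not the study's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data: log10 reduction over time from a constant-rate model
# plus observation noise (k_true is a hypothetical rate, 1/day).
t_obs = np.arange(0, 61, 5, dtype=float)
k_true = 0.05
c_obs = -k_true*t_obs/np.log(10) + rng.normal(0, 0.05, t_obs.size)

def model(k, t):
    return -k*t/np.log(10)   # log10 titer under constant inactivation

# GLUE: Monte Carlo sampling of the prior, informal likelihood scoring,
# and retention of the behavioural parameter sets.
k_samples = rng.uniform(0.0, 0.2, 20000)
sse = np.array([np.sum((model(k, t_obs) - c_obs)**2) for k in k_samples])
likelihood = np.exp(-sse / sse.min())         # informal likelihood measure
keep = likelihood > 0.01                      # behavioural threshold
w = likelihood[keep] / likelihood[keep].sum()
k_b = k_samples[keep]
order = np.argsort(k_b)
cdf = np.cumsum(w[order])
lo, hi = k_b[order][np.searchsorted(cdf, [0.05, 0.95])]
print(f"k ~ {np.dot(w, k_b):.4f}, 90% GLUE band [{lo:.4f}, {hi:.4f}]")
```

The width of the resulting band is the parameter uncertainty that the study compares against traditional least-squares confidence estimates.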
Atomic scale modeling of defect production and microstructure evolution in irradiated metals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diaz de la Rubia, T.; Soneda, N.; Shimomura, Y.
1997-04-01
Irradiation effects in materials depend in a complex way on the form of the as-produced primary damage state and on its spatial and temporal evolution. Thus, while collision cascades produce defects on a time scale of tens of picoseconds, diffusion occurs over much longer time scales, of the order of seconds, and microstructure evolution over even longer time scales. In this report the authors present work aimed at describing damage production and evolution in metals across all the relevant time and length scales. They discuss results of molecular dynamics simulations of displacement cascades in Fe and V. They show that interstitial clusters are produced in cascades above 5 keV, but not vacancy clusters. Next, they discuss the development of a kinetic Monte Carlo model that enables calculations of damage evolution over much longer time scales (thousands of seconds) than the picosecond lifetime of the cascade. They demonstrate the applicability of the method by presenting predictions of the fraction of freely migrating defects in α-Fe during irradiation at 600 K.
Path integral Monte Carlo ground state approach: formalism, implementation, and applications
NASA Astrophysics Data System (ADS)
Yan, Yangqian; Blume, D.
2017-11-01
Monte Carlo techniques have played an important role in understanding strongly correlated systems across many areas of physics, covering a wide range of energy and length scales. Among the many Monte Carlo methods applicable to quantum mechanical systems, the path integral Monte Carlo approach with its variants has been employed widely. Since semi-classical or classical approaches will not be discussed in this review, path integral based approaches can for our purposes be divided into two categories: approaches applicable to quantum mechanical systems at zero temperature and approaches applicable to quantum mechanical systems at finite temperature. While these two approaches are related to each other, the underlying formulation and aspects of the algorithm differ. This paper reviews the path integral Monte Carlo ground state (PIGS) approach, which solves the time-independent Schrödinger equation. Specifically, the PIGS approach allows for the determination of expectation values with respect to eigenstates of the few- or many-body Schrödinger equation, provided the system Hamiltonian is known. The theoretical framework behind the PIGS algorithm, implementation details, and sample applications for fermionic systems are presented.
Hoefling, Martin; Lima, Nicola; Haenni, Dominik; Seidel, Claus A. M.; Schuler, Benjamin; Grubmüller, Helmut
2011-01-01
Förster Resonance Energy Transfer (FRET) experiments probe molecular distances via distance dependent energy transfer from an excited donor dye to an acceptor dye. Single molecule experiments not only probe average distances, but also distance distributions or even fluctuations, and thus provide a powerful tool to study biomolecular structure and dynamics. However, the measured energy transfer efficiency depends not only on the distance between the dyes, but also on their mutual orientation, which is typically inaccessible to experiments. Thus, assumptions on the orientation distributions and averages are usually made, limiting the accuracy of the distance distributions extracted from FRET experiments. Here, we demonstrate that by combining single molecule FRET experiments with the mutual dye orientation statistics obtained from Molecular Dynamics (MD) simulations, improved estimates of distances and distributions are obtained. From the simulated time-dependent mutual orientations, FRET efficiencies are calculated and the full statistics of individual photon absorption, energy transfer, and photon emission events is obtained from subsequent Monte Carlo (MC) simulations of the FRET kinetics. All recorded emission events are collected to bursts from which efficiency distributions are calculated in close resemblance to the actual FRET experiment, taking shot noise fully into account. Using polyproline chains with attached Alexa 488 and Alexa 594 dyes as a test system, we demonstrate the feasibility of this approach by direct comparison to experimental data. We identified cis-isomers and different static local environments as sources of the experimentally observed heterogeneity. Reconstructions of distance distributions from experimental data at different levels of theory demonstrate how the respective underlying assumptions and approximations affect the obtained accuracy. Our results show that dye fluctuations obtained from MD simulations, combined with MC single photon kinetics, provide a versatile tool to improve the accuracy of distance distributions that can be extracted from measured single molecule FRET efficiencies. PMID:21629703
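The Monte Carlo photon-kinetics step described above can be illustrated with a much-reduced model: draw an inter-dye distance, convert it to a transfer efficiency, and binomially sample which photons end up on the acceptor, which reproduces shot-noise broadening of the burst histogram. The Förster radius, distance distribution, and photons-per-burst below are assumed illustrative values, and the sketch ignores the orientation statistics that the paper takes from MD.

```python
import numpy as np

rng = np.random.default_rng(9)
R0 = 5.4   # assumed Foerster radius in nm (typical for Alexa 488/594)

def burst_efficiency(r, n_photons=50):
    """One simulated burst: each detected photon is an acceptor photon with
    probability E(r) = 1/(1 + (r/R0)^6); the measured burst efficiency is
    the acceptor fraction and therefore carries binomial shot noise."""
    e = 1.0 / (1.0 + (r/R0)**6)
    n_acceptor = rng.binomial(n_photons, e)
    return n_acceptor / n_photons

# A fluctuating inter-dye distance broadens the efficiency histogram
# beyond pure shot noise:
r_samples = rng.normal(5.0, 0.5, 20000).clip(1.0, None)
effs = np.array([burst_efficiency(r) for r in r_samples])
print(f"mean E = {effs.mean():.3f}, std = {effs.std():.3f}")
```

Comparing the simulated histogram width with the shot-noise-only expectation is how such analyses separate genuine distance heterogeneity from counting statistics.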
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakalyar, D; Feng, W; McKenney, S
Purpose: The radiation dose absorbed at a particular radius ρ within the central plane of a long cylinder following a CT scan is a function of the scan length L and the cylinder radius R, along with the kVp and the cylinder composition. An analytic function was created that not only expresses these dependencies but is integrable in closed form over the area of the central plane. This feature facilitates explicit calculation of the planar average dose. The "approach to equilibrium" h(L) discussed in the TG111 report is seamlessly included in this function. Methods: For a cylindrically symmetric radiation field, Monte Carlo calculations were performed to compute the dose distribution in long polyethylene cylinders for scans of varying L, for cylinders ranging in radius from 5 to 20 cm. The function was developed from the resultant Monte Carlo data. In addition, the function was successfully fit to data taken from measurements on the 30 cm diameter ICRU/TG200 phantom using a real-time dosimeter. Results: Symmetry and continuity dictate a local extremum at the center, which is a minimum for the larger sizes. There are competing effects as the beam penetrates the cylinder from the outside: attenuation, resulting in a decrease, and scatter, abruptly increasing at the circumference. This competition may result in an absolute maximum between the center and the outer edge, leading to a "gull wing" shape for the radial dependence. For the smallest cylinders, scatter may dominate to the extent that there is an absolute maximum at the center. Conclusion: An integrable, analytic function has been developed that provides the radial dependence of dose for the central plane of a scan of length L for cylinders of varying diameter. Equivalently, we have developed h(L,R,ρ).
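The planar average the abstract refers to is the area-weighted integral of the radial profile, D_avg = (2/R²) ∫₀ᴿ D(ρ) ρ dρ. The sketch below evaluates it numerically for a hypothetical "gull wing"-like profile; the profile itself is an assumption for illustration, not the paper's fitted h(L, R, ρ).

```python
import numpy as np

def planar_average(dose_radial, R, n=4000):
    """Area-weighted average of a radial dose profile D(rho) over the
    central plane of a cylinder of radius R:
        D_avg = (2/R^2) * integral_0^R D(rho) * rho d(rho),
    evaluated with a simple trapezoidal rule."""
    rho = np.linspace(0.0, R, n)
    f = dose_radial(rho)*rho
    integral = np.sum(0.5*(f[1:] + f[:-1])*np.diff(rho))
    return 2.0*integral/R**2

# Hypothetical profile (illustrative only): attenuation of the primary
# beam from the surface plus a broad scatter contribution.
R = 15.0  # cm
dose = lambda rho: np.exp(-(R - rho)/8.0) + 0.35*np.exp(-rho**2/60.0)

print(f"planar-average dose (relative units): {planar_average(dose, R):.4f}")
```

An analytic function that is integrable in closed form removes even this quadrature step, which is the convenience the abstract highlights.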
NASA Astrophysics Data System (ADS)
Spezi, Emiliano; Leal, Antonio
2013-04-01
The Third European Workshop on Monte Carlo Treatment Planning (MCTP2012) was held from 15-18 May, 2012 in Seville, Spain. The event was organized by the Universidad de Sevilla with the support of the European Workgroup on Monte Carlo Treatment Planning (EWG-MCTP). MCTP2012 followed two successful meetings, one held in Ghent (Belgium) in 2006 (Reynaert 2007) and one in Cardiff (UK) in 2009 (Spezi 2010). The recurrence of these workshops, together with the successful events held in parallel by McGill University in Montreal (Seuntjens et al 2012), shows consolidated interest from the scientific community in Monte Carlo (MC) treatment planning. The workshop was attended by a total of 90 participants, mainly coming from a medical physics background. A total of 48 oral presentations and 15 posters were delivered in specific scientific sessions including dosimetry, code development, imaging, modelling of photon and electron radiation transport, external beam radiation therapy, nuclear medicine, brachytherapy and hadrontherapy. A copy of the programme is available on the workshop's website (www.mctp2012.com). In this special section of Physics in Medicine and Biology we report six papers that were selected following the journal's rigorous peer review procedure. These papers provide a good cross-section of the areas of application of MC in treatment planning that were discussed at MCTP2012. Czarnecki and Zink (2013) and Wagner et al (2013) present the results of their work in small-field dosimetry. Czarnecki and Zink (2013) studied field-size- and detector-dependent correction factors for diodes and ion chambers within a clinical 6 MV photon beam generated by a Siemens linear accelerator. Their modelling work, based on the BEAMnrc/EGSnrc codes and experimental measurements, revealed that unshielded diodes were the best choice for small-field dosimetry because of their independence from the electron beam spot size and a correction factor close to unity. Wagner et al (2013) investigated the recombination effect in liquid ionization chambers for stereotactic radiotherapy, a field of increasing importance in external beam radiotherapy. They modelled both the radiation source (Cyberknife unit) and the detector with the BEAMnrc/EGSnrc codes and quantified the dependence of the response of this type of detector on factors such as the volume effect and the electrode. They also recommended that these dependences be accounted for in measurements involving small fields. In the field of external beam radiotherapy, Chakarova et al (2013) showed how total body irradiation (TBI) could be improved by simulating patient treatments with MC. In particular, BEAMnrc/EGSnrc-based simulations highlighted the importance of optimizing individual compensators for TBI treatments. In the same area of application, Mairani et al (2013) reported on a new tool for treatment planning in proton therapy based on the FLUKA MC code. The software, used to model both the proton therapy beam and the patient anatomy, supports single-field and multiple-field optimization and can be used to optimize physical and relative biological effectiveness (RBE)-weighted dose distributions, using both constant and variable RBE models. In the field of nuclear medicine, Marcatili et al (2013) presented RAYDOSE, a Geant4-based code specifically developed for applications in molecular radiotherapy (MRT).
RAYDOSE has been designed to work in MRT trials using sequential positron emission tomography (PET) or single-photon emission computed tomography (SPECT) imaging to model patient-specific time-dependent metabolic uptake and to calculate the total 3D dose distribution. The code was validated through experimental measurements in homogeneous and heterogeneous phantoms. Finally, in the field of code development, Miras et al (2013) reported on CloudMC, a Windows Azure-based application for the parallelization of MC calculations in a dynamic cluster environment. Although the performance of CloudMC has been tested with the PENELOPE MC code, the authors report that the software has been designed to be independent of the type of MC code, provided that the simulation meets a number of operational criteria. We wish to thank Elekta/CMS Inc., the University of Seville, the Junta of Andalusia and the European Regional Development Fund for their financial support. We would like also to acknowledge the members of EWG-MCTP for their help in peer-reviewing all the abstracts, and all the invited speakers who kindly agreed to deliver keynote presentations in their area of expertise. A final word of thanks to our colleagues who worked on the reviewing process of the papers selected for this special section and to the IOP Publishing staff who made it possible. MCTP2012 was accredited by the European Federation of Organisations for Medical Physics as a CPD event for medical physicists. Emiliano Spezi and Antonio Leal, Guest Editors. References: Chakarova R, Müntzing K, Krantz M, Hedin E and Hertzman S 2013 Monte Carlo optimization of total body irradiation in a phantom and patient geometry Phys. Med. Biol. 58 2461-69; Czarnecki D and Zink K 2013 Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields Phys. Med. Biol. 58 2431-44; Mairani A, Böhlen T T, Schiavi A, Tessonnier T, Molinelli S, Brons S, Battistoni G, Parodi K and Patera V 2013 A Monte Carlo-based treatment planning tool for proton therapy Phys. Med. Biol. 58 2471-90; Marcatili S, Pettinato C, Daniels S, Lewis G, Edwards P, Fanti S and Spezi E 2013 Development and validation of RAYDOSE: a Geant4 based application for molecular radiotherapy Phys. Med. Biol. 58 2491-508; Miras H, Jiménez R, Miras C and Gomà C 2013 CloudMC: a cloud computing application for Monte Carlo simulation Phys. Med. Biol. 58 N125-33; Reynaert N 2007 First European Workshop on Monte Carlo Treatment Planning J. Phys.: Conf. Ser. 74 011001; Seuntjens J, Beaulieu L, El Naqa I and Després P 2012 Special section: Selected papers from the Fourth International Workshop on Recent Advances in Monte Carlo Techniques for Radiation Therapy Phys. Med. Biol. 57 (11) E01; Spezi E 2010 Special section: Selected papers from the Second European Workshop on Monte Carlo Treatment Planning (MCTP2009) Phys. Med. Biol. 55 (16) E01; Wagner A, Crop F, Lacornerie T, Vandevelde F and Reynaert N 2013 Use of a liquid ionization chamber for stereotactic radiotherapy dosimetry Phys. Med. Biol. 58 2445-59.
The diffusion of a Ga atom on GaAs(001)β2(2 × 4): Local superbasin kinetic Monte Carlo
NASA Astrophysics Data System (ADS)
Lin, Yangzheng; Fichthorn, Kristen A.
2017-10-01
We use first-principles density-functional theory to characterize the binding sites and diffusion mechanisms for a Ga adatom on the GaAs(001)β2(2 × 4) surface. Diffusion in this system is a complex process involving eleven unique binding sites and sixteen different hops between neighboring binding sites. Among the binding sites, we can identify four different superbasins such that the motion between binding sites within a superbasin is much faster than hops exiting the superbasin. To describe diffusion, we use a recently developed local superbasin kinetic Monte Carlo (LSKMC) method, which accelerates a conventional kinetic Monte Carlo (KMC) simulation by describing the superbasins as absorbing Markov chains. We find that LSKMC is up to 4300 times faster than KMC for the conditions probed in this study. We characterize the distribution of exit times from the superbasins, find that these are sometimes, but not always, exponential, and characterize the conditions under which the superbasin exit-time distribution should be exponential. We demonstrate that LSKMC simulations assuming an exponential superbasin exit-time distribution yield the same diffusion coefficients as conventional KMC.
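The absorbing-Markov-chain bookkeeping that underlies superbasin acceleration of this kind can be illustrated in a few lines. The sketch below is a minimal, hypothetical example, not the paper's implementation: the three-state basin, its internal hop rates and its exit rates are invented for illustration. It embeds the continuous-time hops in a discrete jump chain, forms the fundamental matrix N = (I − T)⁻¹, and recovers the mean time to leave the basin from each starting state.
```python
import numpy as np

# Hypothetical 3-state superbasin: fast internal hops, slow exits (rates in s^-1).
internal = np.array([[0.0, 5e3, 1e3],
                     [4e3, 0.0, 2e3],
                     [1e3, 3e3, 0.0]])   # internal[i, j] = rate of hop i -> j
exit_rate = np.array([1.0, 0.5, 2.0])    # rate of escaping the basin from state i

total = internal.sum(axis=1) + exit_rate  # total escape rate from each state
T = internal / total[:, None]             # transient-to-transient jump probabilities
N = np.linalg.inv(np.eye(3) - T)          # fundamental matrix: expected visit counts

# Mean time to exit the basin from each starting state: expected number of
# visits to each internal state (rows of N) times the mean dwell time per visit.
mean_exit_time = N @ (1.0 / total)
print(mean_exit_time)   # dominated by the slow exit rates, as expected
```
Because the internal hops are orders of magnitude faster than the exits, sampling the exit directly from these statistics replaces thousands of wasted intra-basin KMC moves, which is the source of the speed-up reported above.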
Ponomarev, Artem L; George, Kerry; Cucinotta, Francis A
2014-03-01
We have developed a model that can simulate the yield of radiation-induced chromosomal aberrations (CAs) and unrejoined chromosome breaks in normal and repair-deficient cells. The model predicts the kinetics of chromosomal aberration formation after exposure in the G₀/G₁ phase of the cell cycle to either low- or high-LET radiation. A previously formulated model based on a stochastic Monte Carlo approach was updated to consider the time dependence of DNA double-strand break (DSB) repair (proper or improper), and different cell types were assigned different kinetics of DSB repair. The distribution of the DSB free ends was derived from a mechanistic model that takes into account the structure of chromatin and DSB clustering from high-LET radiation. The kinetics of chromosomal aberration formation were derived from experimental data on DSB repair kinetics in normal and repair-deficient cell lines. We assessed different types of chromosomal aberrations with a focus on simple and complex exchanges, and predicted the DSB rejoining kinetics and misrepair probabilities for different cell types. The results identify major cell-dependent factors, such as a greater yield of chromosome misrepair in ataxia telangiectasia (AT) cells and slower rejoining in Nijmegen breakage syndrome (NBS) cells relative to the wild type. The model's predictions suggest that two mechanisms could exist for the inefficiency of DSB repair in AT and NBS cells: one that depends on the overall speed of joining (either proper or improper) of DNA broken ends, and another that depends on geometric factors, such as the Euclidean distance between DNA broken ends, which influences the relative frequency of misrepair.
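As a toy illustration of the kind of rejoining kinetics such a model fits (not the authors' actual implementation), the sketch below draws per-break repair times from a two-component first-order mixture; all rate constants, the slow fraction, and the misrepair probability are invented. Lowering the rates mimics an NBS-like slow-rejoining cell line, while raising the misrepair probability mimics an AT-like one.
```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dsb_repair(n_breaks, k_fast, k_slow, frac_slow, p_misrepair, t_obs):
    """Toy two-component DSB rejoining model: each break repairs with a fast
    or slow first-order rate; a fixed fraction of rejoinings are misrepairs
    (the precursors of exchange-type aberrations)."""
    slow = rng.random(n_breaks) < frac_slow          # assign kinetic class
    rates = np.where(slow, k_slow, k_fast)
    t_repair = rng.exponential(1.0 / rates)          # per-break repair time
    repaired = t_repair <= t_obs                     # rejoined by time t_obs
    misrepaired = repaired & (rng.random(n_breaks) < p_misrepair)
    return repaired.sum(), misrepaired.sum(), (~repaired).sum()

# (rejoined, misrejoined, still open) after 4 h for an invented parameter set
print(simulate_dsb_repair(1000, k_fast=2.0, k_slow=0.2, frac_slow=0.3,
                          p_misrepair=0.05, t_obs=4.0))
```
The two cell-dependent mechanisms discussed above map directly onto these knobs: overall joining speed corresponds to the rate constants, while geometric proximity effects would modulate p_misrepair per break rather than leaving it constant.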
Responses of many-species predator-prey systems to perturbations
NASA Astrophysics Data System (ADS)
Esmaily, Shadi; Pleimling, Michel
2015-03-01
We study the responses of many-species predator-prey systems, both in the well-mixed case and on a two-dimensional lattice, to permanent and transient perturbations. In the case of a weak transient perturbation the system returns to the original steady state, whereas a permanent perturbation pushes the system into a new steady state. Using Monte Carlo simulations, we monitor the approach to stationarity after a perturbation through a variety of quantities, such as time-dependent particle densities and correlation functions. Different types of perturbations are studied, ranging from a change in reaction rates to the injection of additional individuals into the system, the latter perturbation mimicking immigration. This work is supported by the US National Science Foundation through Grant DMR-1205309.
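A minimal sketch of this style of simulation, reduced to two species, is given below; the update rules and all rates are generic lattice Lotka-Volterra choices, not the specific many-species model of this work. A transient perturbation is applied halfway through by injecting extra predators, after which the time-dependent densities can be tracked as the system relaxes.
```python
import numpy as np

rng = np.random.default_rng(1)
L = 64
EMPTY, PREY, PRED = 0, 1, 2
lat = rng.choice([EMPTY, PREY, PRED], size=(L, L), p=[0.5, 0.3, 0.2])
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def mc_step(lat, birth=0.6, predation=0.6, death=0.05):
    """One Monte Carlo step = L*L random sequential updates of the
    standard two-species lattice Lotka-Volterra rules."""
    for _ in range(lat.size):
        x, y = rng.integers(L, size=2)
        dx, dy = MOVES[rng.integers(4)]
        nx, ny = (x + dx) % L, (y + dy) % L            # periodic boundaries
        if lat[x, y] == PREY and lat[nx, ny] == EMPTY and rng.random() < birth:
            lat[nx, ny] = PREY                          # prey reproduction
        elif lat[x, y] == PRED:
            if lat[nx, ny] == PREY and rng.random() < predation:
                lat[nx, ny] = PRED                      # predation + offspring
            elif rng.random() < death:
                lat[x, y] = EMPTY                       # predator death

for t in range(200):
    if t == 100:                                        # transient perturbation:
        lat[rng.random((L, L)) < 0.02] = PRED           # inject extra predators
    mc_step(lat)
    densities = [(lat == s).mean() for s in (PREY, PRED)]  # relaxation observable
```
In this reduced setting the injected predators transiently deplete the prey density before both populations relax back, which is the qualitative signature of a weak transient perturbation described above.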
Effects of changing canopy directional reflectance on feature selection
NASA Technical Reports Server (NTRS)
Smith, J. A.; Oliver, R. E.; Kilpela, O. E.
1973-01-01
The use of a Monte Carlo model for generating sample directional reflectance data for two simplified target canopies at two different solar positions is reported. Successive iterations through the model permit the calculation of a mean vector and covariance matrix for canopy reflectance for varied sensor view angles. These data may then be used to calculate the divergence between the target distributions for various wavelength combinations and for these view angles. Results of a feature selection analysis indicate that different sets of wavelengths are optimum for target discrimination depending on sensor view angle and that the targets may be more easily discriminated for some scan angles than others. The time-varying behavior of these results is also pointed out.
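The divergence used here is presumably the classic Gaussian class-separability measure of remote sensing feature selection; the sketch below computes it from a mean vector and covariance matrix per target, with the two-band statistics invented purely for illustration.
```python
import numpy as np

def gaussian_divergence(m1, c1, m2, c2):
    """Divergence between two Gaussian class distributions, the standard
    separability measure used for feature (wavelength) selection:
    0.5*tr[(C1-C2)(C2^-1 - C1^-1)] + 0.5*tr[(C1^-1 + C2^-1) dm dm^T]."""
    c1i, c2i = np.linalg.inv(c1), np.linalg.inv(c2)
    dm = (m1 - m2)[:, None]                       # mean-difference column vector
    term_cov = 0.5 * np.trace((c1 - c2) @ (c2i - c1i))
    term_mean = 0.5 * np.trace((c1i + c2i) @ (dm @ dm.T))
    return term_cov + term_mean

# Hypothetical two-band reflectance statistics for two canopies at one view angle
m_a, m_b = np.array([0.12, 0.35]), np.array([0.10, 0.42])
c_a = np.array([[4e-4, 1e-4], [1e-4, 9e-4]])
c_b = np.array([[5e-4, 2e-4], [2e-4, 8e-4]])
print(gaussian_divergence(m_a, c_a, m_b, c_b))
```
Repeating this calculation for each candidate wavelength subset and each sensor view angle, and ranking by divergence, is the feature-selection procedure the abstract describes: the optimum subset shifts with view angle because the Monte Carlo means and covariances do.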
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shestermanov, K. E.; Vasiliev, A. N. (IHEP, Serpukhov)
2005-12-01
A precise measurement of the angle α in the CKM triangle is very important for a complete test of the Standard Model. A theoretically clean method to extract α is provided by B⁰ → ρπ decays. Monte Carlo simulations were performed to obtain the BTeV reconstruction efficiency and to estimate the signal-to-background ratio for these decays. Finally, a time-dependent Dalitz plot analysis, using the isospin amplitude formalism for tree and penguin contributions, was carried out. It was shown that in one year of data taking BTeV could achieve an accuracy on α better than 5°.
Nature's style: Naturally trendy
Cohn, T.A.; Lins, H.F.
2005-01-01
Hydroclimatological time series often exhibit trends. While trend magnitude can be determined with little ambiguity, the corresponding statistical significance, sometimes cited to bolster scientific and political argument, is less certain because significance depends critically on the null hypothesis, which in turn reflects subjective notions about what one expects to see. We consider statistical trend tests of hydroclimatological data in the presence of long-term persistence (LTP). Monte Carlo experiments employing FARIMA models indicate that trend tests which fail to consider LTP greatly overstate the statistical significance of observed trends when LTP is present. A new test is presented that avoids this problem. From a practical standpoint, however, it may be preferable to acknowledge that the concept of statistical significance is meaningless when discussing poorly understood systems.
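The effect described here is easy to reproduce with a short Monte Carlo sketch (mine, not the authors' code): generate trend-free FARIMA(0,d,0) series with long-term persistence, apply a naive least-squares trend test, and observe the nominal 5% rejection rate inflate well above 5%.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def farima_0d0(n, d, burn=500):
    """FARIMA(0,d,0) sample path via truncated fractional-integration
    MA weights: psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k."""
    k = np.arange(1, n + burn)
    psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
    eps = rng.standard_normal(n + burn)
    x = np.convolve(eps, psi)[:n + burn]   # causal filter of white noise
    return x[burn:]                        # discard warm-up transient

def slope_pvalue(x):
    """p-value of the OLS slope, i.e. a trend test that ignores LTP."""
    return stats.linregress(np.arange(len(x)), x).pvalue

n, trials, d = 100, 2000, 0.3              # d > 0 gives long-term persistence
rejections = sum(slope_pvalue(farima_0d0(n, d)) < 0.05 for _ in range(trials))
print("nominal 5% test rejects:", rejections / trials)   # typically far above 0.05
```
Since every simulated series is trend-free by construction, every rejection is a false positive, which is precisely the overstated significance the abstract warns about when LTP is ignored.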