Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.
Serebrinsky, Santiago A
2011-03-01
We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
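To make the physical clock concrete, the following minimal sketch (not the paper's formal construction) shows a rejection-free, BKL-type kinetic Monte Carlo step in which an event is chosen with probability proportional to its rate and the simulated time advances by an exponentially distributed increment with mean 1/(total rate). The rate list and the `apply_event` callback are placeholders for a concrete model.

```python
import math
import random

def bkl_kmc_step(rates, state, apply_event, rng=random):
    """One rejection-free (BKL) KMC step for a continuous-time Markov chain.

    rates: list of rates k_i for the events available in `state`.
    apply_event: callback that applies event i to `state` and returns the new state.
    Returns (new_state, dt) where dt is the physical time advanced.
    """
    total = sum(rates)
    # Pick event i with probability k_i / total.
    target = rng.random() * total
    cum, chosen = 0.0, len(rates) - 1
    for i, k in enumerate(rates):
        cum += k
        if target < cum:
            chosen = i
            break
    # Physical time increment: exponentially distributed with mean 1/total.
    dt = -math.log(rng.random()) / total
    return apply_event(state, chosen), dt
```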
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piao, J; PLA 302 Hospital, Beijing; Xu, S
2016-06-15
Purpose: This study uses Monte Carlo methods to simulate the CyberKnife system and aims to develop a third-party tool for evaluating the dose verification of patient-specific plans in the TPS. Methods: The treatment head was simulated with the BEAMnrc and DOSXYZnrc codes, and the calculated data were compared with measurements to determine the beam parameters. The dose distributions calculated by the Ray-tracing and Monte Carlo algorithms of the TPS (MultiPlan Ver. 4.0.2) and by the in-house Monte Carlo simulation method were analyzed for 30 patient plans (10 head, 10 lung, and 10 liver cases). A γ analysis with the combined 3 mm/3% criteria was introduced to quantitatively evaluate the difference in accuracy between the three algorithms. Results: After the mean energy and FWHM were determined, more than 90% of the global error points were below 2% in the comparison of the PDD and OAR curves, and a reasonably ideal Monte Carlo beam model was established. Based on the quantitative evaluation of dose accuracy for the three algorithms, the γ analysis shows that the passing rates of the PTV in the 30 plans between the Monte Carlo simulation and the TPS Monte Carlo algorithm were good (84.88±9.67% for head, 98.83±1.05% for liver, 98.26±1.87% for lung). The passing rates of the PTV in the head and liver plans between the Monte Carlo simulation and the TPS Ray-tracing algorithm were also good (95.93±3.12% and 99.84±0.33%, respectively). However, the difference in the DVHs of the lung plans between the Monte Carlo simulation and the Ray-tracing algorithm was pronounced, and the corresponding γ passing rate (51.263±38.964%) was poor. It is therefore feasible to use Monte Carlo simulation to verify the dose distributions of patient plans. Conclusion: The Monte Carlo simulation algorithm developed for the CyberKnife system in this study can serve as a third-party reference tool, which plays an important role in the dose verification of patient plans. This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11275105). The authors thank Accuray Corp. for its support.
Off-diagonal expansion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
NASA Astrophysics Data System (ADS)
Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.
2008-02-01
Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head & neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?
Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend
2011-10-11
In the waste recycling Monte Carlo (WRMC) algorithm, (1) multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well posed for implementation in parallel on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.
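The core waste-recycling idea, letting rejected trial states contribute to the estimator with a weight equal to their acceptance probability, can be illustrated without any GPU machinery. The sketch below is a serial, single-trial toy on a 1D harmonic well; it is an assumption-laden illustration of the estimator, not the authors' CUDA implementation.

```python
import math
import random

def wrmc_average(beta=1.0, steps=100_000, step_size=1.0, seed=0):
    """Waste-recycling Metropolis estimate of <x^2> in a 1D well U(x) = x^2 / 2.

    At every step both the trial state and the current state contribute to the
    estimator, weighted by the acceptance probability and its complement."""
    rng = random.Random(seed)
    U = lambda x: 0.5 * x * x
    x, num, den = 0.0, 0.0, 0.0
    for _ in range(steps):
        x_new = x + rng.uniform(-step_size, step_size)
        p_acc = min(1.0, math.exp(-beta * (U(x_new) - U(x))))
        # Waste recycling: both states contribute, weighted by p_acc and 1 - p_acc.
        num += p_acc * x_new ** 2 + (1.0 - p_acc) * x ** 2
        den += 1.0
        if rng.random() < p_acc:   # ordinary Metropolis acceptance still drives the chain
            x = x_new
    return num / den

print(wrmc_average())  # close to 1.0, the exact <x^2> = 1/beta for beta = 1
```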
Data decomposition of Monte Carlo particle transport simulations via tally servers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, which allow complicated geometrical structures to be analyzed. Monte Carlo simulations are, however, time consuming because of the need to track the paths of individual photons. The computational cost is mainly associated with the calculation of logarithmic and trigonometric functions and with the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and their errors are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of Monte Carlo simulations using exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors in the moments of the simulated distributions of photon times of flight (total number of photons, mean time of flight, and variance) are less than 2% over a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing further acceleration.
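As a rough illustration of the kind of function approximation described above, the sketch below replaces the exact logarithm in photon path-length sampling with a short, range-reduced series. The coefficients here are a plain truncated atanh series rather than the authors' fitted polynomial/rational forms, and the interface is an invented example.

```python
import math
import random

LN2 = 0.6931471805599453

def fast_ln(x):
    """Approximate ln(x) for x in (0, 1]: range-reduce to m in [0.5, 1) and use a
    short atanh series; the truncation error is well below 1% on this interval."""
    m, e = math.frexp(x)                  # x = m * 2**e with m in [0.5, 1)
    t = (m - 1.0) / (m + 1.0)
    ln_m = 2.0 * t * (1.0 + t * t / 3.0 + t ** 4 / 5.0)
    return ln_m + e * LN2

def sample_path_length(mu_t, rng=random, log=fast_ln):
    """Photon free path length s = -ln(xi) / mu_t, with a swappable log routine."""
    return -log(rng.random()) / mu_t

# Quick accuracy check against the exact logarithm.
xs = [random.random() for _ in range(5)]
print([abs(fast_ln(x) - math.log(x)) for x in xs])
```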
Teaching Markov Chain Monte Carlo: Revealing the Basic Ideas behind the Algorithm
ERIC Educational Resources Information Center
Stewart, Wayne; Stewart, Sepideh
2014-01-01
For many scientists, researchers and students Markov chain Monte Carlo (MCMC) simulation is an important and necessary tool to perform Bayesian analyses. The simulation is often presented as a mathematical algorithm and then translated into an appropriate computer program. However, this can result in overlooking the fundamental and deeper…
Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A
2015-06-01
Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity Germanium detector for ⁶⁰Co gamma-ray spectroscopy problems was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity Germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.
Self-learning Monte Carlo method
Liu, Junwei; Qi, Yang; Meng, Zi Yang; ...
2017-01-04
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. Lastly, we demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10–20 times speedup.
RNA folding kinetics using Monte Carlo and Gillespie algorithms.
Clote, Peter; Bayegan, Amir H
2018-04-01
RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny ([Formula: see text]20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to [Formula: see text] times that of the Gillespie algorithm, where [Formula: see text] denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to MFPT computed by the Gillespie algorithm multiplied by [Formula: see text]; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics, as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence; see http://bioinformatics.bc.edu/clote/RNAexpNumNbors .
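The distinction between the two kinetics algorithms can be made concrete with a toy sketch. Assuming Metropolis-rule rates between neighboring states (an assumption; the paper uses the Turner energy model and dedicated software), a Gillespie step selects a move in proportion to its rate and advances physical time by an exponential variate, whereas a Monte Carlo step selects a neighbor uniformly, applies a Metropolis acceptance, and counts every attempt as one step.

```python
import math
import random

def metropolis_rate(e_from, e_to, kT=1.0):
    """Metropolis-rule rate for a move between two states."""
    return min(1.0, math.exp(-(e_to - e_from) / kT))

def gillespie_step(state, energy, neighbors, kT=1.0, rng=random):
    """Gillespie step: choose a neighbor with probability k_i / sum(k) and advance
    physical time by an exponential variate with mean 1 / sum(k)."""
    nbrs = neighbors(state)
    rates = [metropolis_rate(energy(state), energy(n), kT) for n in nbrs]
    total = sum(rates)
    r, cum, chosen = rng.random() * total, 0.0, nbrs[-1]
    for n, k in zip(nbrs, rates):
        cum += k
        if r < cum:
            chosen = n
            break
    return chosen, -math.log(rng.random()) / total

def monte_carlo_step(state, energy, neighbors, kT=1.0, rng=random):
    """Monte Carlo step: pick a neighbor uniformly at random and accept it with the
    Metropolis probability; every attempt counts as one unit of Monte Carlo time."""
    nbrs = neighbors(state)
    cand = rng.choice(nbrs)
    if rng.random() < metropolis_rate(energy(state), energy(cand), kT):
        state = cand
    return state, 1.0
```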
Loading relativistic Maxwell distributions in particle simulations
NASA Astrophysics Data System (ADS)
Zenitani, S.
2015-12-01
In order to study energetic plasma phenomena using particle-in-cell (PIC) and Monte Carlo simulations, we need to deal with relativistic velocity distributions in these simulations. However, numerical algorithms for handling relativistic distributions are not well known. In this contribution, we review basic algorithms to load relativistic Maxwell distributions in PIC and Monte Carlo simulations. For the stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain a relativistic shifted Maxwellian, two rejection methods are newly proposed in a physically transparent manner. Their acceptance efficiencies are 50% for generic cases and 100% for symmetric distributions. They can be combined with arbitrary base algorithms.
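The Sobol sampler mentioned above admits a compact sketch. The version below is written from the standard presentation of the method (and is therefore an illustration rather than a transcription of the paper): the momentum magnitude u = p/(mc) is drawn from a Gamma(3, θ) envelope and accepted when η² − u² > 1, which is equivalent to accepting with probability exp(−(γ − u)/θ). The proposed rejection steps for boosting to a shifted Maxwellian are not reproduced here.

```python
import math
import random

def sample_juttner_momentum(theta, rng=random):
    """Sobol-type rejection sampling of the magnitude u = p/(mc) for a stationary
    relativistic Maxwell (Juttner) distribution with temperature theta = kT/(mc^2)."""
    while True:
        x1, x2, x3, x4 = (rng.random() for _ in range(4))
        u = -theta * math.log(x1 * x2 * x3)        # Gamma(3, theta) envelope
        eta = u - theta * math.log(x4)             # adds an Exponential(theta) variate
        if eta * eta - u * u > 1.0:                # accept with prob exp(-(gamma - u)/theta)
            return u

def sample_juttner_momentum_3d(theta, rng=random):
    """Full 3D momentum: isotropic direction times the sampled magnitude."""
    u = sample_juttner_momentum(theta, rng)
    cos_t = 2.0 * rng.random() - 1.0
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = 2.0 * math.pi * rng.random()
    return (u * sin_t * math.cos(phi), u * sin_t * math.sin(phi), u * cos_t)
```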
Cell-veto Monte Carlo algorithm for long-range systems.
Kapfer, Sebastian C; Krauth, Werner
2016-09-01
We present a rigorous efficient event-chain Monte Carlo algorithm for long-range interacting particle systems. Using a cell-veto scheme within the factorized Metropolis algorithm, we compute each single-particle move with a fixed number of operations. For slowly decaying potentials such as Coulomb interactions, screening line charges allow us to take into account periodic boundary conditions. We discuss the performance of the cell-veto Monte Carlo algorithm for general inverse-power-law potentials, and illustrate how it provides a new outlook on one of the prominent bottlenecks in large-scale atomistic Monte Carlo simulations.
Monte Carlo Simulations of Radiative and Neutrino Transport under Astrophysical Conditions
NASA Astrophysics Data System (ADS)
Krivosheyev, Yu. M.; Bisnovatyi-Kogan, G. S.
2018-05-01
Monte Carlo simulations are utilized to model radiative and neutrino transfer in astrophysics. An algorithm that can be used to study radiative transport in astrophysical plasma, based on simulations of photon trajectories in a medium, is described. Formation of the hard X-ray spectrum of the Galactic microquasar SS 433 is considered in detail as an example. Specific requirements for applying such simulations to neutrino transport in a dense medium, and algorithmic differences compared to its application to photon transport, are discussed.
NASA Astrophysics Data System (ADS)
Yeh, Peter C. Y.; Lee, C. C.; Chao, T. C.; Tung, C. J.
2017-11-01
Intensity-modulated radiation therapy is an effective treatment modality for the nasopharyngeal carcinoma. One important aspect of this cancer treatment is the need to have an accurate dose algorithm dealing with the complex air/bone/tissue interface in the head-neck region to achieve the cure without radiation-induced toxicities. The Acuros XB algorithm explicitly solves the linear Boltzmann transport equation in voxelized volumes to account for the tissue heterogeneities such as lungs, bone, air, and soft tissues in the treatment field receiving radiotherapy. With the single beam setup in phantoms, this algorithm has already been demonstrated to achieve the comparable accuracy with Monte Carlo simulations. In the present study, five nasopharyngeal carcinoma patients treated with the intensity-modulated radiation therapy were examined for their dose distributions calculated using the Acuros XB in the planning target volume and the organ-at-risk. Corresponding results of Monte Carlo simulations were computed from the electronic portal image data and the BEAMnrc/DOSXYZnrc code. Analysis of dose distributions in terms of the clinical indices indicated that the Acuros XB was in comparable accuracy with Monte Carlo simulations and better than the anisotropic analytical algorithm for dose calculations in real patients.
Scalable Domain Decomposed Monte Carlo Particle Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Matthew Joseph
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Self-Learning Monte Carlo Method
NASA Astrophysics Data System (ADS)
Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large-size systems close to a phase transition or with strong frustrations, for which local updates perform badly. In this work, we propose a new general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup. This work is supported by the DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-SC0010526.
Hypothesis testing of scientific Monte Carlo calculations.
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
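A minimal example of the kind of test the authors advocate: compare batch means of a Monte Carlo estimate against a known reference value with a two-sided z-test. The integral, batch structure, and significance level below are illustrative choices, not taken from the paper.

```python
import math
import random

def z_test_mc_mean(samples, reference, alpha=0.01):
    """Two-sided z-test: is the MC estimate consistent with a known reference value?

    Returns (z, reject); reject=True flags a statistically significant discrepancy."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    stderr = math.sqrt(var / n)
    z = (mean - reference) / stderr
    z_crit = 2.576 if alpha == 0.01 else 1.96   # two-sided critical values
    return z, abs(z) > z_crit

# Example: Monte Carlo estimate of the integral of x^2 on [0, 1] (exact value 1/3),
# organized into 50 batches of 1000 samples each.
rng = random.Random(1)
batch_means = [sum(rng.random() ** 2 for _ in range(1000)) / 1000 for _ in range(50)]
print(z_test_mc_mean(batch_means, 1.0 / 3.0))
```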
Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians
NASA Astrophysics Data System (ADS)
Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan
2018-02-01
Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and thus for which destructive interference is not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k -SAT problems, use k -local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n -body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.
Analytic continuation of quantum Monte Carlo data by stochastic analytical inference.
Fuchs, Sebastian; Pruschke, Thomas; Jarrell, Mark
2010-05-01
We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we are able to obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as a by-product also the distribution function as a function of the regularization parameter. Our algorithm thus avoids the usual ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation.
Percentage depth dose evaluation in heterogeneous media using thermoluminescent dosimetry
da Rosa, L.A.R.; Campos, L.T.; Alves, V.G.L.; Batista, D.V.S.; Facure, A.
2010-01-01
The purpose of this study is to investigate the influence of a lung heterogeneity inside a soft tissue phantom on the percentage depth dose (PDD). PDD curves were obtained experimentally using LiF:Mg,Ti (TLD-100) thermoluminescent detectors and by applying the Eclipse treatment planning system algorithms Batho, modified Batho (M-Batho or BMod), equivalent TAR (E-TAR or EQTAR), and the anisotropic analytical algorithm (AAA) for a 15 MV photon beam and field sizes of 1×1, 2×2, 5×5, and 10×10 cm². Monte Carlo simulations were performed using the DOSRZnrc user code of EGSnrc. The experimental results agree with Monte Carlo simulations for all irradiation field sizes. Comparisons with Monte Carlo calculations show that the AAA algorithm provides the best simulations of PDD curves for all field sizes investigated. However, even this algorithm cannot accurately predict PDD values in the lung for field sizes of 1×1 and 2×2 cm². An overdosage in the lung of about 40% and 20% is calculated by the AAA algorithm close to the soft tissue/lung interface for the 1×1 and 2×2 cm² field sizes, respectively. It was demonstrated that differences of 100% between Monte Carlo results and the responses of the Batho, modified Batho, and equivalent TAR algorithms may exist inside the lung region for the 1×1 cm² field. PACS number: 87.55.kd
Pattern Recognition for a Flight Dynamics Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; Hurtado, John E.
2011-01-01
The design, analysis, and verification and validation of a spacecraft relies heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
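A hedged sketch of the analysis pipeline described above, assuming scikit-learn's SequentialFeatureSelector wrapped around a k-nearest-neighbor classifier on synthetic stand-in data; the kernel density estimation stage and the real flight-dynamics variables are omitted, and the parameter count and failure criterion below are invented for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_selection import SequentialFeatureSelector

# Synthetic stand-in for Monte Carlo dispersion data: each row holds one run's
# dispersed design parameters; each label marks pass (0) / fail (1).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                          # 8 dispersed parameters
y = ((X[:, 1] > 1.0) & (X[:, 4] < -0.5)).astype(int)    # failures driven by params 1 and 4

# Sequential (forward) feature selection with a k-nearest-neighbor classifier
# ranks which dispersed parameters best separate failed runs from nominal ones.
knn = KNeighborsClassifier(n_neighbors=15)
sfs = SequentialFeatureSelector(knn, n_features_to_select=2, direction="forward")
sfs.fit(X, y)
print("parameters flagged as failure drivers:", np.flatnonzero(sfs.get_support()))
```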
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Faming; Cheng, Yichen; Lin, Guang
2014-06-13
Simulated annealing has been widely used in the solution of optimization problems. As is well known, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used; however, the logarithmic schedule is so slow that the required CPU time is prohibitive. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature decreases much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while still guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
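To make the cooling schedules concrete, the sketch below runs plain simulated annealing with a square-root schedule T_k = T_0/√(k+1) on a toy objective. Note that this baseline lacks the stochastic-approximation correction that gives the paper's algorithm its convergence guarantee under fast cooling; it only illustrates the schedule itself.

```python
import math
import random

def anneal_sqrt_cooling(energy, neighbor, x0, t0=10.0, steps=200_000, seed=0):
    """Simulated annealing with a square-root cooling schedule T_k = t0 / sqrt(k + 1),
    which decays much faster than the logarithmic schedule t0 / log(k + 2)."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for k in range(steps):
        temp = t0 / math.sqrt(k + 1)
        x_new = neighbor(x, rng)
        e_new = energy(x_new)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / temp):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Toy multimodal objective: 1D Rastrigin-like function with global minimum at x = 0.
f = lambda x: x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0
step = lambda x, rng: x + rng.gauss(0.0, 0.5)
print(anneal_sqrt_cooling(f, step, x0=4.0))
```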
Testing trivializing maps in the Hybrid Monte Carlo algorithm
Engel, Georg P.; Schaefer, Stefan
2011-01-01
We test a recent proposal to use approximate trivializing maps in a field theory to speed up Hybrid Monte Carlo simulations. Simulating the CP^(N-1) model, we find a small improvement with the leading order transformation, which is however compensated by the additional computational overhead. The scaling of the algorithm towards the continuum is not changed. In particular, the effect of the topological modes on the autocorrelation times is studied. PMID:21969733
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
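The product-of-conditional-probabilities idea can be sketched on a generic rare-event problem. The toy below estimates P[g(X) > b] for a standard normal vector X using subset simulation with a component-wise modified Metropolis-Hastings sampler; it omits the paper's mapping from the stochastic simulation algorithm to standard normal variables and all biochemical detail.

```python
import numpy as np

def subset_simulation(g, dim, threshold, n=2000, p0=0.1, seed=0):
    """Estimate P[g(X) > threshold] for X ~ N(0, I) as a product of conditional
    probabilities, each kept around p0 so it can be estimated accurately."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    gx = np.array([g(xi) for xi in x])
    prob, n_seed = 1.0, int(p0 * n)
    for _ in range(50):                            # cap on the number of levels
        order = np.argsort(gx)[::-1]               # sort responses, largest first
        level = gx[order[n_seed - 1]]              # intermediate threshold
        if level >= threshold:                     # last level: finish with plain MC
            return prob * np.mean(gx > threshold)
        prob *= p0
        seeds, seeds_g = x[order[:n_seed]], gx[order[:n_seed]]
        chains_x, chains_g = [], []
        per_chain = n // n_seed
        for s, sg in zip(seeds, seeds_g):          # grow chains inside {g > level}
            cur, cur_g = s.copy(), sg
            for _ in range(per_chain):
                cand = cur.copy()
                for j in range(dim):               # component-wise Gaussian proposal
                    prop = cur[j] + rng.standard_normal()
                    if rng.random() < min(1.0, np.exp(0.5 * (cur[j] ** 2 - prop ** 2))):
                        cand[j] = prop
                cand_g = g(cand)
                if cand_g > level:                 # reject moves leaving the subset
                    cur, cur_g = cand, cand_g
                chains_x.append(cur.copy())
                chains_g.append(cur_g)
        x, gx = np.array(chains_x), np.array(chains_g)
    return prob

# Example: P[sum of 10 standard normals > 12], exact value about 7.5e-5.
print(subset_simulation(lambda v: v.sum(), dim=10, threshold=12.0))
```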
Direct simulation Monte Carlo method for the Uehling-Uhlenbeck-Boltzmann equation.
Garcia, Alejandro L; Wagner, Wolfgang
2003-11-01
In this paper we describe a direct simulation Monte Carlo algorithm for the Uehling-Uhlenbeck-Boltzmann equation in terms of Markov processes. This provides a unifying framework for both the classical Boltzmann case as well as the Fermi-Dirac and Bose-Einstein cases. We establish the foundation of the algorithm by demonstrating its link to the kinetic equation. By numerical experiments we study its sensitivity to the number of simulation particles and to the discretization of the velocity space, when approximating the steady-state distribution.
A novel Monte Carlo algorithm for simulating crystals with McStas
NASA Astrophysics Data System (ADS)
Alianelli, L.; Sánchez del Río, M.; Felici, R.; Andersen, K. H.; Farhi, E.
2004-07-01
We developed an original Monte Carlo algorithm for the simulation of Bragg diffraction by mosaic, bent and gradient crystals. It has practical applications, as it can be used for simulating imperfect crystals (monochromators, analyzers and perhaps samples) in neutron ray-tracing packages, like McStas. The code we describe here provides a detailed description of the particle interaction with the microscopic homogeneous regions composing the crystal, therefore it can be used also for the calculation of quantities having a conceptual interest, as multiple scattering, or for the interpretation of experiments aiming at characterizing crystals, like diffraction topographs.
NASA Astrophysics Data System (ADS)
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes---in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented/tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters.
NASA Astrophysics Data System (ADS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2017-07-01
Standard computational methods used to account for the Pauli exclusion principle in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
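For scattering on phonons or impurities, the standard way to enforce the Pauli exclusion principle is to reject transitions into occupied final states with probability equal to the occupancy. The sketch below shows that rejection step for a static Fermi-Dirac occupancy; the paper's actual contribution, handling e-e scattering where the occupancy itself evolves during the simulation, is not captured here.

```python
import math
import random

def fermi_dirac(energy, mu, kT):
    """Fermi-Dirac occupancy f(E)."""
    return 1.0 / (1.0 + math.exp((energy - mu) / kT))

def attempt_scattering(e_initial, e_final, mu, kT, rng=random):
    """Accept a scattering event into the final state with probability 1 - f(E_final),
    so transitions into already-occupied states are suppressed (Pauli blocking)."""
    if rng.random() < 1.0 - fermi_dirac(e_final, mu, kT):
        return e_final          # scattering accepted
    return e_initial            # blocked: the carrier keeps its state
```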
Monte Carlo simulations in X-ray imaging
NASA Astrophysics Data System (ADS)
Giersch, Jürgen; Durst, Jürgen
2008-06-01
Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the fields of nuclear medicine to define virtual setups studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with some examples done by the Monte Carlo simulation ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel computing Monte Carlo simulation for X-ray imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu
2014-05-12
Conventional semi-infinite solutions for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noise to the DCS data resulted in αD_B variations, the mean errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not depend on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias
2015-01-01
A recent numerical result (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the asymptotic advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as the escape rate in quantum Monte Carlo for a class of problems. Thus, the Google result might be explained in our framework. We also found that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.
The Linked Neighbour List (LNL) method for fast off-lattice Monte Carlo simulations of fluids
NASA Astrophysics Data System (ADS)
Mazzeo, M. D.; Ricci, M.; Zannoni, C.
2010-03-01
We present a new algorithm, called linked neighbour list (LNL), useful to substantially speed up off-lattice Monte Carlo simulations of fluids by avoiding the computation of the molecular energy before every attempted move. We introduce a few variants of the LNL method targeted to minimise memory footprint or augment memory coherence and cache utilisation. Additionally, we present a few algorithms which drastically accelerate neighbour finding. We test our methods on the simulation of a dense off-lattice Gay-Berne fluid subjected to periodic boundary conditions observing a speedup factor of about 2.5 with respect to a well-coded implementation based on a conventional link-cell. We provide several implementation details of the different key data structures and algorithms used in this work.
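As background for the neighbour-list bookkeeping discussed above, the sketch below shows a conventional Verlet-style list (built here with a simple O(N²) scan) together with a local energy evaluation that touches only listed neighbours; it is a baseline illustration, not the authors' LNL data structure or their accelerated neighbour finding.

```python
def build_neighbour_list(positions, cutoff, skin=0.3):
    """O(N^2) Verlet-style neighbour list: for each particle, store the indices of all
    particles within cutoff + skin, so a trial move only needs local energy terms."""
    r_list2 = (cutoff + skin) ** 2
    n = len(positions)
    neighbours = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j]))
            if d2 < r_list2:
                neighbours[i].append(j)
                neighbours[j].append(i)
    return neighbours

def local_energy(i, positions, neighbours, pair_energy):
    """Energy of particle i computed only over its listed neighbours, avoiding a
    full recomputation of the molecular energy before every attempted move."""
    return sum(pair_energy(positions[i], positions[j]) for j in neighbours[i])
```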
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations
Radak, Brian K.; Roux, Benoît
2016-10-07
Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant-pH MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Lastly, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.
An improved target velocity sampling algorithm for free gas elastic scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Walsh, Jonathan A.
We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.
Hybrid Microgrid Configuration Optimization with Evolutionary Algorithms
NASA Astrophysics Data System (ADS)
Lopez, Nicolas
This dissertation explores the Renewable Energy Integration Problem and proposes a Genetic Algorithm embedded with a Monte Carlo simulation to solve large instances of the problem that are impractical to solve via full enumeration. The Renewable Energy Integration Problem is defined as finding the optimum set of components to supply the electric demand of a hybrid microgrid. The components considered are solar panels, wind turbines, diesel generators, electric batteries, connections to the power grid, and converters, which can be inverters and/or rectifiers. The methodology developed is explained, as is the combinatorial formulation. In addition, two case studies of a single-objective optimization version of the problem are presented, minimizing cost and minimizing global warming potential (GWP), followed by a multi-objective implementation of the proposed methodology using a non-sorting Genetic Algorithm embedded with a Monte Carlo simulation. The method is validated by solving a small instance of the problem with a known solution via a full enumeration algorithm developed by NREL in their software HOMER. The dissertation concludes that evolutionary algorithms embedded with Monte Carlo simulation, namely modified Genetic Algorithms, are an efficient way of solving the problem, finding approximate solutions in the case of single-objective optimization and approximating the true Pareto front in the case of multi-objective optimization of the Renewable Energy Integration Problem.
Event-chain Monte Carlo algorithms for three- and many-particle interactions
NASA Astrophysics Data System (ADS)
Harland, J.; Michel, M.; Kampmann, T. A.; Kierfeld, J.
2017-02-01
We generalize the rejection-free event-chain Monte Carlo algorithm from many-particle systems with pairwise interactions to systems with arbitrary three- or many-particle interactions. We introduce generalized lifting probabilities between particles and obtain a general set of equations for lifting probabilities, the solution of which guarantees maximal global balance. We validate the resulting three-particle event-chain Monte Carlo algorithms on three different systems by comparison with conventional local Monte Carlo simulations: i) a test system of three particles with a three-particle interaction that depends on the enclosed triangle area; ii) a hard-needle system in two dimensions, where needle interactions constitute three-particle interactions of the needle end points; iii) a semiflexible polymer chain with a bending energy, which constitutes a three-particle interaction of neighboring chain beads. The examples demonstrate that the generalization to many-particle interactions broadens the applicability of event-chain algorithms considerably.
Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming
2017-02-01
The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical, and severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead for the parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate the communication redundancy. Then, we utilize the shared memory to reduce the memory copy overhead of the intra-node communication. Finally, we optimize the communication scheduling using the neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (640 cores in total), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
Simulation of Nuclear Reactor Kinetics by the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gomin, E. A.; Davidenko, V. D.; Zinchenko, A. S.; Kharchenko, I. K.
2017-12-01
The KIR computer code intended for calculations of nuclear reactor kinetics using the Monte Carlo method is described. The algorithm implemented in the code is described in detail. Some results of test calculations are given.
NASA Astrophysics Data System (ADS)
Dyer, Oliver T.; Ball, Robin C.
2017-03-01
We develop a new algorithm for the Brownian dynamics of soft matter systems that evolves time by spatially correlated Monte Carlo moves. The algorithm uses vector wavelets as its basic moves and produces hydrodynamics in the low Reynolds number regime propagated according to the Oseen tensor. When small moves are removed, the correlations closely approximate the Rotne-Prager tensor, itself widely used to correct for deficiencies in Oseen. We also include plane wave moves to provide the longest range correlations, which we detail for both infinite and periodic systems. The computational cost of the algorithm scales competitively with the number of particles simulated, N, scaling as N ln N in homogeneous systems and as N in dilute systems. In comparisons to established lattice Boltzmann and Brownian dynamics algorithms, the wavelet method was found to be only a factor of order 1 times more expensive than the cheaper lattice Boltzmann algorithm in marginally semi-dilute simulations, while it is significantly faster than both algorithms at large N in dilute simulations. We also validate the algorithm by checking that it reproduces the correct dynamics and equilibrium properties of simple single polymer systems, as well as verifying the effect of periodicity on the mobility tensor.
NASA Astrophysics Data System (ADS)
Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2017-10-01
Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
Numerical heating in Particle-In-Cell simulations with Monte Carlo binary collisions
NASA Astrophysics Data System (ADS)
Alves, E. Paulo; Mori, Warren; Fiuza, Frederico
2017-10-01
The binary Monte Carlo collision (BMCC) algorithm is a robust and popular method to include Coulomb collision effects in Particle-in-Cell (PIC) simulations of plasmas. While a number of works have focused on extending the validity of the model to different physical regimes of temperature and density, little attention has been given to the fundamental coupling between PIC and BMCC algorithms. Here, we show that the coupling between PIC and BMCC algorithms can give rise to (nonphysical) numerical heating of the system, that can be far greater than that observed when these algorithms operate independently. This deleterious numerical heating effect can significantly impact the evolution of the simulated system particularly for long simulation times. In this work, we describe the source of this numerical heating, and derive scaling laws for the numerical heating rates based on the numerical parameters of PIC-BMCC simulations. We compare our theoretical scalings with PIC-BMCC numerical experiments, and discuss strategies to minimize this parasitic effect. This work is supported by DOE FES under FWP 100237 and 100182.
Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Giorgos, E-mail: garab@math.uoc.gr; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Plechac, Petr, E-mail: plechac@math.udel.edu
2012-10-01
We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physiochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on constructing exactly the stochastic trajectories, our approach relies on approximating the evolution of observables, such as density, coverage, correlations and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel Fractional step kinetic Monte Carlo algorithms by employing the Trotter Theorem and its randomized variants; these schemes, (a) are partially asynchronous on each fractional step time-window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communicating schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example, in Ising-type systems and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. Finally, we discuss work load balancing between processors and propose a re-balancing scheme based on probabilistic mass transport methods.
Subtle Monte Carlo Updates in Dense Molecular Systems.
Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper
2012-02-14
Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.
Vectorization of a Monte Carlo simulation scheme for nonequilibrium gas dynamics
NASA Technical Reports Server (NTRS)
Boyd, Iain D.
1991-01-01
Significant improvement has been obtained in the numerical performance of a Monte Carlo scheme for the analysis of nonequilibrium gas dynamics through an implementation of the algorithm which takes advantage of vector hardware, as presently demonstrated through application to three different problems. These are (1) a 1D standing-shock wave; (2) the flow of an expanding gas through an axisymmetric nozzle; and (3) the hypersonic flow of Ar gas over a 3D wedge. Problem (3) is illustrative of the greatly increased number of molecules which the simulation may involve, thanks to improved algorithm performance.
Saxton, Michael J
2007-01-01
Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
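A minimal sketch of the kind of obstructed random walk discussed above, assuming a blind walker on a 2D periodic lattice with randomly placed immobile obstacles; the obstacle fraction, lattice size and square-lattice geometry are illustrative choices, not values from the chapter.

```python
# Obstructed diffusion toy model: attempted moves onto obstacle sites are rejected.
import numpy as np

rng = np.random.default_rng(1)

SIZE = 200                # lattice edge (periodic boundaries)
PHI = 0.3                 # obstacle area fraction
STEPS, WALKERS = 5000, 500
MOVES = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

obstacles = rng.random((SIZE, SIZE)) < PHI

# start walkers on obstacle-free sites
pos = []
while len(pos) < WALKERS:
    x, y = rng.integers(0, SIZE, size=2)
    if not obstacles[x, y]:
        pos.append((x, y))
pos = np.array(pos)
origin = pos.copy()
unwrapped = pos.astype(float)

for _ in range(STEPS):
    step = MOVES[rng.integers(0, 4, size=WALKERS)]
    trial = (pos + step) % SIZE
    blocked = obstacles[trial[:, 0], trial[:, 1]]
    pos = np.where(blocked[:, None], pos, trial)
    unwrapped = np.where(blocked[:, None], unwrapped, unwrapped + step)

msd = ((unwrapped - origin) ** 2).sum(axis=1).mean()
print("mean-square displacement after", STEPS, "steps:", msd)
print("unobstructed expectation:", float(STEPS))  # <r^2> = t for a blind lattice walk
```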
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Clifford; School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030; Ji, Weixiao
2014-02-01
We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets. -- Highlights: •We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet. •The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation. •Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. •The testbed involves a polymeric system of oligopyrroles in the condensed phase. •The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.
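For orientation, here is a plain CPU sketch of the Metropolis Monte Carlo step that such a GPU engine parallelizes, using a small Lennard-Jones system with single-particle displacement moves; the potential, system size and reduced units are assumptions for illustration, not the CPU–GPU implementation described above.

```python
# Core Metropolis Monte Carlo step for a small Lennard-Jones system (reduced units).
import numpy as np

rng = np.random.default_rng(2)

N, L, T = 64, 8.0, 1.5          # particles, periodic box edge, temperature
MAX_DISP = 0.2
pos = rng.random((N, 3)) * L

def particle_energy(i, coords):
    """Lennard-Jones energy of particle i with all others (minimum image)."""
    d = coords - coords[i]
    d -= L * np.round(d / L)
    r2 = (d ** 2).sum(axis=1)
    r2[i] = np.inf                       # exclude self-interaction
    inv6 = 1.0 / r2 ** 3
    return 4.0 * np.sum(inv6 ** 2 - inv6)

accepted = 0
for sweep in range(200):
    for _ in range(N):
        i = rng.integers(N)
        old = particle_energy(i, pos)
        trial = pos.copy()
        trial[i] = (trial[i] + rng.uniform(-MAX_DISP, MAX_DISP, 3)) % L
        delta = particle_energy(i, trial) - old
        # Metropolis criterion: downhill moves always, uphill with exp(-dE/kT)
        if delta <= 0.0 or rng.random() < np.exp(-delta / T):
            pos = trial
            accepted += 1

print("acceptance ratio:", accepted / (200 * N))
```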
A point kernel algorithm for microbeam radiation therapy
NASA Astrophysics Data System (ADS)
Debus, Charlotte; Oelfke, Uwe; Bartzsch, Stefan
2017-11-01
Microbeam radiation therapy (MRT) is a treatment approach in radiation therapy where the treatment field is spatially fractionated into arrays of a few tens of micrometre wide planar beams of unusually high peak doses separated by low dose regions of several hundred micrometre width. In preclinical studies, this treatment approach has proven to spare normal tissue more effectively than conventional radiation therapy, while being equally efficient in tumour control. So far, dose calculations in MRT, a prerequisite for future clinical applications, are based on Monte Carlo simulations. However, they are computationally expensive, since scoring volumes have to be small. In this article a kernel based dose calculation algorithm is presented that splits the calculation into photon and electron mediated energy transport, and performs the calculation of peak and valley doses in typical MRT treatment fields within a few minutes. Kernels are calculated analytically depending on the energy spectrum and material composition. In various homogeneous materials, peak doses, valley doses and microbeam profiles are calculated and compared to Monte Carlo simulations. For a microbeam exposure of an anthropomorphic head phantom, calculated dose values are compared to measurements and Monte Carlo calculations. Except for regions close to material interfaces, calculated peak dose values match Monte Carlo results within 4% and valley dose values within 8% deviation. No significant differences are observed between profiles calculated by the kernel algorithm and Monte Carlo simulations. Measurements in the head phantom agree within 4% in the peak and within 10% in the valley region. The presented algorithm is attached to the treatment planning platform VIRTUOS. It was and is used for dose calculations in preclinical and pet-clinical trials at the biomedical beamline ID17 of the European Synchrotron Radiation Facility in Grenoble, France.
Quantum Monte Carlo Simulation of Frustrated Kondo Lattice Models
NASA Astrophysics Data System (ADS)
Sato, Toshihiro; Assaad, Fakher F.; Grover, Tarun
2018-03-01
The absence of the negative sign problem in quantum Monte Carlo simulations of spin and fermion systems has different origins. World-line based algorithms for spins require positivity of matrix elements whereas auxiliary field approaches for fermions depend on symmetries such as particle-hole symmetry. For negative-sign-free spin and fermionic systems, we show that one can formulate a negative-sign-free auxiliary field quantum Monte Carlo algorithm that allows Kondo coupling of fermions with the spins. Using this general approach, we study a half-filled Kondo lattice model on the honeycomb lattice with geometric frustration. In addition to the conventional Kondo insulator and antiferromagnetically ordered phases, we find a partial Kondo screened state where spins are selectively screened so as to alleviate frustration, and the lattice rotation symmetry is broken nematically.
Methods for Monte Carlo simulations of biomacromolecules
Vitalis, Andreas; Pappu, Rohit V.
2010-01-01
The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse graining strategies. PMID:20428473
How Monte Carlo heuristics aid to identify the physical processes of drug release kinetics.
Lecca, Paola
2018-01-01
We implement a Monte Carlo heuristic algorithm to model drug release from a solid dosage form. We show that with Monte Carlo simulations it is possible to identify and explain the causes of the unsatisfactory predictive power of current drug release models. It is well known that the power-law and exponential models, as well as those derived from or inspired by them, accurately reproduce only the first 60% of the release curve of a drug from a dosage form. In this study, by using Monte Carlo simulation approaches, we show that these models fit almost the entire release profile quite accurately when the release kinetics is not governed by the coexistence of different physico-chemical mechanisms. We show that the accuracy of the traditional models is comparable with that of Monte Carlo heuristics when these heuristics approximate and oversimplify the phenomenology of drug release. This observation suggests developing and using novel Monte Carlo simulation heuristics able to describe the complexity of the release kinetics, and consequently to generate data more similar to those observed in real experiments. Implementing Monte Carlo simulation heuristics of the drug release phenomenology may be much more straightforward and efficient than hypothesizing and implementing complex mathematical models of the physical processes involved in drug release from scratch. Identifying and understanding through simulation heuristics which processes of this phenomenology reproduce the observed data, and then formalizing them mathematically, may allow avoiding time-consuming, trial-and-error regression procedures. Three bullet points highlight the customization of the procedure.
•An efficient heuristic based on Monte Carlo methods for simulating drug release from a solid dosage form is presented. It specifies the model of the physical process in a simple but accurate way through the formula for the Monte Carlo Micro Step (MCS) time interval.
•Given the experimentally observed drug release curve, we point out how Monte Carlo heuristics can be integrated into an evolutionary algorithmic approach to infer the mode of MCS that best fits the observed data, and thus the observed release kinetics.
•The software implementing the method is written in R, the free language most widely used in the bioinformatics community.
Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2014-01-01
Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well-established and based on Bird's 1994 algorithms written in Fortran 77 and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.
Simulation and performance of an artificial retina for 40 MHz track reconstruction
Abba, A.; Bedeschi, F.; Citterio, M.; ...
2015-03-05
We present the results of a detailed simulation of the artificial retina pattern-recognition algorithm, designed to reconstruct events with hundreds of charged-particle tracks in pixel and silicon detectors at LHCb with the LHC crossing frequency of 40 MHz. The performance of the artificial retina algorithm is assessed using the official Monte Carlo samples of the LHCb experiment. We find the performance of the retina pattern-recognition algorithm to be comparable with that of the full LHCb reconstruction algorithm.
Loading relativistic Maxwell distributions in particle simulations
NASA Astrophysics Data System (ADS)
Zenitani, Seiji
2015-04-01
Numerical algorithms to load relativistic Maxwell distributions in particle-in-cell (PIC) and Monte Carlo simulations are presented. For the stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain a relativistic shifted Maxwellian, two rejection methods are proposed in a physically transparent manner. Their acceptance efficiencies are ≈50% for generic cases and 100% for symmetric distributions. They can be combined with arbitrary base algorithms.
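The following sketch illustrates the two ingredients mentioned in the abstract, assuming the commonly quoted form of the Sobol algorithm for the stationary Maxwell-Jüttner distribution and a flip-then-boost rejection step for the drifting case; the exact acceptance rules should be checked against the paper before use in a production PIC or Monte Carlo code.

```python
# Loading a relativistic Maxwellian momentum distribution (sketch, formulas from
# the standard statement of the Sobol algorithm; verify against the paper).
import numpy as np

rng = np.random.default_rng(3)

def sobol_maxwell_juttner(theta, n):
    """Draw n values of u = gamma*beta for temperature theta = kT/mc^2."""
    out, got = np.empty(n), 0
    while got < n:
        x = rng.random((n, 4))
        u = -theta * np.log(x[:, 0] * x[:, 1] * x[:, 2])
        eta = -theta * np.log(x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3])
        keep = u[eta ** 2 - u ** 2 > 1.0]          # acceptance condition eta > gamma
        take = min(n - got, keep.size)
        out[got:got + take] = keep[:take]
        got += take
    return out

def shifted_maxwellian_ux(theta, beta_drift, n):
    """Return ux for a Maxwellian drifting with velocity beta_drift along x."""
    u = sobol_maxwell_juttner(theta, n)
    mu = rng.uniform(-1.0, 1.0, n)                 # isotropic rest-frame direction
    ux = u * mu
    gamma = np.sqrt(1.0 + u ** 2)
    vx = ux / gamma
    # flipping step (assumed form): flip backward-moving particles with prob -beta*vx
    flip = rng.random(n) < -beta_drift * vx
    ux = np.where(flip, -ux, ux)
    gamma_d = 1.0 / np.sqrt(1.0 - beta_drift ** 2)
    return gamma_d * (ux + beta_drift * gamma)     # Lorentz boost along x

ux = shifted_maxwellian_ux(theta=0.5, beta_drift=0.3, n=100_000)
print("mean ux (positive for a +x drift):", ux.mean())
```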
2015-01-01
Procedure. The simulated annealing (SA) algorithm is a well-known local search metaheuristic used to address discrete, continuous, and multiobjective... design of experiments (DOE) to tune the parameters of the optimization algorithm. Section 5 shows the results of the case study. Finally, concluding... metaheuristic. The proposed method is broken down into two phases. Phase I consists of a Monte Carlo simulation to obtain the simulated percentage of failure
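Since the excerpt is fragmentary, the following is only a generic simulated annealing sketch (Metropolis acceptance with a geometric cooling schedule) on a toy one-dimensional objective; the DOE-based parameter tuning and the Monte Carlo failure-probability phase mentioned above are not reproduced.

```python
# Generic simulated annealing for a toy continuous minimization problem.
import math
import random

random.seed(4)

def objective(x):
    # Toy multimodal test function; any black-box cost could be plugged in.
    return x ** 2 + 10.0 * math.sin(3.0 * x)

x = random.uniform(-10.0, 10.0)
fx = objective(x)
best_x, best_f = x, fx
temperature = 5.0

for step in range(20_000):
    trial = x + random.gauss(0.0, 0.5)       # local move
    ft = objective(trial)
    # accept improving moves always, worsening moves with Boltzmann probability
    if ft <= fx or random.random() < math.exp(-(ft - fx) / temperature):
        x, fx = trial, ft
        if fx < best_f:
            best_x, best_f = x, fx
    temperature *= 0.9995                    # geometric cooling schedule

print(f"best x = {best_x:.4f}, objective = {best_f:.4f}")
```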
Monte Carlo tests of the ELIPGRID-PC algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, J.R.
1995-04-01
The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
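A validation of this kind can be reproduced in a few lines: the sketch below estimates, by direct Monte Carlo, the probability that at least one node of a square sampling grid falls inside an elliptical hot spot of random position and orientation. The grid spacing and ellipse semi-axes are arbitrary illustrative values, and this is the brute-force reference calculation, not the ELIPGRID algorithm itself.

```python
# Monte Carlo estimate of a hot-spot detection probability for a square grid.
import numpy as np

rng = np.random.default_rng(5)

GRID = 10.0          # grid spacing
A, B = 6.0, 2.0      # ellipse semi-axes
N_TRIALS = 100_000

# A small patch of grid nodes around one cell is enough because the problem is
# periodic in the hot-spot centre.
nodes = np.array([(i * GRID, j * GRID) for i in range(-2, 3) for j in range(-2, 3)])

hits = 0
for _ in range(N_TRIALS):
    cx, cy = rng.uniform(0.0, GRID, size=2)      # centre within one grid cell
    phi = rng.uniform(0.0, np.pi)                # random orientation
    d = nodes - (cx, cy)
    # rotate node offsets into the ellipse frame
    xr = d[:, 0] * np.cos(phi) + d[:, 1] * np.sin(phi)
    yr = -d[:, 0] * np.sin(phi) + d[:, 1] * np.cos(phi)
    if np.any((xr / A) ** 2 + (yr / B) ** 2 <= 1.0):
        hits += 1

p = hits / N_TRIALS
print(f"detection probability ~ {p:.3f} +/- {np.sqrt(p * (1 - p) / N_TRIALS):.3f}")
```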
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
• Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
• Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
• Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
• Visualizing constructive solid geometry, sourcing particles, deciding when particle streaming communication is complete, and spatial redecomposition.
These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms on up to 2 million MPI processes on the Sequoia supercomputer.
ERIC Educational Resources Information Center
Martin-Fernandez, Manuel; Revuelta, Javier
2017-01-01
This study compares the performance of two recently introduced estimation algorithms, the Metropolis-Hastings Robbins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two algorithms consolidated in the psychometric literature, the marginal likelihood via EM algorithm (MML-EM) and the Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Probabilistic analysis algorithm for UA slope software program.
DOT National Transportation Integrated Search
2013-12-01
A reliability-based computational algorithm for using a single row of equally spaced drilled shafts to stabilize an unstable slope has been developed in this research. The Monte Carlo simulation (MCS) technique was used in the previously develop...
Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2014-03-01
The hybrid Monte Carlo algorithm (HMCA) is applied for Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimations of the RSV model.
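For readers unfamiliar with the method, the following is a generic hybrid (Hamiltonian) Monte Carlo sketch with the conventional leapfrog integrator applied to a toy two-dimensional Gaussian target; the realized stochastic volatility model and the 2nd-order minimum norm integrator of the paper are not reproduced here.

```python
# Generic HMC with leapfrog molecular-dynamics trajectories (toy Gaussian target).
import numpy as np

rng = np.random.default_rng(6)

COV = np.array([[1.0, 0.8], [0.8, 1.0]])
PREC = np.linalg.inv(COV)

def neg_log_post(q):
    return 0.5 * q @ PREC @ q

def grad_neg_log_post(q):
    return PREC @ q

def hmc_step(q, step=0.15, n_leap=20):
    p = rng.normal(size=q.size)                    # refresh momenta
    q_new, p_new = q.copy(), p.copy()
    # leapfrog trajectory
    p_new -= 0.5 * step * grad_neg_log_post(q_new)
    for _ in range(n_leap - 1):
        q_new += step * p_new
        p_new -= step * grad_neg_log_post(q_new)
    q_new += step * p_new
    p_new -= 0.5 * step * grad_neg_log_post(q_new)
    # Metropolis accept/reject on the total Hamiltonian
    h_old = neg_log_post(q) + 0.5 * p @ p
    h_new = neg_log_post(q_new) + 0.5 * p_new @ p_new
    if rng.random() < np.exp(min(0.0, h_old - h_new)):
        return q_new, True
    return q, False

q, samples, n_acc = np.zeros(2), [], 0
for _ in range(5000):
    q, acc = hmc_step(q)
    n_acc += acc
    samples.append(q.copy())

samples = np.array(samples)
print("acceptance rate  :", n_acc / 5000)
print("sample covariance:\n", np.cov(samples[1000:].T))
```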
Split Orthogonal Group: A Guiding Principle for Sign-Problem-Free Fermionic Simulations
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Ye-Hua; Iazzi, Mauro; Troyer, Matthias; Harcos, Gergely
2015-12-01
We present a guiding principle for designing fermionic Hamiltonians and quantum Monte Carlo (QMC) methods that are free from the infamous sign problem by exploiting the Lie groups and Lie algebras that appear naturally in the Monte Carlo weight of fermionic QMC simulations. Specifically, rigorous mathematical constraints on the determinants involving matrices that lie in the split orthogonal group provide a guideline for sign-free simulations of fermionic models on bipartite lattices. This guiding principle not only unifies the recent solutions of the sign problem based on the continuous-time quantum Monte Carlo methods and the Majorana representation, but also suggests new efficient algorithms to simulate physical systems that were previously prohibitive because of the sign problem.
The Development and Comparison of Molecular Dynamics Simulation and Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Chen, Jundong
2018-03-01
Molecular dynamics is an integrated technique that combines physics, mathematics and chemistry. The molecular dynamics method is a computer simulation method and a powerful tool for studying condensed matter systems. The technique not only yields atomic trajectories, but also reveals the microscopic details of atomic motion. By studying the numerical integration algorithms used in molecular dynamics simulation, we can analyze the microstructure, the motion of particles and their relationship to the macroscopic properties of the material, and study the connection between interactions and macroscopic properties more conveniently. Monte Carlo simulation, similar to molecular dynamics, is a tool for studying matter at the level of molecules and particles. In this paper, the theoretical background of computer numerical simulation is introduced, and the specific methods of numerical integration are summarized, including the Verlet, leap-frog and velocity Verlet methods. At the same time, the method and principle of Monte Carlo simulation are introduced. Finally, the similarities and differences between Monte Carlo simulation and molecular dynamics simulation are discussed.
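As a concrete example of the integrators listed above, here is a velocity Verlet sketch for a one-dimensional harmonic oscillator, chosen so that the numerical trajectory can be checked against the exact energy; the parameters are arbitrary.

```python
# Velocity Verlet integration of a 1D harmonic oscillator.
K, M = 1.0, 1.0            # spring constant and mass
DT, NSTEP = 0.05, 2000

def force(x):
    return -K * x

x, v = 1.0, 0.0            # initial displacement and velocity
energy = []
for _ in range(NSTEP):
    a = force(x) / M
    x += v * DT + 0.5 * a * DT ** 2        # position update
    a_new = force(x) / M
    v += 0.5 * (a + a_new) * DT            # velocity update with averaged force
    energy.append(0.5 * M * v ** 2 + 0.5 * K * x ** 2)

print("energy fluctuation over the run:", max(energy) - min(energy))
print("exact total energy             :", 0.5 * K * 1.0 ** 2)
```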
Kawrakow, I
2000-03-01
In this report the condensed history Monte Carlo simulation of electron transport and its application to the calculation of ion chamber response is discussed. It is shown that the strong step-size dependencies and lack of convergence to the correct answer previously observed are the combined effect of the following artifacts caused by the EGS4/PRESTA implementation of the condensed history technique: dose underprediction due to PRESTA's pathlength correction and lateral correlation algorithm; dose overprediction due to the boundary crossing algorithm; dose overprediction due to the breakdown of the fictitious cross section method for sampling distances between discrete interactions; and the inaccurate evaluation of energy-dependent quantities. These artifacts are now understood quantitatively and analytical expressions for their effect are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow one to formulate a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. The tuning of the accuracy (named 'stochastic resolution' in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing a constant-rate nucleation coupled to a simultaneous coagulation in 1) the free-molecular regime or 2) the continuum regime are simulated for this purpose.
Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.
Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard
2012-06-07
We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.
A Comparison of Experimental EPMA Data and Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Carpenter, P. K.
2004-01-01
Monte Carlo (MC) modeling shows excellent prospects for simulating electron scattering and x-ray emission from complex geometries, and can be compared to experimental measurements using electron-probe microanalysis (EPMA) and phi(rho z) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potential and instrument take-off angles, represent a formal microanalysis data set that has been used to develop phi(rho z) correction algorithms. The accuracy of MC calculations obtained using the NIST, WinCasino, WinXray, and Penelope MC packages will be evaluated relative to these experimental data. There is additional information contained in the extended abstract.
Calculated X-ray Intensities Using Monte Carlo Algorithms: A Comparison to Experimental EPMA Data
NASA Technical Reports Server (NTRS)
Carpenter, P. K.
2005-01-01
Monte Carlo (MC) modeling has been used extensively to simulate electron scattering and x-ray emission from complex geometries. Here are presented comparisons between MC results and experimental electron-probe microanalysis (EPMA) measurements as well as phi(rhoz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potential and instrument take-off angles, represent a formal microanalysis data set that has been widely used to develop phi(rhoz) correction algorithms. X-ray intensity data produced by MC simulations represent an independent test of both experimental and phi(rhoz) correction algorithms. The alpha-factor method has previously been used to evaluate systematic errors in the analysis of semiconductor and silicate minerals, and is used here to compare the accuracy of experimental and MC-calculated x-ray data. X-ray intensities calculated by MC are used to generate alpha-factors using the certified compositions in the CuAu binary relative to pure Cu and Au standards. MC simulations are obtained using the NIST, WinCasino, and WinXray algorithms; derived x-ray intensities have a built-in atomic number correction, and are further corrected for absorption and characteristic fluorescence using the PAP phi(rhoz) correction algorithm. The Penelope code additionally simulates both characteristic and continuum x-ray fluorescence and thus requires no further correction for use in calculating alpha-factors.
Annealed Importance Sampling Reversible Jump MCMC algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios; Andrieu, Christophe
2013-03-20
It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
NASA Astrophysics Data System (ADS)
Alexander, Andrew William
Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP), provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality called energy and intensity modulated electron radiotherapy (MERT) is a promising modality, which has the fundamental capabilities to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and optimization algorithms are demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma cancer site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron or mixed beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.
NASA Astrophysics Data System (ADS)
Clowes, P.; Mccallum, S.; Welch, A.
2006-10-01
We are currently developing a multilayer avalanche photodiode (APD)-based detector for use in positron emission tomography (PET), which utilizes thin continuous crystals. In this paper, we developed a Monte Carlo-based simulation to aid in the design of such detectors. We measured the performance of a detector comprising a single thin continuous crystal (3.1 mm × 9.5 mm × 9.5 mm) of lutetium yttrium ortho-silicate (LYSO) and an APD array of (4×4) elements; each element is 1.6 mm² and on a 2.3 mm pitch. We showed that a spatial resolution of better than 2.12 mm is achievable throughout the crystal provided that we adopt a Statistics Based Positioning (SBP) algorithm. We then used Monte Carlo simulation to model the behavior of the detector. The accuracy of the Monte Carlo simulation was verified by comparing measured and simulated parent datasets (PDS) for the SBP algorithm. These datasets consisted of data for point sources at 49 positions uniformly distributed over the detector area. We also calculated the noise in the detector circuit and verified this value by measurement. The noise value was included in the simulation. We show that the performance of the simulation closely matches the measured performance. The simulations were extended to investigate the effect of different noise levels on positioning accuracy. This paper showed that if modest improvements could be made in the circuit noise then positioning accuracy would be greatly improved. In summary, we have developed a model that can be used to simulate the performance of a variety of APD-based continuous crystal PET detectors.
Visual improvement for bad handwriting based on Monte-Carlo method
NASA Astrophysics Data System (ADS)
Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua
2014-03-01
A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper to enhance the visual appearance of bad handwriting. The improvement process uses a well-designed typeface to optimize the bad handwriting image. In this process, a series of linear image-transformation operators is defined to transform the typeface image so that it approaches the handwriting image, and the specific parameters of these linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual effect of handwriting images while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has great potential for application in tablet computers and the mobile Internet to improve the handwriting user experience.
An efficient Cellular Potts Model algorithm that forbids cell fragmentation
NASA Astrophysics Data System (ADS)
Durand, Marc; Guesnet, Etienne
2016-11-01
The Cellular Potts Model (CPM) is a lattice-based modeling technique which is widely used for simulating cellular patterns such as foams or biological tissues. Despite its realism and generality, the standard Monte Carlo algorithm used in the scientific literature to evolve this model preserves cell connectivity only over a limited range of simulation temperatures. We present a new algorithm in which cell fragmentation is forbidden at all simulation temperatures. This significantly enhances the realism of the simulated patterns. It also increases the computational efficiency compared with the standard CPM algorithm, even at the same simulation temperature, thanks to the time saved by not attempting unrealistic moves. Moreover, our algorithm restores the detailed balance equation, ensuring that the long-term stage is independent of the chosen acceptance rate and of the chosen path in the temperature space.
Comparing multiple turbulence restoration algorithms performance on noisy anisoplanatic imagery
NASA Astrophysics Data System (ADS)
Rucci, Michael A.; Hardie, Russell C.; Dapore, Alexander J.
2017-05-01
In this paper, we compare the performance of multiple turbulence mitigation algorithms to restore imagery degraded by atmospheric turbulence and camera noise. In order to quantify and compare algorithm performance, imaging scenes were simulated by applying noise and varying levels of turbulence. For the simulation, a Monte-Carlo wave optics approach is used to simulate the spatially and temporally varying turbulence in an image sequence. A Poisson-Gaussian noise mixture model is then used to add noise to the observed turbulence image set. These degraded image sets are processed with three separate restoration algorithms: Lucky Look imaging, bispectral speckle imaging, and a block matching method with restoration filter. These algorithms were chosen because they incorporate different approaches and processing techniques. The results quantitatively show how well the algorithms are able to restore the simulated degraded imagery.
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
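One of the basic building blocks described above, generating random positions from an analytical density, can be sketched with simple rejection sampling; the exponential-disc density and box size below are hypothetical examples, not actual SKIRT components.

```python
# Rejection sampling of random positions from an analytic 3D density in a box.
import numpy as np

rng = np.random.default_rng(7)

R_D, Z_D = 1.0, 0.2         # disc scale length and scale height (arbitrary units)
BOX = 5.0                   # half-size of the sampling box

def density(x, y, z):
    r = np.hypot(x, y)
    return np.exp(-r / R_D) * np.exp(-np.abs(z) / Z_D)

def sample_positions(n, batch=500_000):
    rho_max = density(0.0, 0.0, 0.0)       # density peaks at the centre
    out = np.empty((0, 3))
    while out.shape[0] < n:
        cand = rng.uniform(-BOX, BOX, size=(batch, 3))
        rho = density(cand[:, 0], cand[:, 1], cand[:, 2])
        out = np.vstack([out, cand[rng.random(batch) * rho_max < rho]])
    return out[:n]

pos = sample_positions(10_000)
print("mean cylindrical radius        :", np.hypot(pos[:, 0], pos[:, 1]).mean())
print("expected ~2*R_D (untruncated)  :", 2.0 * R_D)
```

Rejection sampling like this is the generic fallback; the decorator-based design discussed above exists precisely so that more efficient, component-specific generators can replace it where analytic inversions are available.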
Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.
Yuan, J; Moses, G A; McKenty, P W
2005-10-01
A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight line approximation is used to follow propagation of "Monte Carlo particles" which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal, severely distorted mesh cells, particle relocation on the moving mesh and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show that the Monte Carlo transport method predicts earlier ignition than the diffusion method and generates a higher hot spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.
An Atmospheric Guidance Algorithm Testbed for the Mars Surveyor Program 2001 Orbiter and Lander
NASA Technical Reports Server (NTRS)
Striepe, Scott A.; Queen, Eric M.; Powell, Richard W.; Braun, Robert D.; Cheatwood, F. McNeil; Aguirre, John T.; Sachi, Laura A.; Lyons, Daniel T.
1998-01-01
An Atmospheric Flight Team was formed by the Mars Surveyor Program '01 mission office to develop aerocapture and precision landing testbed simulations and candidate guidance algorithms. Three- and six-degree-of-freedom Mars atmospheric flight simulations have been developed for testing, evaluation, and analysis of candidate guidance algorithms for the Mars Surveyor Program 2001 Orbiter and Lander. These simulations are built around the Program to Optimize Simulated Trajectories. Subroutines were supplied by Atmospheric Flight Team members for modeling the Mars atmosphere, spacecraft control system, aeroshell aerodynamic characteristics, and other Mars 2001 mission specific models. This paper describes these models and their perturbations applied during Monte Carlo analyses to develop, test, and characterize candidate guidance algorithms.
Virtual Network Embedding via Monte Carlo Tree Search.
Haeri, Soroush; Trajkovic, Ljiljana
2018-02-01
Network virtualization helps overcome shortcomings of the current Internet architecture. The virtualized network architecture enables coexistence of multiple virtual networks (VNs) on an existing physical infrastructure. The VN embedding (VNE) problem, which deals with the embedding of VN components onto a physical network, is known to be NP-hard. In this paper, we propose two VNE algorithms: MaVEn-M and MaVEn-S. MaVEn-M employs the multicommodity flow algorithm for virtual link mapping while MaVEn-S uses the shortest-path algorithm. They formalize the virtual node mapping problem by using the Markov decision process (MDP) framework and devise action policies (node mappings) for the proposed MDP using the Monte Carlo tree search algorithm. Service providers may adjust the execution time of the MaVEn algorithms based on the traffic load of VN requests. The objective of the algorithms is to maximize the profit of infrastructure providers. We develop a discrete event VNE simulator to implement and evaluate the performance of MaVEn-M, MaVEn-S, and several recently proposed VNE algorithms. We introduce profitability as a new performance metric that captures both the acceptance ratio and the revenue-to-cost ratio. Simulation results show that the proposed algorithms find more profitable solutions than the existing algorithms. Given additional computation time, they further improve embedding solutions.
Self-learning Monte Carlo method and cumulative update in fermion systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Junwei; Shen, Huitao; Qi, Yang
2017-06-07
In this study, we develop the self-learning Monte Carlo (SLMC) method, a general-purpose numerical method recently introduced to simulate many-body systems, for studying interacting fermion systems. Our method uses a highly efficient update algorithm, which we design and dub “cumulative update”, to generate new candidate configurations in the Markov chain based on a self-learned bosonic effective model. From a general analysis and a numerical study of the double exchange model as an example, we find that the SLMC with cumulative update drastically reduces the computational cost of the simulation, while remaining statistically exact. Remarkably, its computational complexity is far less than that of the conventional algorithm with local updates.
Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems
NASA Astrophysics Data System (ADS)
Nieciąg, Halina
2015-10-01
Software is used to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses a method for conducting validation studies of the fragment of software that calculates the values of measurands. Due to the number and nature of the variables affecting the coordinate measurement results and the complex character and multi-dimensionality of measurands, the study used the Monte Carlo method of numerical simulation. The article presents an attempt to improve the results obtained by classic Monte Carlo tools. The LHS (Latin Hypercube Sampling) algorithm was implemented as an alternative to the simple sampling scheme of the classic algorithm.
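A minimal Latin Hypercube Sampling routine in the spirit of the alternative discussed above looks as follows; the sample size and dimensionality are arbitrary, and the mapping to physical input quantities is left to the user.

```python
# Latin Hypercube Sampling: stratify each axis into n bins and permute the bins,
# so every one-dimensional margin is covered exactly once.
import numpy as np

def latin_hypercube(n, d, rng):
    """Return an (n, d) array of points in [0, 1)^d with LHS stratification."""
    u = rng.random((n, d))                                 # jitter inside each bin
    bins = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (bins + u) / n

rng = np.random.default_rng(8)
pts = latin_hypercube(20, 2, rng)

# every one of the 20 bins along each axis is hit exactly once
print(np.sort((pts[:, 0] * 20).astype(int)))
print(np.sort((pts[:, 1] * 20).astype(int)))
```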
A highly optimized vectorized code for Monte Carlo simulations of SU(3) lattice gauge theories
NASA Technical Reports Server (NTRS)
Barkai, D.; Moriarty, K. J. M.; Rebbi, C.
1984-01-01
New methods are introduced for improving the performance of the vectorized Monte Carlo SU(3) lattice gauge theory algorithm using the CDC CYBER 205. Structure, algorithm and programming considerations are discussed. The performance achieved for a 16^4 lattice on a 2-pipe system may be phrased in terms of the link update time or overall MFLOPS rates. For 32-bit arithmetic, it is 36.3 microseconds/link for 8 hits per iteration (40.9 microseconds for 10 hits) or 101.5 MFLOPS.
Exploring cluster Monte Carlo updates with Boltzmann machines
NASA Astrophysics Data System (ADS)
Wang, Lei
2017-11-01
Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
Poster — Thur Eve — 14: Improving Tissue Segmentation for Monte Carlo Dose Calculation using DECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Salvio, A.; Bedwani, S.; Carrier, J-F.
2014-08-15
Purpose: To improve Monte Carlo dose calculation accuracy through a new tissue segmentation technique with dual energy CT (DECT). Methods: Electron density (ED) and effective atomic number (EAN) can be extracted directly from DECT data with a stoichiometric calibration method. Images are acquired with Monte Carlo CT projections using the user code egs-cbct and reconstructed using an FDK backprojection algorithm. Calibration is performed using projections of a numerical RMI phantom. A weighted parameter algorithm then uses both EAN and ED to assign materials to voxels from DECT simulated images. This new method is compared to a standard tissue characterization from single energy CT (SECT) data using a segmented calibrated Hounsfield unit (HU) to ED curve. Both methods are compared to the reference numerical head phantom. Monte Carlo simulations on uniform phantoms of different tissues using dosxyz-nrc show discrepancies in depth-dose distributions. Results: Both SECT and DECT segmentation methods show similar performance assigning soft tissues. Performance is however improved with DECT in regions with higher density, such as bones, where it assigns materials correctly 8% more often than segmentation with SECT, considering the same set of tissues and simulated clinical CT images, i.e. including noise and reconstruction artifacts. Furthermore, Monte Carlo results indicate that kV photon beam depth-dose distributions can double between two tissues of density higher than muscle. Conclusions: A direct acquisition of ED and the added information of EAN with DECT data improves tissue segmentation and increases the accuracy of Monte Carlo dose calculation in kV photon beams.
A Coulomb collision algorithm for weighted particle simulations
NASA Technical Reports Server (NTRS)
Miller, Ronald H.; Combi, Michael R.
1994-01-01
A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If that algorithm is applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
Monte Carlo calculation of large and small-angle electron scattering in air
NASA Astrophysics Data System (ADS)
Cohen, B. I.; Higginson, D. P.; Eng, C. D.; Farmer, W. A.; Friedman, A.; Grote, D. P.; Larson, D. J.
2017-11-01
A Monte Carlo method for angle scattering of electrons in air that accommodates the small-angle multiple scattering and larger-angle single scattering limits is introduced. The algorithm is designed for use in a particle-in-cell simulation of electron transport and electromagnetic wave effects in air. The method is illustrated in example calculations.
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
LLNL Mercury Project Trinity Open Science Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brantley, Patrick; Dawson, Shawn; McKinley, Scott
2016-04-20
The Mercury Monte Carlo particle transport code developed at Lawrence Livermore National Laboratory (LLNL) is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. As a result, a question arises as to the level of convergence of the calculations with Monte Carlo simulation particle count. In the Trinity Open Science calculations, one main focus was to investigate convergence of the relevant simulation quantities with Monte Carlo particle count to assess the current simulation methodology. Both for this application space and for more general applicability, we also investigated the impact of code algorithms on parallel scaling on the Trinity machine as well as the utilization of the Trinity DataWarp burst buffer technology in Mercury via the LLNL Scalable Checkpoint/Restart (SCR) library.
Simulation-Based Model Checking for Nondeterministic Systems and Rare Events
2016-03-24
year, we have investigated AO* search and Monte Carlo Tree Search algorithms to complement and enhance CMU's SMCMDP. 1 Final Report, March 14... tree, so we can use it to find the probability of reachability for a property in PRISM's Probabilistic LTL. By finding the maximum probability of... savings, particularly when handling very large models. 2.3 Monte Carlo Tree Search The Monte Carlo sampling process in SMCMDP can take a long time to
Selected-node stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Duso, Lorenzo; Zechner, Christoph
2018-04-01
Stochastic simulations of biochemical networks are of vital importance for understanding complex dynamics in cells and tissues. However, existing methods to perform such simulations are associated with computational difficulties and addressing those remains a daunting challenge to the present. Here we introduce the selected-node stochastic simulation algorithm (snSSA), which allows us to exclusively simulate an arbitrary, selected subset of molecular species of a possibly large and complex reaction network. The algorithm is based on an analytical elimination of chemical species, thereby avoiding explicit simulation of the associated chemical events. These species are instead described continuously in terms of statistical moments derived from a stochastic filtering equation, resulting in a substantial speedup when compared to Gillespie's stochastic simulation algorithm (SSA). Moreover, we show that statistics obtained via snSSA profit from a variance reduction, which can significantly lower the number of Monte Carlo samples needed to achieve a certain performance. We demonstrate the algorithm using several biological case studies for which the simulation time could be reduced by orders of magnitude.
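For reference, the baseline that snSSA accelerates, Gillespie's SSA, can be sketched for a toy birth-death process as follows; the rate constants are arbitrary, and the filtering-based marginalization of unselected species is not shown.

```python
# Gillespie stochastic simulation algorithm (SSA) for a birth-death process.
import numpy as np

rng = np.random.default_rng(9)

K_BIRTH, K_DEATH = 10.0, 0.5          # production rate, per-molecule decay rate
T_END = 20.0

def ssa_final_count():
    t, x = 0.0, 0
    while True:
        rates = np.array([K_BIRTH, K_DEATH * x])
        total = rates.sum()
        t += rng.exponential(1.0 / total)      # waiting time to the next reaction
        if t >= T_END:
            return x
        if rng.random() < rates[0] / total:
            x += 1                             # birth
        else:
            x -= 1                             # death

finals = [ssa_final_count() for _ in range(200)]
print("mean copy number at t =", T_END, ":", np.mean(finals))
print("analytical stationary mean      :", K_BIRTH / K_DEATH)
```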
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogenous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
USDA-ARS?s Scientific Manuscript database
In this research, the inverse algorithm for estimating optical properties of food and biological materials from spatially-resolved diffuse reflectance was optimized in terms of data smoothing, normalization and spatial region of reflectance profile for curve fitting. Monte Carlo simulation was used ...
Computer Simulation of Electron Positron Annihilation Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, y
2003-10-02
With the launching of the Next Linear Collider coming closer and closer, there is a pressing need for physicists to develop a fully-integrated computer simulation of the e⁺e⁻ annihilation process at a center-of-mass energy of 1 TeV. A simulation program acts as the template for future experiments. Either new physics will be discovered, or current theoretical uncertainties will shrink due to more accurate higher-order radiative correction calculations. The existence of an efficient and accurate simulation will help us understand the new data and validate (or veto) some of the theoretical models developed to explain new physics. It should handle well the interfaces between different sectors of physics, e.g., interactions happening at the parton level well above the QCD scale, which are described by perturbative QCD, and interactions happening at a much lower energy scale, which combine partons into hadrons. It should also achieve competitive speed in real time as the complexity of the simulation increases. This thesis contributes some tools that will be useful for the development of such simulation programs. We begin our study with the development of a new Monte Carlo algorithm intended to perform efficiently in selecting weight-1 events when multiple parameter dimensions are strongly correlated. The algorithm first seeks to model the peaks of the distribution by features, adapting these features to the function using the EM algorithm. The representation of the distribution provided by these features is then improved using the VEGAS algorithm for the Monte Carlo integration. The two strategies mesh neatly into an effective multi-channel adaptive representation. We then present a new algorithm for the simulation of parton shower processes in high energy QCD. We want to find an algorithm which is free of negative weights, produces its output as a set of exclusive events, and whose total rate exactly matches the full Feynman amplitude calculation. Our strategy is to create the whole QCD shower as a tree structure generated by a multiple Poisson process. Working with the whole shower allows us to include correlations between gluon emissions from different sources. QCD destructive interference is controlled by the implementation of "angular ordering," as in the HERWIG Monte Carlo program. We discuss methods for systematic improvement of the approach to include higher order QCD effects.
Monte Carlo errors with less errors
NASA Astrophysics Data System (ADS)
Wolff, Ulli; Alpha Collaboration
2004-01-01
We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more reliable error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable for benchmarking the efficiency of simulation algorithms with regard to specific observables of interest. A Matlab code that implements the method is offered for download. It can also combine independent runs (replica), allowing one to judge their consistency.
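To illustrate the idea (though not the reference Matlab implementation), here is a minimal Python sketch that estimates the integrated autocorrelation time with a simple self-consistent summation window and derives the error of the mean; the AR(1) test series and the window factor are illustrative choices.

    import numpy as np

    def tau_int_error(obs, window_factor=6.0):
        """Integrated autocorrelation time and statistical error of the mean for a
        single Monte Carlo time series, using a simple self-consistent summation
        window W ~ window_factor * tau. The reference method also handles replica
        combination and bias corrections, which are omitted here."""
        obs = np.asarray(obs, dtype=float)
        n = len(obs)
        d = obs - obs.mean()
        f = np.fft.rfft(d, 2 * n)
        acf = np.fft.irfft(f * np.conjugate(f))[:n].real   # autocovariance
        rho = acf / acf[0]                                  # normalized ACF
        tau = 0.5
        for w in range(1, n):
            tau = 0.5 + rho[1:w + 1].sum()
            if w >= window_factor * tau:                    # self-consistent window
                break
        err = np.sqrt(2.0 * tau / n) * d.std(ddof=1)
        return tau, err

    # AR(1) test series with known tau_int = (1 + a) / (2 * (1 - a)) = 9.5
    rng = np.random.default_rng(1)
    a, x, series = 0.9, 0.0, []
    for _ in range(100000):
        x = a * x + rng.normal()
        series.append(x)
    print(tau_int_error(series))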
Analysis of estimation algorithms for CDTI and CAS applications
NASA Technical Reports Server (NTRS)
Goka, T.
1985-01-01
Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed. These are horizontal x and y, range and altitude estimation algorithms. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.
NASA Astrophysics Data System (ADS)
Schwarz, Karsten; Rieger, Heiko
2013-03-01
We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates as it occurs for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single particle diffusion propagator is not known analytically, we present an algorithm that generates efficiently either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.
Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model
NASA Astrophysics Data System (ADS)
Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.
2018-04-01
While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
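As a reminder of how a Wolff cluster update works, here is a minimal Python sketch for the two-dimensional Ising model; the study above uses the three-dimensional model, and 2D is shown only to keep the example short, with lattice size and coupling chosen purely for illustration.

    import numpy as np

    def wolff_update(spins, beta, rng, J=1.0):
        """One Wolff cluster flip on a periodic 2D Ising lattice. Bonds to aligned
        neighbours are activated with probability p = 1 - exp(-2*beta*J) and the
        whole cluster is flipped at once."""
        L = spins.shape[0]
        p_add = 1.0 - np.exp(-2.0 * beta * J)
        i, j = rng.integers(L, size=2)
        seed = spins[i, j]
        stack, cluster = [(i, j)], {(i, j)}
        while stack:
            x, y = stack.pop()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                nx, ny = nx % L, ny % L
                if (nx, ny) not in cluster and spins[nx, ny] == seed and rng.random() < p_add:
                    cluster.add((nx, ny))
                    stack.append((nx, ny))
        for x, y in cluster:
            spins[x, y] = -seed
        return len(cluster)

    rng = np.random.default_rng(0)
    L, beta = 32, 0.4407                       # beta near the 2D critical coupling
    spins = np.where(rng.random((L, L)) < 0.5, 1, -1)
    for _ in range(1000):
        wolff_update(spins, beta, rng)
    print(abs(spins.mean()))                   # magnetization per spin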
Coupled particle-in-cell and Monte Carlo transport modeling of intense radiographic sources
NASA Astrophysics Data System (ADS)
Rose, D. V.; Welch, D. R.; Oliver, B. V.; Clark, R. E.; Johnson, D. L.; Maenchen, J. E.; Menge, P. R.; Olson, C. L.; Rovang, D. C.
2002-03-01
Dose-rate calculations for intense electron-beam diodes using particle-in-cell (PIC) simulations along with Monte Carlo electron/photon transport calculations are presented. The electromagnetic PIC simulations are used to model the dynamic operation of the rod-pinch and immersed-B diodes. These simulations include algorithms for tracking electron scattering and energy loss in dense materials. The positions and momenta of photons created in these materials are recorded and separate Monte Carlo calculations are used to transport the photons to determine the dose in far-field detectors. These combined calculations are used to determine radiographer equations (dose scaling as a function of diode current and voltage) that are compared directly with measured dose rates obtained on the SABRE generator at Sandia National Laboratories.
North Alabama Lightning Mapping Array (LMA): VHF Source Retrieval Algorithm and Error Analyses
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J.; Bailey, J.; Krider, E. P.; Bateman, M. G.; Boccippio, D.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA Marshall Space Flight Center (MSFC) and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix Theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results. However, for many source locations, the Curvature Matrix Theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen
2012-01-01
In this study the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy of two methods available in the treatment planning system, i.e., collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom. The dose difference (DD) between experimental results and the two calculation methods was obtained. Results indicate a maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between the two methods, and the CCC algorithm shows more accurate depth dose curves in tissue heterogeneities. Simulation results show accurate dose estimation by MCNP4C in the soft tissue region of the phantom and also better results than the ETAR method in bone and lung tissues. PMID:22973081
Monte Carlo calculation of large and small-angle electron scattering in air
Cohen, B. I.; Higginson, D. P.; Eng, C. D.; ...
2017-08-12
A Monte Carlo method for angle scattering of electrons in air that accommodates the small-angle multiple scattering and larger-angle single scattering limits is introduced. In this work, the algorithm is designed for use in a particle-in-cell simulation of electron transport and electromagnetic wave effects in air. The method is illustrated in example calculations.
Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; ...
2017-06-09
Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of “KMC stiffness” (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events, and increasing the probability of slow process events -- allowing rate limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. Lastly, as shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
NASA Astrophysics Data System (ADS)
Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Savara, Aditya
2017-10-01
Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of "KMC stiffness" (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events, and increasing the probability of slow process events-allowing rate limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. As shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
NASA Astrophysics Data System (ADS)
Komura, Yukihiro; Okabe, Yutaka
2016-04-01
We study the Ising models on the Penrose lattice and the dual Penrose lattice by means of the high-precision Monte Carlo simulation. Simulating systems up to the total system size N = 20633239, we estimate the critical temperatures on those lattices with high accuracy. For high-speed calculation, we use the generalized method of the single-GPU-based computation for the Swendsen-Wang multi-cluster algorithm of Monte Carlo simulation. As a result, we estimate the critical temperature on the Penrose lattice as Tc/J = 2.39781 ± 0.00005 and that of the dual Penrose lattice as Tc*/J = 2.14987 ± 0.00005. Moreover, we definitely confirm the duality relation between the critical temperatures on the dual pair of quasilattices with a high degree of accuracy, sinh (2J/Tc)sinh (2J/Tc*) = 1.00000 ± 0.00004.
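As a quick numeric cross-check of the quoted duality relation, a few lines of Python reproduce the product sinh(2J/Tc)·sinh(2J/Tc*) from the critical temperatures reported above (with J set to 1):

    import numpy as np

    Tc, Tc_dual = 2.39781, 2.14987            # critical temperatures quoted above (J = 1)
    product = np.sinh(2.0 / Tc) * np.sinh(2.0 / Tc_dual)
    print(product)                            # approximately 1.0000, as the duality relation requires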
Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling
NASA Astrophysics Data System (ADS)
Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.
2018-02-01
A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
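To make the MLMC idea concrete, the sketch below (Python) estimates a mean via the telescoping sum of coarse/fine corrections for a toy Ornstein-Uhlenbeck SDE integrated with Euler-Maruyama; the SDE, refinement factor, and per-level sample counts are illustrative stand-ins and are not taken from the paper.

    import numpy as np

    def mlmc_level(level, n_samples, rng, T=1.0, x0=0.0, M=2):
        """Samples of the MLMC correction Y_l for a toy Ornstein-Uhlenbeck SDE
        dX = -X dt + dW integrated with Euler-Maruyama. The quantity of interest
        is simply X(T); level l uses M**(l+1) fine steps and, for l > 0, a coarse
        path with M**l steps driven by the same Brownian increments."""
        nf = M ** (level + 1)
        dtf = T / nf
        ys = np.empty(n_samples)
        for s in range(n_samples):
            dW = rng.normal(0.0, np.sqrt(dtf), nf)
            xf = x0
            for k in range(nf):
                xf += -xf * dtf + dW[k]
            if level == 0:
                ys[s] = xf
                continue
            xc, dtc = x0, M * dtf
            for k in range(0, nf, M):          # coarse path reuses the same noise
                xc += -xc * dtc + dW[k:k + M].sum()
            ys[s] = xf - xc                    # correction term P_fine - P_coarse
        return ys

    # Telescoping sum over levels 0..3 estimates E[X(T)] at the finest resolution
    rng = np.random.default_rng(0)
    estimate = sum(mlmc_level(l, n, rng).mean()
                   for l, n in enumerate([4000, 2000, 1000, 500]))
    print(estimate)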
Self-learning Monte Carlo with deep neural networks
NASA Astrophysics Data System (ADS)
Shen, Huitao; Liu, Junwei; Fu, Liang
2018-05-01
The self-learning Monte Carlo (SLMC) method is a general algorithm to speed up MC simulations. Its efficiency has been demonstrated in various systems by introducing an effective model to propose global moves in the configuration space. In this paper, we show that deep neural networks can be naturally incorporated into SLMC, and without any prior knowledge can learn the original model accurately and efficiently. Demonstrated in quantum impurity models, we reduce the complexity for a local update from O(β²) in the Hirsch-Fye algorithm to O(β ln β), which is a significant speedup especially for systems at low temperatures.
Monte Carlo sampling in diffusive dynamical systems
NASA Astrophysics Data System (ADS)
Tapias, Diego; Sanders, David P.; Altmann, Eduardo G.
2018-05-01
We introduce a Monte Carlo algorithm to efficiently compute transport properties of chaotic dynamical systems. Our method exploits the importance sampling technique that favors trajectories in the tail of the distribution of displacements, where deviations from a diffusive process are most prominent. We search for initial conditions using a proposal that correlates states in the Markov chain constructed via a Metropolis-Hastings algorithm. We show that our method outperforms the direct sampling method and also Metropolis-Hastings methods with alternative proposals. We test our general method through numerical simulations in 1D (box-map) and 2D (Lorentz gas) systems.
Multicanonical hybrid Monte Carlo algorithm: Boosting simulations of compact QED
NASA Astrophysics Data System (ADS)
Arnold, G.; Schilling, K.; Lippert, Th.
1999-03-01
We demonstrate that substantial progress can be achieved in the study of the phase structure of four-dimensional compact QED by a joint use of hybrid Monte Carlo and multicanonical algorithms through an efficient parallel implementation. This is borne out by the observation of considerable speedup of tunnelling between the metastable states, close to the phase transition, on the Wilson line. We estimate that the creation of adequate samples (with order 100 flip-flops) becomes a matter of half a year's run time at 2 Gflops sustained performance for lattices of size up to 24⁴.
Using Computer-Based "Experiments" in the Analysis of Chemical Reaction Equilibria
ERIC Educational Resources Information Center
Li, Zhao; Corti, David S.
2018-01-01
The application of the Reaction Monte Carlo (RxMC) algorithm to standard textbook problems in chemical reaction equilibria is discussed. The RxMC method is a molecular simulation algorithm for studying the equilibrium properties of reactive systems, and therefore provides the opportunity to develop computer-based "experiments" for the…
“Skin-Core-Skin” Structure of Polymer Crystallization Investigated by Multiscale Simulation
Ruan, Chunlei
2018-01-01
“Skin-core-skin” structure is a typical crystal morphology in injection-molded products. Previous numerical works have rarely focused on crystal evolution; rather, they have mostly been based on the prediction of temperature distribution or crystallization kinetics. The aim of this work was to reproduce the “skin-core-skin” structure and investigate the role of external flow and temperature fields on crystal morphology. Therefore, the multiscale algorithm was extended to the simulation of polymer crystallization in a pipe flow. The multiscale algorithm contains two parts: a collocated finite volume method at the macroscopic level and a morphological Monte Carlo method at the microscopic level. The SIMPLE (semi-implicit method for pressure linked equations) algorithm was used to calculate the polymeric model at the macroscopic level, while the Monte Carlo method with a stochastic birth-growth process of spherulites and shish-kebabs was used at the microscopic level. Results show that our algorithm can predict the “skin-core-skin” structure, and that the initial melt temperature and the maximum velocity of melt at the inlet mainly affect the morphology of shish-kebabs. PMID:29659516
Fast and unbiased estimator of the time-dependent Hurst exponent.
Pianese, Augusto; Bianchi, Sergio; Palazzo, Anna Maria
2018-03-01
We combine two existing estimators of the local Hurst exponent to improve both the goodness of fit and the computational speed of the algorithm. An application with simulated time series is implemented, and a Monte Carlo simulation is performed to provide evidence of the improvement.
Vega roll and attitude control system algorithms trade-off study
NASA Astrophysics Data System (ADS)
Paulino, N.; Cuciniello, G.; Cruciani, I.; Corraro, F.; Spallotta, D.; Nebula, F.
2013-12-01
This paper describes the trade-off study for the selection of the most suitable algorithms for the Roll and Attitude Control System (RACS) within the FPS-A program, aimed at developing the new Flight Program Software of VEGA Launcher. Two algorithms were analyzed: Switching Lines (SL) and Quaternion Feedback Regulation. Using a development simulation tool that models two critical flight phases (Long Coasting Phase (LCP) and Payload Release (PLR) Phase), both algorithms were assessed with Monte Carlo batch simulations for both of the phases. The statistical outcomes of the results demonstrate a 100 percent success rate for Quaternion Feedback Regulation, and support the choice of this method.
A brief history of the introduction of generalized ensembles to Markov chain Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Berg, Bernd A.
2017-03-01
The most efficient weights for Markov chain Monte Carlo calculations of physical observables are not necessarily those of the canonical ensemble. Generalized ensembles, which do not exist in nature but can be simulated on computers, often lead to much faster convergence. In particular, they have been used for simulations of first order phase transitions and for simulations of complex systems in which conflicting constraints lead to a rugged free energy landscape. Starting off with the Metropolis algorithm and Hastings' extension, I present a minireview which focuses on the explosive use of generalized ensembles in the early 1990s. Illustrations are given, which range from spin models to peptides.
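The common core of these methods is a Metropolis accept/reject step with an ensemble weight W(E) that need not be the canonical exp(-βE). A minimal Python sketch follows; the double-well energy, step size, and inverse temperature are illustrative, and a multicanonical run would simply swap in a different log_weight function.

    import numpy as np

    def metropolis_step(x, energy, log_weight, rng, step=0.5):
        """One Metropolis update for an arbitrary ensemble weight W(E): accept the
        move with probability min(1, W(E_new)/W(E_old)). log_weight(E) = -beta*E
        recovers the canonical ensemble; a multicanonical run would instead use an
        estimate of -ln(density of states)."""
        x_new = x + rng.uniform(-step, step)
        if np.log(rng.random()) < log_weight(energy(x_new)) - log_weight(energy(x)):
            return x_new
        return x

    energy = lambda x: (x ** 2 - 1.0) ** 2     # toy double-well energy
    beta = 3.0
    log_w = lambda E: -beta * E                # canonical weights
    rng = np.random.default_rng(0)
    x, samples = 0.0, []
    for _ in range(50000):
        x = metropolis_step(x, energy, log_w, rng)
        samples.append(x)
    print(np.mean(samples), np.var(samples))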
Light reflection models for computer graphics.
Greenberg, D P
1989-04-14
During the past 20 years, computer graphic techniques for simulating the reflection of light have progressed so that today images of photorealistic quality can be produced. Early algorithms considered direct lighting only, but global illumination phenomena with indirect lighting, surface interreflections, and shadows can now be modeled with ray tracing, radiosity, and Monte Carlo simulations. This article describes the historical development of computer graphic algorithms for light reflection and pictorially illustrates what will be commonly available in the near future.
System Design under Uncertainty: Evolutionary Optimization of the Gravity Probe-B Spacecraft
NASA Technical Reports Server (NTRS)
Pullen, Samuel P.; Parkinson, Bradford W.
1994-01-01
This paper discusses the application of evolutionary random-search algorithms (Simulated Annealing and Genetic Algorithms) to the problem of spacecraft design under performance uncertainty. Traditionally, spacecraft performance uncertainty has been measured by reliability. Published algorithms for reliability optimization are seldom used in practice because they oversimplify reality. The algorithm developed here uses random-search optimization to allow us to model the problem more realistically. Monte Carlo simulations are used to evaluate the objective function for each trial design solution. These methods have been applied to the Gravity Probe-B (GP-B) spacecraft being developed at Stanford University for launch in 1999. Results of the algorithm developed here for GP-B are shown, and their implications for design optimization by evolutionary algorithms are discussed.
NASA Astrophysics Data System (ADS)
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
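The variance-reduction idea can be illustrated with the simplest possible stand-in: antithetic uniforms pushed through the Poisson inverse CDF to drive negatively correlated tau-leaping paths of a hypothetical birth-death process. This Python sketch is only a simplified illustration of negatively correlated pairs, not the paper's construction, and the rate constants are made up.

    import numpy as np
    from scipy.stats import poisson

    def tau_leap_path(u, x0=10.0, k_birth=5.0, k_death=0.3, tau=0.1):
        """Tau-leaping path of a toy birth-death process driven by the uniforms in
        `u` (shape n_steps x 2); Poisson increments come from the inverse CDF so
        that feeding in antithetic uniforms (u and 1-u) yields negatively
        correlated paths. Rate constants are made up for illustration."""
        u = np.clip(u, 1e-12, 1.0 - 1e-12)
        x = x0
        for ub, ud in u:
            births = poisson.ppf(ub, k_birth * tau)
            deaths = poisson.ppf(ud, k_death * x * tau)
            x = max(x + births - deaths, 0.0)
        return x

    rng = np.random.default_rng(0)
    n_pairs, n_steps = 500, 100
    plain, anti = [], []
    for _ in range(n_pairs):
        u1, u2 = rng.random((n_steps, 2)), rng.random((n_steps, 2))
        plain.append(0.5 * (tau_leap_path(u1) + tau_leap_path(u2)))        # independent pair
        anti.append(0.5 * (tau_leap_path(u1) + tau_leap_path(1.0 - u1)))   # antithetic pair
    print(np.var(plain), np.var(anti))   # the antithetic average typically has lower variance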
Searching for efficient Markov chain Monte Carlo proposal kernels
Yang, Ziheng; Rodríguez, Carlos E.
2013-01-01
Markov chain Monte Carlo (MCMC) or the Metropolis–Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis–Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals. PMID:24218600
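As an illustration of the kernel shape, here is a minimal Python sketch of a Bactrian proposal (a symmetric two-Gaussian mixture with total variance σ², so values near the current value are rarely proposed) inside a plain Metropolis-Hastings chain; the standard-normal target, σ, and m = 0.95 are illustrative choices.

    import numpy as np

    def bactrian_proposal(x, rng, sigma=1.0, m=0.95):
        """Bactrian proposal: a symmetric mixture of two Gaussians centred at
        x +/- m*sigma with total variance sigma**2, so values very close to the
        current point are rarely proposed."""
        sign = 1.0 if rng.random() < 0.5 else -1.0
        return x + sign * m * sigma + rng.normal(0.0, sigma * np.sqrt(1.0 - m * m))

    def mh_chain(logpdf, proposal, n=50000, x0=0.0, seed=0):
        """Metropolis-Hastings with a symmetric proposal (no Hastings correction)."""
        rng = np.random.default_rng(seed)
        x, chain = x0, np.empty(n)
        for i in range(n):
            y = proposal(x, rng)
            if np.log(rng.random()) < logpdf(y) - logpdf(x):
                x = y
            chain[i] = x
        return chain

    logpdf = lambda x: -0.5 * x * x            # standard normal target
    chain = mh_chain(logpdf, bactrian_proposal)
    print(chain.mean(), chain.var())           # should be close to 0 and 1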
Distance between configurations in Markov chain Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Fukuma, Masafumi; Matsumoto, Nobuyuki; Umeda, Naoya
2017-12-01
For a given Markov chain Monte Carlo algorithm we introduce a distance between two configurations that quantifies the difficulty of transition from one configuration to the other. We argue that the distance takes a universal form for the class of algorithms which generate local moves in the configuration space. We explicitly calculate the distance for the Langevin algorithm, and show that it has the desired and expected properties of a distance. We further show that the distance for a multimodal distribution gets dramatically reduced from a large value by the introduction of a tempering method. We also argue that, when the original distribution is highly multimodal with a large number of degenerate vacua, an anti-de Sitter-like geometry naturally emerges in the extended configuration space.
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
NASA Astrophysics Data System (ADS)
Prabhu Verleker, Akshay; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M.
2015-03-01
The purpose of this study is to develop an alternate empirical approach to estimate near-infrared (NIR) photon propagation and quantify optically induced drug release in brain metastasis, without relying on computationally expensive Monte Carlo techniques (the gold standard). Targeted drug delivery with optically induced drug release is a noninvasive means to treat cancers and metastasis. This study is part of a larger project to treat brain metastasis by delivering lapatinib-drug-nanocomplexes and activating NIR-induced drug release. The empirical model was developed using a weighted approach to estimate photon scattering in tissues and calibrated using a GPU-based 3D Monte Carlo. The empirical model was developed and tested against Monte Carlo in optical brain phantoms for pencil beams (width 1 mm) and broad beams (width 10 mm). The empirical algorithm was tested against the Monte Carlo for different albedos along with the diffusion equation and in simulated brain phantoms resembling white matter (μs' = 8.25 mm⁻¹, μa = 0.005 mm⁻¹) and gray matter (μs' = 2.45 mm⁻¹, μa = 0.035 mm⁻¹) at a wavelength of 800 nm. The goodness of fit between the two models was determined using the coefficient of determination (R-squared analysis). Preliminary results show that the empirical algorithm matches Monte Carlo simulated fluence over a wide range of albedo (0.7 to 0.99), while the diffusion equation fails for lower albedo. The photon fluence generated by the empirical code matched the Monte Carlo in homogeneous phantoms (R² = 0.99). While the GPU-based Monte Carlo achieved 300X acceleration compared to earlier CPU-based models, the empirical code is 700X faster than the Monte Carlo for a typical super-Gaussian laser beam.
An Introduction to Computational Physics
NASA Astrophysics Data System (ADS)
Pang, Tao
2010-07-01
Preface to first edition; Preface; Acknowledgements; 1. Introduction; 2. Approximation of a function; 3. Numerical calculus; 4. Ordinary differential equations; 5. Numerical methods for matrices; 6. Spectral analysis; 7. Partial differential equations; 8. Molecular dynamics simulations; 9. Modeling continuous systems; 10. Monte Carlo simulations; 11. Genetic algorithm and programming; 12. Numerical renormalization; References; Index.
Waller, Niels G
2016-01-01
For a fixed set of standardized regression coefficients and a fixed coefficient of determination (R-squared), an infinite number of predictor correlation matrices will satisfy the implied quadratic form. I call such matrices fungible correlation matrices. In this article, I describe an algorithm for generating positive definite (PD), positive semidefinite (PSD), or indefinite (ID) fungible correlation matrices that have a random or fixed smallest eigenvalue. The underlying equations of this algorithm are reviewed from both algebraic and geometric perspectives. Two simulation studies illustrate that fungible correlation matrices can be profitably used in Monte Carlo research. The first study uses PD fungible correlation matrices to compare penalized regression algorithms. The second study uses ID fungible correlation matrices to compare matrix-smoothing algorithms. R code for generating fungible correlation matrices is presented in the supplemental materials.
A strategy for quantum algorithm design assisted by machine learning
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung
2014-07-01
We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a ‘quantum student’ is being taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable for designing quantum oracle-based algorithms. We chose, as a case study, an oracle decision problem, called a Deutsch-Jozsa problem. We showed by using Monte Carlo simulations that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine learning-based method.
CYBER 200 Applications Seminar
NASA Technical Reports Server (NTRS)
Gary, J. P. (Compiler)
1984-01-01
Applications suited for the CYBER 200 digital computer are discussed. Various areas of application, including meteorology, algorithms, fluid dynamics, Monte Carlo methods, petroleum, electronic circuit simulation, biochemistry, lattice gauge theory, economics, and ray tracing, are discussed.
Efficient Monte Carlo Methods for Biomolecular Simulations.
NASA Astrophysics Data System (ADS)
Bouzida, Djamal
A new approach to efficient Monte Carlo simulations of biological molecules is presented. By relaxing the usual restriction to Markov processes, we are able to optimize performance while dealing directly with the inhomogeneity and anisotropy inherent in these systems. The advantage of this approach is that we can introduce a wide variety of Monte Carlo moves to deal with complicated motions of the molecule, while maintaining full optimization at every step. This enables the use of a variety of collective rotational moves that relax long-wavelength modes. We were able to show by explicit simulations that the resulting algorithms substantially increase the speed of the simulation while reproducing the correct equilibrium behavior. This approach is particularly intended for simulations of macromolecules, although we expect it to be useful in other situations. The dynamic optimization of the new Monte Carlo methods makes them very suitable for simulated annealing experiments on all systems whose state space is continuous in general, and to the protein folding problem in particular. We introduce an efficient annealing schedule using preferential bias moves. Our simulated annealing experiments yield structures whose free energies were lower than the equilibrated X-ray structure, which leads us to believe that the empirical energy function used does not fully represent the interatomic interactions. Furthermore, we believe that the largest discrepancies involve the solvent effects in particular.
NASA Astrophysics Data System (ADS)
Akushevich, I.; Filoti, O. F.; Ilyichev, A.; Shumeiko, N.
2012-07-01
The structure and algorithms of the Monte Carlo generator ELRADGEN 2.0 designed to simulate radiative events in polarized ep-scattering are presented. The full set of analytical expressions for the QED radiative corrections is presented and discussed in detail. Algorithmic improvements implemented to provide faster simulation of hard real photon events are described. Numerical tests show high quality of generation of photonic variables and radiatively corrected cross section. The comparison of the elastic radiative tail simulated within the kinematical conditions of the BLAST experiment at MIT BATES shows a good agreement with experimental data. Catalogue identifier: AELO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1299 No. of bytes in distributed program, including test data, etc.: 11 348 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: All Operating system: Any RAM: 1 MB Classification: 11.2, 11.4 Nature of problem: Simulation of radiative events in polarized ep-scattering. Solution method: Monte Carlo simulation according to the distributions of the real photon kinematic variables that are calculated by the covariant method of QED radiative correction estimation. The approach provides rather fast and accurate generation. Running time: The simulation of 10⁸ radiative events for itest:=1 takes up to 52 seconds on Pentium(R) Dual-Core 2.00 GHz processor.
Souris, Kevin; Lee, John Aldo; Sterpin, Edmond
2016-04-01
Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with gate/geant4 for various geometries show deviations within 2%/1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10⁷ primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
Error Analyses of the North Alabama Lightning Mapping Array (LMA)
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
Dose specification for radiation therapy: dose to water or dose to medium?
NASA Astrophysics Data System (ADS)
Ma, C.-M.; Li, Jinsheng
2011-05-01
The Monte Carlo method enables accurate dose calculation for radiation therapy treatment planning and has been implemented in some commercial treatment planning systems. Unlike conventional dose calculation algorithms, which provide patient dose information in terms of dose to water with variable electron density, the Monte Carlo method calculates the energy deposition in different media and expresses dose to a medium. This paper discusses the differences between dose calculated using water with different electron densities and dose calculated for different biological media, as well as the clinical issues in dose specification, including dose prescription and plan evaluation using dose to water and dose to medium. We demonstrate that conventional photon dose calculation algorithms compute doses similar to those simulated by Monte Carlo using water with different electron densities, which are close (<4% differences) to doses to media but significantly different (up to 11%) from doses to water converted from doses to media following American Association of Physicists in Medicine (AAPM) Task Group 105 recommendations. Our results suggest that, for consistency with previous radiation therapy experience, Monte Carlo photon algorithms should report dose to medium for radiotherapy dose prescription, treatment plan evaluation, and treatment outcome analysis.
Macroion solutions in the cell model studied by field theory and Monte Carlo simulations.
Lue, Leo; Linse, Per
2011-12-14
Aqueous solutions of charged spherical macroions with variable dielectric permittivity and their associated counterions are examined within the cell model using a field theory and Monte Carlo simulations. The field theory is based on separation of fields into short- and long-wavelength terms, which are subjected to different statistical-mechanical treatments. The simulations were performed by using a new, accurate, and fast algorithm for numerical evaluation of the electrostatic polarization interaction. The field theory provides counterion distributions outside a macroion in good agreement with the simulation results over the full range from weak to strong electrostatic coupling. A low-dielectric macroion leads to a displacement of the counterions away from the macroion.
García-Pareja, S; Galán, P; Manzano, F; Brualla, L; Lallena, A M
2010-07-01
In this work, the authors describe an approach which has been developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. The new approach considers the following techniques: Russian roulette, splitting, a modified version of the directional bremsstrahlung splitting, and the azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within approximately 3%/0.3 mm for the central axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed with other approaches common in this field. The new approach is competitive with those previously used for this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude speedups can be obtained on both multi-core CPUs and on GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
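For reference, the rank-1 Sherman-Morrison step that the rank-k delayed scheme generalizes can be written in a few lines of Python: the determinant ratio for a single changed row is an O(N) dot product, and the inverse is patched in O(N²); the matrix sizes here are tiny and purely illustrative.

    import numpy as np

    def det_ratio(A_inv, k, new_row):
        """det(A')/det(A) when only row k of A is replaced by new_row, using the
        current inverse: an O(N) dot product instead of an O(N^3) determinant."""
        return new_row @ A_inv[:, k]

    def sherman_morrison_row_update(A_inv, k, new_row, old_row):
        """Return the inverse of A after row k changed from old_row to new_row,
        patched in O(N^2) from the current inverse."""
        delta = new_row - old_row
        ratio = 1.0 + delta @ A_inv[:, k]
        return A_inv - np.outer(A_inv[:, k], delta @ A_inv) / ratio

    rng = np.random.default_rng(0)
    N, k = 6, 2
    A = rng.normal(size=(N, N))
    A_inv = np.linalg.inv(A)
    new_row = rng.normal(size=N)

    A_new = A.copy()
    A_new[k] = new_row
    print(det_ratio(A_inv, k, new_row), np.linalg.det(A_new) / np.linalg.det(A))  # agree
    A_inv_new = sherman_morrison_row_update(A_inv, k, new_row, A[k])
    print(np.allclose(A_inv_new, np.linalg.inv(A_new)))                           # True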
Wang, Chao; Guo, Xiao-Jing; Xu, Jin-Fang; Wu, Cheng; Sun, Ya-Lin; Ye, Xiao-Fei; Qian, Wei; Ma, Xiu-Qiang; Du, Wen-Min; He, Jia
2012-01-01
The detection of signals of adverse drug events (ADEs) has increased because of the use of data mining algorithms in spontaneous reporting systems (SRSs). However, different data mining algorithms have different traits and conditions for application. The objective of our study was to explore the application of association rule (AR) mining in ADE signal detection and to compare its performance with that of other algorithms. Monte Carlo simulation was applied to generate drug-ADE reports randomly according to the characteristics of SRS datasets. One thousand simulated datasets were mined by AR and other algorithms. On average, 108,337 reports were generated by the Monte Carlo simulation. Based on the predefined criterion that 10% of the drug-ADE combinations were true signals, with RR equal to 10, 4.9, 1.5, and 1.2, AR detected, on average, 284 suspected associations with a minimum support of 3 and a minimum lift of 1.2. The area under the receiver operating characteristic (ROC) curve for AR was 0.788, which was equivalent to that shown for other algorithms. Additionally, AR was applied to reports submitted to the Shanghai SRS in 2009. Five hundred seventy combinations were detected using AR from 24,297 SRS reports, and they were compared with recognized ADEs identified by clinical experts and various other sources. AR appears to be an effective method for ADE signal detection, both in simulated and real SRS datasets. The limitations of this method exposed in our study, i.e., a non-uniform threshold setting and redundant rules, require further research.
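To make the support and lift thresholds above concrete, the following Python sketch screens drug-ADE pairs in a toy reporting table; the column names and the example data are entirely hypothetical.

    import pandas as pd

    def drug_ade_signals(reports, min_support=3, min_lift=1.2):
        """Screen drug -> ADE pairs with the count (support) and lift thresholds
        quoted above. `reports` has one row per spontaneous report with columns
        'drug' and 'ade' (hypothetical column names)."""
        n = len(reports)
        pair_counts = reports.groupby(["drug", "ade"]).size()
        drug_freq = reports["drug"].value_counts() / n
        ade_freq = reports["ade"].value_counts() / n
        signals = []
        for (drug, ade), c in pair_counts.items():
            lift = (c / n) / (drug_freq[drug] * ade_freq[ade])
            if c >= min_support and lift >= min_lift:
                signals.append((drug, ade, int(c), round(lift, 2)))
        return signals

    # Tiny, entirely made-up report table
    reports = pd.DataFrame({
        "drug": ["A", "A", "A", "B", "B", "C", "A", "B"],
        "ade":  ["rash", "rash", "rash", "nausea", "rash", "nausea", "nausea", "nausea"],
    })
    print(drug_ade_signals(reports))   # [('A', 'rash', 3, 1.5)]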
Microstructure engineering of Pt-Al alloy thin films through Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Harris, R. A.; Terblans, J. J.; Swart, H. C.
2014-06-01
A kinetic algorithm, based on the regular solution model, was used in conjunction with the Monte Carlo method to simulate the evolution of a micro-scaled thin film system during exposure to a high temperature environment. Pt-Al thin films were prepared via electron beam physical vapor deposition (EB-PVD) with an atomic concentration ratio of Pt₆₃:Al₃₇. These films were heat treated at an annealing temperature of 400 °C for 16 and 49 minutes. Scanning Auger Microscopy (SAM) (PHI 700) was used to obtain elemental maps while sputtering through the thin films. Simulations were run for the same annealing temperatures and thin-film composition. From these simulations, theoretical depth profiles and simulated microstructures were obtained. These were compared to the experimentally measured depth profiles and elemental maps.
An Introduction to Computational Physics - 2nd Edition
NASA Astrophysics Data System (ADS)
Pang, Tao
2006-01-01
Preface to first edition; Preface; Acknowledgements; 1. Introduction; 2. Approximation of a function; 3. Numerical calculus; 4. Ordinary differential equations; 5. Numerical methods for matrices; 6. Spectral analysis; 7. Partial differential equations; 8. Molecular dynamics simulations; 9. Modeling continuous systems; 10. Monte Carlo simulations; 11. Genetic algorithm and programming; 12. Numerical renormalization; References; Index.
Kim, Hyun Suk; Choi, Hong Yeop; Lee, Gyemin; Ye, Sung-Joon; Smith, Martin B; Kim, Geehyun
2018-03-01
The aim of this work is to develop a gamma-ray/neutron dual-particle imager, based on rotational modulation collimators (RMCs) and pulse shape discrimination (PSD)-capable scintillators, for possible applications for radioactivity monitoring as well as nuclear security and safeguards. A Monte Carlo simulation study was performed to design an RMC system for the dual-particle imaging, and modulation patterns were obtained for gamma-ray and neutron sources in various configurations. We applied an image reconstruction algorithm utilizing the maximum-likelihood expectation-maximization method based on the analytical modeling of source-detector configurations, to the Monte Carlo simulation results. Both gamma-ray and neutron source distributions were reconstructed and evaluated in terms of signal-to-noise ratio, showing the viability of developing an RMC-based gamma-ray/neutron dual-particle imager using PSD-capable scintillators.
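The maximum-likelihood expectation-maximization (MLEM) update mentioned above has a compact form: each image pixel is multiplied by the back-projected ratio of measured to predicted counts, normalized by the detector sensitivity. Below is a minimal Python sketch with a made-up system matrix and a toy 1D source; it illustrates only the generic MLEM iteration, not the paper's analytical RMC model.

    import numpy as np

    def mlem(system_matrix, counts, n_iter=50):
        """Generic MLEM iteration: system_matrix[i, j] is the probability that an
        emission in image pixel j is recorded in detector (modulation) bin i, and
        counts[i] is the measured pattern."""
        A = np.asarray(system_matrix, dtype=float)
        y = np.asarray(counts, dtype=float)
        x = np.ones(A.shape[1])                 # flat initial image
        sens = A.sum(axis=0)                    # sensitivity of each pixel
        for _ in range(n_iter):
            proj = A @ x
            proj[proj == 0] = 1e-12             # guard against division by zero
            x *= (A.T @ (y / proj)) / sens
        return x

    # Toy 1D problem with a made-up 8-bin detector response and a 4-pixel source
    rng = np.random.default_rng(0)
    A = rng.random((8, 4))
    true_x = np.array([0.0, 3.0, 0.5, 0.0])
    y = rng.poisson(50 * (A @ true_x)) / 50.0
    print(mlem(A, y))                           # roughly recovers true_x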
MONTE CARLO SIMULATIONS OF PERIODIC PULSED REACTOR WITH MOVING GEOMETRY PARTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Yan; Gohar, Yousry
2015-11-01
In a periodic pulsed reactor, the reactor state varies periodically from slightly subcritical to slightly prompt supercritical to produce periodic power pulses. Such a periodic state change is accomplished by a periodic movement of specific reactor parts, such as control rods or reflector sections. The analysis of such a reactor is difficult to perform with current reactor physics computer programs. Based on past experience, the use of point kinetics approximations gives considerable errors in predicting the magnitude and the shape of the power pulse if the reactor has significantly different neutron lifetimes in different zones. To accurately simulate the dynamics of this type of reactor, a Monte Carlo procedure using the transfer function TRCL/TR of the MCNP/MCNPX computer programs is utilized to model the movable reactor parts. In this paper, two algorithms simulating the geometry part movements during a neutron history tracking have been developed. Several test cases have been developed to evaluate these procedures. The numerical test cases have shown that the developed algorithms can be utilized to simulate the reactor dynamics with movable geometry parts.
Stochastic series expansion simulation of the t -V model
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Ye-Hua; Troyer, Matthias
2016-04-01
We present an algorithm for the efficient simulation of the half-filled spinless t -V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from the time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t -V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.
Discrete Spin Vector Approach for Monte Carlo-based Magnetic Nanoparticle Simulations
NASA Astrophysics Data System (ADS)
Senkov, Alexander; Peralta, Juan; Sahay, Rahul
The study of magnetic nanoparticles has gained significant popularity due to the potential uses in many fields such as modern medicine, electronics, and engineering. To study the magnetic behavior of these particles in depth, it is important to be able to model and simulate their magnetic properties efficiently. Here we utilize the Metropolis-Hastings algorithm with a discrete spin vector model (in contrast to the standard continuous model) to model the magnetic hysteresis of a set of protected pure iron nanoparticles. We compare our simulations with the experimental hysteresis curves and discuss the efficiency of our algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl
Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
Genetic Algorithms and Their Application to the Protein Folding Problem
1993-12-01
and symbolic methods, random methods such as Monte Carlo simulation and simulated annealing, distance geometry, and molecular dynamics. Many of these...calculated energies with those obtained using the molecular simulation software package called CHARMm. 10 9) Test both the simple and parallel simple genetic...homology-based, and simplification techniques. 3.21 Molecular Dynamics. Perhaps the most natural approach is to actually simulate the folding process. This
NASA Technical Reports Server (NTRS)
Pierson, W. J., Jr.
1984-01-01
Backscatter measurements at upwind and crosswind are simulated for five incidence angles by means of the SASS-1 model function. The effects of communication noise and attitude errors are simulated by Monte Carlo methods, and the winds are recovered by both the Sum of Squares (SOS) algorithm and a Maximum Likelihood Estimator (MLE). The SOS algorithm is shown to fail for light enough winds at all incidence angles and to fail to show areas of calm because backscatter estimates that were negative or that produced incorrect values of Kₚ greater than one were discarded. The MLE performs well for all input backscatter estimates and returns calm when both are negative. The use of the SOS algorithm is shown to have introduced errors in the SASS-1 model function that, in part, cancel out the errors that result from using it, but that also cause disagreement with other data sources, such as the AAFE circle flight data, at light winds. Implications for future scatterometer systems are given.
A Bootstrap Metropolis-Hastings Algorithm for Bayesian Analysis of Big Data.
Liang, Faming; Kim, Jinsu; Song, Qifan
2016-01-01
Markov chain Monte Carlo (MCMC) methods have proven to be a very powerful tool for analyzing data of complex structures. However, their computer-intensive nature, which typically requires a large number of iterations and a complete scan of the full dataset for each iteration, precludes their use for big data analysis. In this paper, we propose the so-called bootstrap Metropolis-Hastings (BMH) algorithm, which provides a general framework for how to tame powerful MCMC methods to be used for big data analysis; that is, to replace the full data log-likelihood by a Monte Carlo average of the log-likelihoods that are calculated in parallel from multiple bootstrap samples. The BMH algorithm possesses an embarrassingly parallel structure and avoids repeated scans of the full dataset in iterations, and is thus feasible for big data problems. Compared to the popular divide-and-combine method, BMH can be generally more efficient as it can asymptotically integrate the whole data information into a single simulation run. The BMH algorithm is very flexible. Like the Metropolis-Hastings algorithm, it can serve as a basic building block for developing advanced MCMC algorithms that are feasible for big data problems. This is illustrated in the paper by the tempering BMH algorithm, which can be viewed as a combination of parallel tempering and the BMH algorithm. BMH can also be used for model selection and optimization by combining with reversible jump MCMC and simulated annealing, respectively.
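A minimal sketch of the BMH idea for a scalar parameter is given below: the full-data log-likelihood in the Metropolis-Hastings ratio is replaced by an average of log-likelihoods over bootstrap subsamples, rescaled to the full data size. The Gaussian-mean likelihood, random-walk proposal, flat prior, and rescaling are illustrative simplifications rather than the paper's formulation.

    import numpy as np

    def bmh_sampler(data, log_lik, n_boot=20, boot_size=1000, n_iter=5000,
                    step=0.1, theta0=0.0, rng=None):
        """Bootstrap Metropolis-Hastings sketch: the full-data log-likelihood is
        approximated by the mean log-likelihood over bootstrap subsamples (which
        could be evaluated in parallel), rescaled to the full sample size."""
        rng = np.random.default_rng() if rng is None else rng
        boots = [rng.choice(data, size=boot_size, replace=True) for _ in range(n_boot)]
        scale = len(data) / boot_size

        def surrogate(theta):
            return scale * np.mean([log_lik(theta, b) for b in boots])

        theta, cur, samples = theta0, surrogate(theta0), []
        for _ in range(n_iter):
            prop = theta + step * rng.standard_normal()    # symmetric random walk
            new = surrogate(prop)
            if np.log(rng.random()) < new - cur:           # flat prior assumed
                theta, cur = prop, new
            samples.append(theta)
        return np.array(samples)

    # illustrative use: posterior of a Gaussian mean with unit variance
    gauss_ll = lambda mu, x: -0.5 * np.sum((x - mu) ** 2)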
Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models
NASA Astrophysics Data System (ADS)
Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido
2016-06-01
We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit as kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem into a set of two-level subsystems, we find a natural variable step size, which is well defined by the normalization condition of the transition probabilities between the levels. We successfully test the algorithm against known exact solutions for non-equilibrium dynamics and equilibrium thermodynamic properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markov chains, and particularly to short-time situations where the exact evolution is relevant.
A novel Kinetic Monte Carlo algorithm for Non-Equilibrium Simulations
NASA Astrophysics Data System (ADS)
Jha, Prateek; Kuzovkov, Vladimir; Grzybowski, Bartosz; Olvera de La Cruz, Monica
2012-02-01
We have developed an off-lattice kinetic Monte Carlo simulation scheme for reaction-diffusion problems in soft matter systems. The transition probabilities in the Monte Carlo scheme are taken to be identical to the transition rates in a renormalized master equation of the diffusion process and match those of the Glauber dynamics of the Ising model. Our scheme provides several advantages over the Brownian dynamics technique for non-equilibrium simulations. Since particle displacements are accepted/rejected in a Monte Carlo fashion, as opposed to moving particles following a stochastic equation of motion, nonphysical movements (e.g., violation of a hard-core assumption) are not possible (these moves have zero acceptance). Further, the absence of a stochastic ``noise'' term resolves the computational difficulties associated with generating statistically independent trajectories with definitive mean properties. Finally, since the timestep is independent of the magnitude of the interaction forces, much longer time steps can be employed than in Brownian dynamics. We discuss the applications of this scheme for dynamic self-assembly of photo-switchable nanoparticles and dynamical problems in polymeric systems.
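A hedged sketch of one such move is given below: a single-particle displacement is proposed and accepted with a Glauber-type probability in the energy change, so any displacement that violates a hard core (infinite energy) has zero acceptance. The pair potential, step size, and inverse temperature are placeholders, not the model of the paper.

    import numpy as np

    def glauber_displacement_move(positions, pair_energy, beta, max_step, rng):
        """Attempt one off-lattice displacement, accepted with the Glauber
        probability 1 / (1 + exp(beta * dE)); hard-core overlaps (dE = +inf)
        are never accepted."""
        i = rng.integers(len(positions))
        old = positions[i].copy()
        trial = old + rng.uniform(-max_step, max_step, size=old.shape)
        others = [p for j, p in enumerate(positions) if j != i]
        dE = (sum(pair_energy(trial, p) for p in others) -
              sum(pair_energy(old, p) for p in others))
        if not np.isfinite(dE):
            return False                                   # e.g. hard-core overlap
        if rng.random() < 1.0 / (1.0 + np.exp(beta * dE)):
            positions[i] = trial
            return True
        return False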
Renner, Franziska
2016-09-01
Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide. Copyright © 2015. Published by Elsevier GmbH.
NASA Astrophysics Data System (ADS)
Lai, Siyan; Xu, Ying; Shao, Bo; Guo, Menghan; Lin, Xiaola
2017-04-01
In this paper we study a Monte Carlo method for solving systems of linear algebraic equations (SLAE) based on shared memory. Previous research has demonstrated that GPUs can effectively speed up the computations involved in this problem. Our purpose is to optimize the Monte Carlo simulation specifically for the GPU memory architecture. Random numbers are organized so as to be stored in shared memory, which aims to accelerate the parallel algorithm. Bank conflicts can be avoided by our Collaborative Thread Arrays (CTA) scheme. The experimental results show that the shared-memory-based strategy can speed up the computations by up to a factor of about 3.
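The abstract does not spell out the estimator, but a standard Monte Carlo approach to SLAE that such GPU implementations typically parallelize is the Neumann-Ulam random-walk scheme for x = Hx + b with the spectral radius of H below one. The following is a plain, single-threaded sketch of that textbook estimator, not the authors' CUDA code.

    import numpy as np

    def mc_solve_component(H, b, i, n_walks=20000, max_steps=200, rng=None):
        """Estimate component x_i of x = H x + b (spectral radius of H < 1) by
        the Neumann-Ulam random-walk estimator of the series x = sum_m H^m b.
        Walks jump with probabilities proportional to |H[row, :]| and carry a
        weight restoring the sign and magnitude of H; truncation at max_steps is
        negligible when the spectral radius is well below one."""
        rng = np.random.default_rng() if rng is None else rng
        n = len(b)
        absH = np.abs(H)
        row_sums = absH.sum(axis=1)
        total = 0.0
        for _ in range(n_walks):
            row, weight, acc = i, 1.0, b[i]
            for _ in range(max_steps):
                if row_sums[row] == 0.0:
                    break                                  # no outgoing transitions
                probs = absH[row] / row_sums[row]
                nxt = rng.choice(n, p=probs)
                weight *= H[row, nxt] / probs[nxt]         # importance-sampling weight
                row = nxt
                acc += weight * b[row]
            total += acc
        return total / n_walks

    # sanity check against a direct solve (illustrative matrix)
    H = 0.1 * np.ones((4, 4)); b = np.arange(1.0, 5.0)
    x_exact = np.linalg.solve(np.eye(4) - H, b)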
A sampling algorithm for segregation analysis
Tier, Bruce; Henshall, John
2001-01-01
Methods for detecting Quantitative Trait Loci (QTL) without markers have generally used iterative peeling algorithms for determining genotype probabilities. These algorithms have considerable shortcomings in complex pedigrees. A Monte Carlo Markov chain (MCMC) method which samples the pedigree of the whole population jointly is described. Simultaneous sampling of the pedigree was achieved by sampling descent graphs using the Metropolis-Hastings algorithm. A descent graph describes the inheritance state of each allele and provides pedigrees guaranteed to be consistent with Mendelian sampling. Sampling descent graphs overcomes most, if not all, of the limitations incurred by iterative peeling algorithms. The algorithm was able to find the QTL in most of the simulated populations. However, when the QTL was not modeled or found then its effect was ascribed to the polygenic component. No QTL were detected when they were not simulated. PMID:11742631
A Monte Carlo model for 3D grain evolution during welding
NASA Astrophysics Data System (ADS)
Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena
2017-09-01
Welding is one of the most widespread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. The model also allows simulation of pulsed-power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
A clinical study of lung cancer dose calculation accuracy with Monte Carlo simulation.
Zhao, Yanqun; Qi, Guohai; Yin, Gang; Wang, Xianliang; Wang, Pei; Li, Jian; Xiao, Mingyong; Li, Jie; Kang, Shengwei; Liao, Xiongfei
2014-12-16
The accuracy of dose calculation is crucial to the quality of treatment planning and, consequently, to the dose delivered to patients undergoing radiation therapy. Current general calculation algorithms such as Pencil Beam Convolution (PBC) and Collapsed Cone Convolution (CCC) have shortcomings in regard to severe inhomogeneities, particularly in those regions where charged particle equilibrium does not hold. The aim of this study was to evaluate the accuracy of the PBC and CCC algorithms in lung cancer radiotherapy using Monte Carlo (MC) technology. Four treatment plans were designed using the Oncentra Masterplan TPS for each patient. Two intensity-modulated radiation therapy (IMRT) plans were developed using the PBC and CCC algorithms, and two three-dimensional conformal therapy (3DCRT) plans were developed using the PBC and CCC algorithms. The DICOM-RT files of the treatment plans were exported to the Monte Carlo system for recalculation. The dose distributions of the GTV, PTV and ipsilateral lung calculated by the TPS and MC were compared. For 3DCRT and IMRT plans, the mean dose differences for the GTV between the CCC and MC increased with decreasing GTV volume. For IMRT, the mean dose differences were found to be higher than those of 3DCRT. The CCC algorithm overestimated the GTV mean dose by approximately 3% for IMRT. For 3DCRT plans, when the volume of the GTV was greater than 100 cm³, the mean doses calculated by CCC and MC showed almost no difference. PBC shows large deviations from the MC algorithm. For the dose to the ipsilateral lung, the CCC algorithm overestimated the dose to the entire lung, and the PBC algorithm overestimated V20 but underestimated V5; the difference in V10 was not statistically significant. PBC substantially overestimates the dose to the tumour, but the CCC is similar to the MC simulation. It is recommended that the treatment plans for lung cancer be developed using an advanced dose calculation algorithm other than PBC. MC can accurately calculate the dose distribution in lung cancer and can provide a notably effective tool for benchmarking the performance of other dose calculation algorithms within patients.
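The lung metrics quoted above (V20, V10, V5) are simple threshold-volume statistics computed from the voxelized dose of a structure; a minimal sketch is given below, assuming equal voxel volumes.

    import numpy as np

    def v_x(dose_voxels, threshold_gy):
        """Percentage of the structure volume receiving at least threshold_gy,
        e.g. V20 = v_x(lung_doses, 20.0). Equal voxel volumes are assumed."""
        dose_voxels = np.asarray(dose_voxels, dtype=float)
        return 100.0 * np.mean(dose_voxels >= threshold_gy)

    def mean_dose(dose_voxels):
        """Mean structure dose, the quantity compared between CCC, PBC and MC."""
        return float(np.mean(dose_voxels))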
MUSiC - A general search for deviations from monte carlo predictions in CMS
NASA Astrophysics Data System (ADS)
Biallass, Philipp A.; CMS Collaboration
2009-06-01
A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.
MUSiC - A Generic Search for Deviations from Monte Carlo Predictions in CMS
NASA Astrophysics Data System (ADS)
Hof, Carsten
2009-05-01
We present a model independent analysis approach, systematically scanning the data for deviations from the Standard Model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.
Pal, Suvra; Balakrishnan, Narayanaswamy
2018-05-01
In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data on melanoma with the model and the inferential method developed here.
NASA Technical Reports Server (NTRS)
Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je
2010-01-01
The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operations that can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanisms using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices. Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.
Rare Event Simulation in Radiation Transport
NASA Astrophysics Data System (ADS)
Kollman, Craig
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
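The likelihood-ratio reweighting described here can be shown on a toy rare event (a standard normal exceeding a high threshold) instead of a transport model; the shifted sampling density below is an illustrative choice of importance distribution, not the dissertation's adaptive scheme.

    import numpy as np

    def rare_event_importance_sampling(threshold=5.0, shift=5.0, n=100000, rng=None):
        """Estimate P(X > threshold) for X ~ N(0, 1) by sampling from N(shift, 1)
        and multiplying each sample by the likelihood ratio of the true to the
        sampling density, which keeps the estimator unbiased."""
        rng = np.random.default_rng() if rng is None else rng
        x = rng.normal(loc=shift, size=n)
        log_ratio = -0.5 * x**2 + 0.5 * (x - shift)**2     # log[ N(0,1)(x) / N(shift,1)(x) ]
        return np.mean((x > threshold) * np.exp(log_ratio))

    # naive sampling would need on the order of 1/P draws to see a single event;
    # for threshold = 5 the target probability is roughly 3e-7.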
Rényi information flow in the Ising model with single-spin dynamics.
Deng, Zehui; Wu, Jinshan; Guo, Wenan
2014-12-01
The n-index Rényi mutual information and transfer entropies for the two-dimensional kinetic Ising model with arbitrary single-spin dynamics in the thermodynamic limit are derived as functions of ensemble averages of observables and spin-flip probabilities. Cluster Monte Carlo algorithms with different dynamics from the single-spin dynamics are thus applicable to estimate the transfer entropies. By means of Monte Carlo simulations with the Wolff algorithm, we calculate the information flows in the Ising model with the Metropolis dynamics and the Glauber dynamics, respectively. We find that not only the global Rényi transfer entropy, but also the pairwise Rényi transfer entropy, peaks in the disorder phase.
Derenzo, Stephen E
2017-01-01
This paper demonstrates through Monte Carlo simulations that a practical positron emission tomograph with (1) deep scintillators for efficient detection, (2) double-ended readout for depth-of-interaction information, (3) fixed-level analog triggering, and (4) accurate calibration and timing data corrections can achieve a coincidence resolving time (CRT) that is not far above the statistical lower bound. One Monte Carlo algorithm simulates a calibration procedure that uses data from a positron point source. Annihilation events with an interaction near the entrance surface of one scintillator are selected, and data from the two photodetectors on the other scintillator provide depth-dependent timing corrections. Another Monte Carlo algorithm simulates normal operation using these corrections and determines the CRT. A third Monte Carlo algorithm determines the CRT statistical lower bound by generating a series of random interaction depths, and for each interaction a set of random photoelectron times for each of the two photodetectors. The most likely interaction times are determined by shifting the depth-dependent probability density function to maximize the joint likelihood for all the photoelectron times in each set. Example calculations are tabulated for different numbers of photoelectrons and photodetector time jitters for three 3 × 3 × 30 mm3 scintillators: Lu2SiO5:Ce,Ca (LSO), LaBr3:Ce, and a hypothetical ultra-fast scintillator. To isolate the factors that depend on the scintillator length and the ability to estimate the DOI, CRT values are tabulated for perfect scintillator-photodetectors. For LSO with 4000 photoelectrons and single photoelectron time jitter of the photodetector J = 0.2 ns (FWHM), the CRT value using the statistically weighted average of corrected trigger times is 0.098 ns FWHM and the statistical lower bound is 0.091 ns FWHM. For LaBr3:Ce with 8000 photoelectrons and J = 0.2 ns FWHM, the CRT values are 0.070 and 0.063 ns FWHM, respectively. For the ultra-fast scintillator with 1 ns decay time, 4000 photoelectrons, and J = 0.2 ns FWHM, the CRT values are 0.021 and 0.017 ns FWHM, respectively. The examples also show that calibration and correction for depth-dependent variations in pulse height and in annihilation and optical photon transit times are necessary to achieve these CRT values. PMID:28327464
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derenzo, Stephen E.
Here, this paper demonstrates through Monte Carlo simulations that a practical positron emission tomograph with (1) deep scintillators for efficient detection, (2) double-ended readout for depth-of-interaction information, (3) fixed-level analog triggering, and (4) accurate calibration and timing data corrections can achieve a coincidence resolving time (CRT) that is not far above the statistical lower bound. One Monte Carlo algorithm simulates a calibration procedure that uses data from a positron point source. Annihilation events with an interaction near the entrance surface of one scintillator are selected, and data from the two photodetectors on the other scintillator provide depth-dependent timing corrections. Anothermore » Monte Carlo algorithm simulates normal operation using these corrections and determines the CRT. A third Monte Carlo algorithm determines the CRT statistical lower bound by generating a series of random interaction depths, and for each interaction a set of random photoelectron times for each of the two photodetectors. The most likely interaction times are determined by shifting the depth-dependent probability density function to maximize the joint likelihood for all the photoelectron times in each set. Example calculations are tabulated for different numbers of photoelectrons and photodetector time jitters for three 3 × 3 × 30 mm 3 scintillators: Lu 2SiO 5 :Ce,Ca (LSO), LaBr 3:Ce, and a hypothetical ultra-fast scintillator. To isolate the factors that depend on the scintillator length and the ability to estimate the DOI, CRT values are tabulated for perfect scintillator-photodetectors. For LSO with 4000 photoelectrons and single photoelectron time jitter of the photodetector J = 0.2 ns (FWHM), the CRT value using the statistically weighted average of corrected trigger times is 0.098 ns FWHM and the statistical lower bound is 0.091 ns FWHM. For LaBr 3:Ce with 8000 photoelectrons and J = 0.2 ns FWHM, the CRT values are 0.070 and 0.063 ns FWHM, respectively. For the ultra-fast scintillator with 1 ns decay time, 4000 photoelectrons, and J = 0.2 ns FWHM, the CRT values are 0.021 and 0.017 ns FWHM, respectively. Lastly, the examples also show that calibration and correction for depth-dependent variations in pulse height and in annihilation and optical photon transit times are necessary to achieve these CRT values.« less
Derenzo, Stephen E.
2017-04-11
Here, this paper demonstrates through Monte Carlo simulations that a practical positron emission tomograph with (1) deep scintillators for efficient detection, (2) double-ended readout for depth-of-interaction information, (3) fixed-level analog triggering, and (4) accurate calibration and timing data corrections can achieve a coincidence resolving time (CRT) that is not far above the statistical lower bound. One Monte Carlo algorithm simulates a calibration procedure that uses data from a positron point source. Annihilation events with an interaction near the entrance surface of one scintillator are selected, and data from the two photodetectors on the other scintillator provide depth-dependent timing corrections. Anothermore » Monte Carlo algorithm simulates normal operation using these corrections and determines the CRT. A third Monte Carlo algorithm determines the CRT statistical lower bound by generating a series of random interaction depths, and for each interaction a set of random photoelectron times for each of the two photodetectors. The most likely interaction times are determined by shifting the depth-dependent probability density function to maximize the joint likelihood for all the photoelectron times in each set. Example calculations are tabulated for different numbers of photoelectrons and photodetector time jitters for three 3 × 3 × 30 mm 3 scintillators: Lu 2SiO 5 :Ce,Ca (LSO), LaBr 3:Ce, and a hypothetical ultra-fast scintillator. To isolate the factors that depend on the scintillator length and the ability to estimate the DOI, CRT values are tabulated for perfect scintillator-photodetectors. For LSO with 4000 photoelectrons and single photoelectron time jitter of the photodetector J = 0.2 ns (FWHM), the CRT value using the statistically weighted average of corrected trigger times is 0.098 ns FWHM and the statistical lower bound is 0.091 ns FWHM. For LaBr 3:Ce with 8000 photoelectrons and J = 0.2 ns FWHM, the CRT values are 0.070 and 0.063 ns FWHM, respectively. For the ultra-fast scintillator with 1 ns decay time, 4000 photoelectrons, and J = 0.2 ns FWHM, the CRT values are 0.021 and 0.017 ns FWHM, respectively. Lastly, the examples also show that calibration and correction for depth-dependent variations in pulse height and in annihilation and optical photon transit times are necessary to achieve these CRT values.« less
The X-43A Six Degree of Freedom Monte Carlo Analysis
NASA Technical Reports Server (NTRS)
Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger
2008-01-01
This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A inflight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.
The X-43A Six Degree of Freedom Monte Carlo Analysis
NASA Technical Reports Server (NTRS)
Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger; Richard, Michael
2007-01-01
This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.
Improved cache performance in Monte Carlo transport calculations using energy banding
NASA Astrophysics Data System (ADS)
Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.
2014-04-01
We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
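A host-side sketch of the banding idea, under the assumption of a cross-section table pre-split into per-band slices: particles are binned by energy and each band's slice is reused for every resident particle before the next band is touched, so only one band needs to stay cache-resident at a time. Band edges, table layout, and the per-particle physics stub are placeholders.

    import numpy as np

    def process_by_energy_band(energies, xs_bands, band_edges, lookup_and_advance):
        """Group particle indices by energy band and process each band with only
        that band's cross-section slice 'hot' in cache."""
        band_of = np.digitize(energies, band_edges) - 1      # band index per particle
        for band, xs_slice in enumerate(xs_bands):
            in_band = np.flatnonzero(band_of == band)
            for p in in_band:
                # placeholder for the real cross-section lookup and particle update
                lookup_and_advance(p, energies[p], xs_slice)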
NMR diffusion simulation based on conditional random walk.
Gudbjartsson, H; Patz, S
1995-01-01
The authors introduce here a new, very fast, simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR-diffusion simulation methods, such as the finite difference method (FD), the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although, in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step the computation time can therefore be reduced. Finally the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
Peter, Emanuel K; Pivkin, Igor V; Shea, Joan-Emma
2015-04-14
In Monte-Carlo simulations of protein folding, pathways and folding times depend on the appropriate choice of the Monte-Carlo move or process path. We developed a generalized set of process paths for a hybrid kinetic Monte Carlo-Molecular dynamics algorithm, which makes use of a novel constant time-update and allows formation of α-helical and β-stranded secondary structures. We apply our new algorithm to the folding of 3 different proteins: TrpCage, GB1, and TrpZip4. All three systems are seen to fold within the range of the experimental folding times. For the β-hairpins, we observe that loop formation is the rate-determining process followed by collapse and formation of the native core. Cluster analysis of both peptides reveals that GB1 folds with equal likelihood along a zipper or a hydrophobic collapse mechanism, while TrpZip4 follows primarily a zipper pathway. The difference observed in the folding behavior of the two proteins can be attributed to the different arrangements of their hydrophobic core, strongly packed, and dry in case of TrpZip4, and partially hydrated in the case of GB1.
A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions
NASA Astrophysics Data System (ADS)
Liang, Yihao; Xing, Xiangjun; Li, Yaohang
2017-06-01
In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures, and adopts the sequential updating scheme of Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We further use this method to simulate primitive model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
Stochastic Investigation of Natural Frequency for Functionally Graded Plates
NASA Astrophysics Data System (ADS)
Karsh, P. K.; Mukhopadhyay, T.; Dey, S.
2018-03-01
This paper presents the stochastic natural frequency analysis of functionally graded plates by applying an artificial neural network (ANN) approach. Latin hypercube sampling is utilised to train the ANN model. The proposed algorithm for stochastic natural frequency analysis of FGM plates is validated and verified against the original finite element method and Monte Carlo simulation (MCS). The combined stochastic variation of input parameters such as elastic modulus, shear modulus, Poisson ratio, and mass density is considered. A power law is applied to distribute the material properties across the thickness. The present ANN model reduces the sample size and is found to be computationally efficient compared to conventional Monte Carlo simulation.
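Latin hypercube sampling, used above to generate the ANN training set, can be sketched in a few lines; the four parameter ranges in the example (elastic modulus, shear modulus, Poisson ratio, mass density) are placeholders rather than the values of the study.

    import numpy as np

    def latin_hypercube(n_samples, bounds, rng=None):
        """One LHS design: each input dimension is split into n_samples equal
        probability strata, one point is drawn per stratum, and the strata are
        independently permuted across dimensions."""
        rng = np.random.default_rng() if rng is None else rng
        bounds = np.asarray(bounds, dtype=float)             # shape (n_dims, 2)
        n_dims = bounds.shape[0]
        strata = np.tile(np.arange(n_samples), (n_dims, 1))
        u = (rng.permuted(strata, axis=1).T + rng.random((n_samples, n_dims))) / n_samples
        return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

    # illustrative ranges: E (GPa), G (GPa), Poisson ratio, density (kg/m^3)
    design = latin_hypercube(100, [[60, 80], [25, 32], [0.28, 0.34], [2600, 2800]])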
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai; ...
2017-11-07
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. Here this procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.
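For context, the conventional rank-1 update that the delayed scheme batches can be written compactly: when a proposed single-column change of the Slater matrix is accepted, the inverse and the determinant ratio (the quantity entering the acceptance test) are updated in O(N^2) work rather than recomputed. This is the textbook Sherman-Morrison step, not the blocked rank-K algorithm of the paper.

    import numpy as np

    def sherman_morrison_column_update(A_inv, new_col, j):
        """Replace column j of A by new_col, given A_inv = A^{-1}.
        Returns the updated inverse and the ratio det(A_new)/det(A_old),
        both obtained in O(N^2) operations."""
        u = A_inv @ new_col                      # A^{-1} c
        ratio = u[j]                             # determinant ratio
        u_minus_ej = u.copy()
        u_minus_ej[j] -= 1.0
        A_inv_new = A_inv - np.outer(u_minus_ej, A_inv[j, :]) / ratio
        return A_inv_new, ratio

    # quick check: A = I with column 0 replaced by (2, 3)
    A_inv_new, r = sherman_morrison_column_update(np.eye(2), np.array([2.0, 3.0]), 0)
    # r == 2.0 and A_inv_new is the inverse of [[2, 0], [3, 1]]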
NVIDIA OptiX ray-tracing engine as a new tool for modelling medical imaging systems
NASA Astrophysics Data System (ADS)
Pietrzak, Jakub; Kacperski, Krzysztof; Cieślar, Marek
2015-03-01
The most accurate technique to model the X- and gamma radiation path through a numerically defined object is the Monte Carlo simulation which follows single photons according to their interaction probabilities. A simplified and much faster approach, which just integrates total interaction probabilities along selected paths, is known as ray tracing. Both techniques are used in medical imaging for simulating real imaging systems and as projectors required in iterative tomographic reconstruction algorithms. These approaches are ready for massive parallel implementation e.g. on Graphics Processing Units (GPU), which can greatly accelerate the computation time at a relatively low cost. In this paper we describe the application of the NVIDIA OptiX ray-tracing engine, popular in professional graphics and rendering applications, as a new powerful tool for X- and gamma ray-tracing in medical imaging. It allows the implementation of a variety of physical interactions of rays with pixel-, mesh- or nurbs-based objects, and recording any required quantities, like path integrals, interaction sites, deposited energies, and others. Using the OptiX engine we have implemented a code for rapid Monte Carlo simulations of Single Photon Emission Computed Tomography (SPECT) imaging, as well as the ray-tracing projector, which can be used in reconstruction algorithms. The engine generates efficient, scalable and optimized GPU code, ready to run on multi GPU heterogeneous systems. We have compared the results our simulations with the GATE package. With the OptiX engine the computation time of a Monte Carlo simulation can be reduced from days to minutes.
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudhyadhom, A; McGuinness, C; Descovich, M
Purpose: To develop a methodology for validation of a Monte-Carlo dose calculation model for robotic small field SRS/SBRT deliveries. Methods: In a robotic treatment planning system, a Monte-Carlo model was iteratively optimized to match with beam data. A two-part analysis was developed to verify this model. 1) The Monte-Carlo model was validated in a simulated water phantom versus a Ray-Tracing calculation on a single beam collimator-by-collimator calculation. 2) The Monte-Carlo model was validated to be accurate in the most challenging situation, lung, by acquiring in-phantom measurements. A plan was created and delivered in a CIRS lung phantom with film insert. Separately, plans were delivered in an in-house created lung phantom with a PinPoint chamber insert within a lung simulating material. For medium to large collimator sizes, a single beam was delivered to the phantom. For small size collimators (10, 12.5, and 15mm), a robotically delivered plan was created to generate a uniform dose field of irradiation over a 2×2 cm² area. Results: Dose differences in simulated water between Ray-Tracing and Monte-Carlo were all within 1% at dmax and deeper. Maximum dose differences occurred prior to dmax but were all within 3%. Film measurements in a lung phantom show high correspondence of over 95% gamma at the 2%/2mm level for Monte-Carlo. Ion chamber measurements for collimator sizes of 12.5mm and above were within 3% of Monte-Carlo calculated values. Uniform irradiation involving the 10mm collimator resulted in a dose difference of ∼8% for both Monte-Carlo and Ray-Tracing indicating that there may be limitations with the dose calculation. Conclusion: We have developed a methodology to validate a Monte-Carlo model by verifying that it matches in water and, separately, that it corresponds well in lung simulating materials. The Monte-Carlo model and algorithm tested may have more limited accuracy for 10mm fields and smaller.
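For readers unfamiliar with the pass-rate metric quoted above (95% of points with gamma <= 1 at the 2%/2mm level), a brute-force 1D sketch of the global gamma index is given below; clinical comparisons use optimized 2D/3D searches, so this only illustrates what "passing" means.

    import numpy as np

    def gamma_pass_rate(dose_eval, dose_ref, positions, dose_tol=0.02, dist_tol=2.0):
        """Global 1D gamma analysis: for each reference point, find the minimum
        combined dose-difference / distance-to-agreement discrepancy over all
        evaluated points; the point passes if that minimum is <= 1. dose_tol is
        relative to the reference maximum, dist_tol is in the units of 'positions'."""
        dose_eval = np.asarray(dose_eval, dtype=float)
        dose_ref = np.asarray(dose_ref, dtype=float)
        positions = np.asarray(positions, dtype=float)
        d_norm = dose_tol * dose_ref.max()
        gammas = np.empty_like(dose_ref)
        for k, (x_r, d_r) in enumerate(zip(positions, dose_ref)):
            dd = (dose_eval - d_r) / d_norm            # dose-difference term
            dr = (positions - x_r) / dist_tol          # distance-to-agreement term
            gammas[k] = np.sqrt(dd ** 2 + dr ** 2).min()
        return 100.0 * np.mean(gammas <= 1.0), gammas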
Alexiadis, Orestis; Daoulas, Kostas Ch; Mavrantzas, Vlasis G
2008-01-31
A new Monte Carlo algorithm is presented for the simulation of atomistically detailed alkanethiol self-assembled monolayers (R-SH) on a Au(111) surface. Built on a set of simpler but also more complex (sometimes nonphysical) moves, the new algorithm is capable of efficiently driving all alkanethiol molecules to the Au(111) surface, thereby leading to full surface coverage, irrespective of the initial setup of the system. This circumvents a significant limitation of previous methods in which the simulations typically started from optimally packed structures on the substrate close to thermal equilibrium. Further, by considering an extended ensemble of configurations each one of which corresponds to a different value of the sulfur-sulfur repulsive core potential, sigmass, and by allowing for configurations to swap between systems characterized by different sigmass values, the new algorithm can adequately simulate model R-SH/Au(111) systems for values of sigmass ranging from 4.25 A corresponding to the Hautman-Klein molecular model (J. Chem. Phys. 1989, 91, 4994; 1990, 93, 7483) to 4.97 A corresponding to the Siepmann-McDonald model (Langmuir 1993, 9, 2351), and practically any chain length. Detailed results are presented quantifying the efficiency and robustness of the new method. Representative simulation data for the dependence of the structural and conformational properties of the formed monolayer on the details of the employed molecular model are reported and discussed; an investigation of the variation of molecular organization and ordering on the Au(111) substrate for three CH3-(CH2)n-SH/Au(111) systems with n=9, 15, and 21 is also included.
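The configuration swaps between systems with different sigmass values have the usual extended-ensemble (Hamiltonian-exchange) acceptance form; a hedged sketch is below, with energy(x, sigma) standing for the configurational energy evaluated with core diameter sigma. All names are placeholders for the actual monolayer model.

    import numpy as np

    def attempt_sigma_swap(x_a, x_b, sigma_a, sigma_b, energy, beta, rng):
        """Swap configurations between two replicas run with different core
        diameters; accepted with probability
        min(1, exp(-beta * [U(x_a; s_b) + U(x_b; s_a) - U(x_a; s_a) - U(x_b; s_b)])),
        which preserves detailed balance in the extended ensemble."""
        dU = (energy(x_a, sigma_b) + energy(x_b, sigma_a)
              - energy(x_a, sigma_a) - energy(x_b, sigma_b))
        if dU <= 0 or rng.random() < np.exp(-beta * dU):
            return x_b, x_a                    # configurations exchanged
        return x_a, x_b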
Physics Computing '92: Proceedings of the 4th International Conference
NASA Astrophysics Data System (ADS)
de Groot, Robert A.; Nadrchal, Jaroslav
1993-04-01
The Table of Contents for the book is as follows: * Preface * INVITED PAPERS * Ab Initio Theoretical Approaches to the Structural, Electronic and Vibrational Properties of Small Clusters and Fullerenes: The State of the Art * Neural Multigrid Methods for Gauge Theories and Other Disordered Systems * Multicanonical Monte Carlo Simulations * On the Use of the Symbolic Language Maple in Physics and Chemistry: Several Examples * Nonequilibrium Phase Transitions in Catalysis and Population Models * Computer Algebra, Symmetry Analysis and Integrability of Nonlinear Evolution Equations * The Path-Integral Quantum Simulation of Hydrogen in Metals * Digital Optical Computing: A New Approach of Systolic Arrays Based on Coherence Modulation of Light and Integrated Optics Technology * Molecular Dynamics Simulations of Granular Materials * Numerical Implementation of a K.A.M. Algorithm * Quasi-Monte Carlo, Quasi-Random Numbers and Quasi-Error Estimates * What Can We Learn from QMC Simulations * Physics of Fluctuating Membranes * Plato, Apollonius, and Klein: Playing with Spheres * Steady States in Nonequilibrium Lattice Systems * CONVODE: A REDUCE Package for Differential Equations * Chaos in Coupled Rotators * Symplectic Numerical Methods for Hamiltonian Problems * Computer Simulations of Surfactant Self Assembly * High-dimensional and Very Large Cellular Automata for Immunological Shape Space * A Review of the Lattice Boltzmann Method * Electronic Structure of Solids in the Self-interaction Corrected Local-spin-density Approximation * Dedicated Computers for Lattice Gauge Theory Simulations * Physics Education: A Survey of Problems and Possible Solutions * Parallel Computing and Electronic-Structure Theory * High Precision Simulation Techniques for Lattice Field Theory * CONTRIBUTED PAPERS * Case Study of Microscale Hydrodynamics Using Molecular Dynamics and Lattice Gas Methods * Computer Modelling of the Structural and Electronic Properties of the Supported Metal Catalysis * Ordered Particle Simulations for Serial and MIMD Parallel Computers * "NOLP" -- Program Package for Laser Plasma Nonlinear Optics * Algorithms to Solve Nonlinear Least Square Problems * Distribution of Hydrogen Atoms in Pd-H Computed by Molecular Dynamics * A Ray Tracing of Optical System for Protein Crystallography Beamline at Storage Ring-SIBERIA-2 * Vibrational Properties of a Pseudobinary Linear Chain with Correlated Substitutional Disorder * Application of the Software Package Mathematica in Generalized Master Equation Method * Linelist: An Interactive Program for Analysing Beam-foil Spectra * GROMACS: A Parallel Computer for Molecular Dynamics Simulations * GROMACS Method of Virial Calculation Using a Single Sum * The Interactive Program for the Solution of the Laplace Equation with the Elimination of Singularities for Boundary Functions * Random-Number Generators: Testing Procedures and Comparison of RNG Algorithms * Micro-TOPIC: A Tokamak Plasma Impurities Code * Rotational Molecular Scattering Calculations * Orthonormal Polynomial Method for Calibrating of Cryogenic Temperature Sensors * Frame-based System Representing Basis of Physics * The Role of Massively Data-parallel Computers in Large Scale Molecular Dynamics Simulations * Short-range Molecular Dynamics on a Network of Processors and Workstations * An Algorithm for Higher-order Perturbation Theory in Radiative Transfer Computations * Hydrostochastics: The Master Equation Formulation of Fluid Dynamics * HPP Lattice Gas on Transputers and Networked Workstations * 
Study on the Hysteresis Cycle Simulation Using Modeling with Different Functions on Intervals * Refined Pruning Techniques for Feed-forward Neural Networks * Random Walk Simulation of the Motion of Transient Charges in Photoconductors * The Optical Hysteresis in Hydrogenated Amorphous Silicon * Diffusion Monte Carlo Analysis of Modern Interatomic Potentials for He * A Parallel Strategy for Molecular Dynamics Simulations of Polar Liquids on Transputer Arrays * Distribution of Ions Reflected on Rough Surfaces * The Study of Step Density Distribution During Molecular Beam Epitaxy Growth: Monte Carlo Computer Simulation * Towards a Formal Approach to the Construction of Large-scale Scientific Applications Software * Correlated Random Walk and Discrete Modelling of Propagation through Inhomogeneous Media * Teaching Plasma Physics Simulation * A Theoretical Determination of the Au-Ni Phase Diagram * Boson and Fermion Kinetics in One-dimensional Lattices * Computational Physics Course on the Technical University * Symbolic Computations in Simulation Code Development and Femtosecond-pulse Laser-plasma Interaction Studies * Computer Algebra and Integrated Computing Systems in Education of Physical Sciences * Coordinated System of Programs for Undergraduate Physics Instruction * Program Package MIRIAM and Atomic Physics of Extreme Systems * High Energy Physics Simulation on the T_Node * The Chapman-Kolmogorov Equation as Representation of Huygens' Principle and the Monolithic Self-consistent Numerical Modelling of Lasers * Authoring System for Simulation Developments * Molecular Dynamics Study of Ion Charge Effects in the Structure of Ionic Crystals * A Computational Physics Introductory Course * Computer Calculation of Substrate Temperature Field in MBE System * Multimagnetical Simulation of the Ising Model in Two and Three Dimensions * Failure of the CTRW Treatment of the Quasicoherent Excitation Transfer * Implementation of a Parallel Conjugate Gradient Method for Simulation of Elastic Light Scattering * Algorithms for Study of Thin Film Growth * Algorithms and Programs for Physics Teaching in Romanian Technical Universities * Multicanonical Simulation of 1st order Transitions: Interface Tension of the 2D 7-State Potts Model * Two Numerical Methods for the Calculation of Periodic Orbits in Hamiltonian Systems * Chaotic Behavior in a Probabilistic Cellular Automata? 
* Wave Optics Computing by a Networked-based Vector Wave Automaton * Tensor Manipulation Package in REDUCE * Propagation of Electromagnetic Pulses in Stratified Media * The Simple Molecular Dynamics Model for the Study of Thermalization of the Hot Nucleon Gas * Electron Spin Polarization in PdCo Alloys Calculated by KKR-CPA-LSD Method * Simulation Studies of Microscopic Droplet Spreading * A Vectorizable Algorithm for the Multicolor Successive Overrelaxation Method * Tetragonality of the CuAu I Lattice and Its Relation to Electronic Specific Heat and Spin Susceptibility * Computer Simulation of the Formation of Metallic Aggregates Produced by Chemical Reactions in Aqueous Solution * Scaling in Growth Models with Diffusion: A Monte Carlo Study * The Nucleus as the Mesoscopic System * Neural Network Computation as Dynamic System Simulation * First-principles Theory of Surface Segregation in Binary Alloys * Data Smooth Approximation Algorithm for Estimating the Temperature Dependence of the Ice Nucleation Rate * Genetic Algorithms in Optical Design * Application of 2D-FFT in the Study of Molecular Exchange Processes by NMR * Advanced Mobility Model for Electron Transport in P-Si Inversion Layers * Computer Simulation for Film Surfaces and its Fractal Dimension * Parallel Computation Techniques and the Structure of Catalyst Surfaces * Educational SW to Teach Digital Electronics and the Corresponding Text Book * Primitive Trinomials (Mod 2) Whose Degree is a Mersenne Exponent * Stochastic Modelisation and Parallel Computing * Remarks on the Hybrid Monte Carlo Algorithm for the ∫4 Model * An Experimental Computer Assisted Workbench for Physics Teaching * A Fully Implicit Code to Model Tokamak Plasma Edge Transport * EXPFIT: An Interactive Program for Automatic Beam-foil Decay Curve Analysis * Mapping Technique for Solving General, 1-D Hamiltonian Systems * Freeway Traffic, Cellular Automata, and Some (Self-Organizing) Criticality * Photonuclear Yield Analysis by Dynamic Programming * Incremental Representation of the Simply Connected Planar Curves * Self-convergence in Monte Carlo Methods * Adaptive Mesh Technique for Shock Wave Propagation * Simulation of Supersonic Coronal Streams and Their Interaction with the Solar Wind * The Nature of Chaos in Two Systems of Ordinary Nonlinear Differential Equations * Considerations of a Window-shopper * Interpretation of Data Obtained by RTP 4-Channel Pulsed Radar Reflectometer Using a Multi Layer Perceptron * Statistics of Lattice Bosons for Finite Systems * Fractal Based Image Compression with Affine Transformations * Algorithmic Studies on Simulation Codes for Heavy-ion Reactions * An Energy-Wise Computer Simulation of DNA-Ion-Water Interactions Explains the Abnormal Structure of Poly[d(A)]:Poly[d(T)] * Computer Simulation Study of Kosterlitz-Thouless-Like Transitions * Problem-oriented Software Package GUN-EBT for Computer Simulation of Beam Formation and Transport in Technological Electron-Optical Systems * Parallelization of a Boundary Value Solver and its Application in Nonlinear Dynamics * The Symbolic Classification of Real Four-dimensional Lie Algebras * Short, Singular Pulses Generation by a Dye Laser at Two Wavelengths Simultaneously * Quantum Monte Carlo Simulations of the Apex-Oxygen-Model * Approximation Procedures for the Axial Symmetric Static Einstein-Maxwell-Higgs Theory * Crystallization on a Sphere: Parallel Simulation on a Transputer Network * FAMULUS: A Software Product (also) for Physics Education * MathCAD vs. 
FAMULUS -- A Brief Comparison * First-principles Dynamics Used to Study Dissociative Chemisorption * A Computer Controlled System for Crystal Growth from Melt * A Time Resolved Spectroscopic Method for Short Pulsed Particle Emission * Green's Function Computation in Radiative Transfer Theory * Random Search Optimization Technique for One-criteria and Multi-criteria Problems * Hartley Transform Applications to Thermal Drift Elimination in Scanning Tunneling Microscopy * Algorithms of Measuring, Processing and Interpretation of Experimental Data Obtained with Scanning Tunneling Microscope * Time-dependent Atom-surface Interactions * Local and Global Minima on Molecular Potential Energy Surfaces: An Example of N3 Radical * Computation of Bifurcation Surfaces * Symbolic Computations in Quantum Mechanics: Energies in Next-to-solvable Systems * A Tool for RTP Reactor and Lamp Field Design * Modelling of Particle Spectra for the Analysis of Solid State Surface * List of Participants
Advancing X-ray scattering metrology using inverse genetic algorithms.
Hannon, Adam F; Sunday, Daniel F; Windover, Donald; Kline, R Joseph
2016-01-01
We compare the speed and effectiveness of two genetic optimization algorithms to the results of statistical sampling via a Markov chain Monte Carlo algorithm to find which is the most robust method for determining real space structure in periodic gratings measured using critical dimension small angle X-ray scattering. Both a covariance matrix adaptation evolutionary strategy and differential evolution algorithm are implemented and compared using various objective functions. The algorithms and objective functions are used to minimize differences between diffraction simulations and measured diffraction data. These simulations are parameterized with an electron density model known to roughly correspond to the real space structure of our nanogratings. The study shows that for X-ray scattering data, the covariance matrix adaptation coupled with a mean-absolute error log objective function is the most efficient combination of algorithm and goodness of fit criterion for finding structures with little foreknowledge about the underlying fine scale structure features of the nanograting.
The Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE
NASA Astrophysics Data System (ADS)
Vandenbroucke, B.; Wood, K.
2018-04-01
We present the public Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE, which can be used to simulate the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given type, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed grid code, but also as a moving-mesh code.
OWL: A scalable Monte Carlo simulation suite for finite-temperature study of materials
NASA Astrophysics Data System (ADS)
Li, Ying Wai; Yuk, Simuck F.; Cooper, Valentino R.; Eisenbach, Markus; Odbadrakh, Khorgolkhuu
The OWL suite is a simulation package for performing large-scale Monte Carlo simulations. Its object-oriented, modular design enables it to interface with various external packages for energy evaluations. It is therefore applicable to study the finite-temperature properties for a wide range of systems: from simple classical spin models to materials where the energy is evaluated by ab initio methods. This scheme not only allows for the study of thermodynamic properties based on first-principles statistical mechanics, it also provides a means for massive, multi-level parallelism to fully exploit the capacity of modern heterogeneous computer architectures. We will demonstrate how improved strong and weak scaling is achieved by employing novel, parallel and scalable Monte Carlo algorithms, as well as the applications of OWL to a few selected frontier materials research problems. This research was supported by the Office of Science of the Department of Energy under contract DE-AC05-00OR22725.
A New Algorithm with Plane Waves and Wavelets for Random Velocity Fields with Many Spatial Scales
NASA Astrophysics Data System (ADS)
Elliott, Frank W.; Majda, Andrew J.
1995-03-01
A new Monte Carlo algorithm for constructing and sampling stationary isotropic Gaussian random fields with power-law energy spectrum, infrared divergence, and fractal self-similar scaling is developed here. The theoretical basis for this algorithm involves the fact that such a random field is well approximated by a superposition of random one-dimensional plane waves involving a fixed finite number of directions. In general each one-dimensional plane wave is the sum of a random shear layer and a random acoustical wave. These one-dimensional random plane waves are then simulated by a wavelet Monte Carlo method for a single space variable developed recently by the authors. The computational results reported in this paper demonstrate remarkable low variance and economical representation of such Gaussian random fields through this new algorithm. In particular, the velocity structure function for an imcorepressible isotropic Gaussian random field in two space dimensions with the Kolmogoroff spectrum can be simulated accurately over 12 decades with only 100 realizations of the algorithm with the scaling exponent accurate to 1.1% and the constant prefactor accurate to 6%; in fact, the exponent of the velocity structure function can be computed over 12 decades within 3.3% with only 10 realizations. Furthermore, only 46,592 active computational elements are utilized in each realization to achieve these results for 12 decades of scaling behavior.
Developing a cosmic ray muon sampling capability for muon tomography and monitoring applications
NASA Astrophysics Data System (ADS)
Chatzidakis, S.; Chrysikopoulou, S.; Tsoukalas, L. H.
2015-12-01
In this study, a cosmic ray muon sampling capability using a phenomenological model that captures the main characteristics of the experimentally measured spectrum coupled with a set of statistical algorithms is developed. The "muon generator" produces muons with zenith angles in the range 0-90° and energies in the range 1-100 GeV and is suitable for Monte Carlo simulations with emphasis on muon tomographic and monitoring applications. The muon energy distribution is described by the Smith and Duller (1959) [35] phenomenological model. Statistical algorithms are then employed for generating random samples. The inverse transform provides a means to generate samples from the muon angular distribution, whereas the Acceptance-Rejection and Metropolis-Hastings algorithms are employed to provide the energy component. The predictions for muon energies 1-60 GeV and zenith angles 0-90° are validated with a series of actual spectrum measurements and with estimates from the software library CRY. The results confirm the validity of the phenomenological model and the applicability of the statistical algorithms to generate polyenergetic-polydirectional muons. The response of the algorithms and the impact of critical parameters on computation time and computed results were investigated. Final output from the proposed "muon generator" is a look-up table that contains the sampled muon angles and energies and can be easily integrated into Monte Carlo particle simulation codes such as Geant4 and MCNP.
NASA Astrophysics Data System (ADS)
Andersen, Mie; Plaisance, Craig P.; Reuter, Karsten
2017-10-01
First-principles screening studies aimed at predicting the catalytic activity of transition metal (TM) catalysts have traditionally been based on mean-field (MF) microkinetic models, which neglect the effect of spatial correlations in the adsorbate layer. Here we critically assess the accuracy of such models for the specific case of CO methanation over stepped metals by comparing to spatially resolved kinetic Monte Carlo (kMC) simulations. We find that the typical low diffusion barriers offered by metal surfaces can be significantly increased at step sites, which results in persisting correlations in the adsorbate layer. As a consequence, MF models may overestimate the catalytic activity of TM catalysts by several orders of magnitude. The potential higher accuracy of kMC models comes at a higher computational cost, which can be especially challenging for surface reactions on metals due to a large disparity in the time scales of different processes. In order to overcome this issue, we implement and test a recently developed algorithm for achieving temporal acceleration of kMC simulations. While the algorithm overall performs quite well, we identify some challenging cases which may lead to a breakdown of acceleration algorithms and discuss possible directions for future algorithm development.
Kim, K B; Shanyfelt, L M; Hahn, D W
2006-01-01
Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed to make use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations are reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantify dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, and animal studies, are necessary.
Monte Carlo simulations of medical imaging modalities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estes, G.P.
Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computermore » power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.« less
A global reaction route mapping-based kinetic Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Mitchell, Izaac; Irle, Stephan; Page, Alister J.
2016-07-01
We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.
A global reaction route mapping-based kinetic Monte Carlo algorithm.
Mitchell, Izaac; Irle, Stephan; Page, Alister J
2016-07-14
We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.
NASA Astrophysics Data System (ADS)
Vatansever, Erol
2017-05-01
By means of Monte Carlo simulation method with Metropolis algorithm, we elucidate the thermal and magnetic phase transition behaviors of a ferrimagnetic core/shell nanocubic system driven by a time dependent magnetic field. The particle core is composed of ferromagnetic spins, and it is surrounded by an antiferromagnetic shell. At the interface of the core/shell particle, we use antiferromagnetic spin-spin coupling. We simulate the nanoparticle using classical Heisenberg spins. After a detailed analysis, our Monte Carlo simulation results suggest that present system exhibits unusual and interesting magnetic behaviors. For example, at the relatively lower temperature regions, an increment in the amplitude of the external field destroys the antiferromagnetism in the shell part of the nanoparticle, leading to a ground state with ferromagnetic character. Moreover, particular attention has been dedicated to the hysteresis behaviors of the system. For the first time, we show that frequency dispersions can be categorized into three groups for a fixed temperature for finite core/shell systems, as in the case of the conventional bulk systems under the influence of an oscillating magnetic field.
Automated parameterization of intermolecular pair potentials using global optimization techniques
NASA Astrophysics Data System (ADS)
Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk
2014-12-01
In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
NASA Astrophysics Data System (ADS)
Guerra, J. G.; Rubiano, J. G.; Winter, G.; Guerra, A. G.; Alonso, H.; Arnedo, M. A.; Tejera, A.; Martel, P.; Bolivar, J. P.
2017-06-01
In this work, we have developed a computational methodology for characterizing HPGe detectors by implementing in parallel a multi-objective evolutionary algorithm, together with a Monte Carlo simulation code. The evolutionary algorithm is used for searching the geometrical parameters of a model of detector by minimizing the differences between the efficiencies calculated by Monte Carlo simulation and two reference sets of Full Energy Peak Efficiencies (FEPEs) corresponding to two given sample geometries, a beaker of small diameter laid over the detector window and a beaker of large capacity which wrap the detector. This methodology is a generalization of a previously published work, which was limited to beakers placed over the window of the detector with a diameter equal or smaller than the crystal diameter, so that the crystal mount cap (which surround the lateral surface of the crystal), was not considered in the detector model. The generalization has been accomplished not only by including such a mount cap in the model, but also using multi-objective optimization instead of mono-objective, with the aim of building a model sufficiently accurate for a wider variety of beakers commonly used for the measurement of environmental samples by gamma spectrometry, like for instance, Marinellis, Petris, or any other beaker with a diameter larger than the crystal diameter, for which part of the detected radiation have to pass through the mount cap. The proposed methodology has been applied to an HPGe XtRa detector, providing a model of detector which has been successfully verificated for different source-detector geometries and materials and experimentally validated using CRMs.
Understanding quantum tunneling using diffusion Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.
2018-03-01
In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1 /Δ2 , where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1 /Δ , i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.
Collective translational and rotational Monte Carlo cluster move for general pairwise interaction
NASA Astrophysics Data System (ADS)
Růžička, Štěpán; Allen, Michael P.
2014-09-01
Virtual move Monte Carlo is a cluster algorithm which was originally developed for strongly attractive colloidal, molecular, or atomistic systems in order to both approximate the collective dynamics and avoid sampling of unphysical kinetic traps. In this paper, we present the algorithm in the form, which selects the moving cluster through a wider class of virtual states and which is applicable to general pairwise interactions, including hard-core repulsion. The newly proposed way of selecting the cluster increases the acceptance probability by up to several orders of magnitude, especially for rotational moves. The results have their applications in simulations of systems interacting via anisotropic potentials both to enhance the sampling of the phase space and to approximate the dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Küchlin, Stephan, E-mail: kuechlin@ifd.mavt.ethz.ch; Jenny, Patrick
2017-01-01
A major challenge for the conventional Direct Simulation Monte Carlo (DSMC) technique lies in the fact that its computational cost becomes prohibitive in the near continuum regime, where the Knudsen number (Kn)—characterizing the degree of rarefaction—becomes small. In contrast, the Fokker–Planck (FP) based particle Monte Carlo scheme allows for computationally efficient simulations of rarefied gas flows in the low and intermediate Kn regime. The Fokker–Planck collision operator—instead of performing binary collisions employed by the DSMC method—integrates continuous stochastic processes for the phase space evolution in time. This allows for time step and grid cell sizes larger than the respective collisionalmore » scales required by DSMC. Dynamically switching between the FP and the DSMC collision operators in each computational cell is the basis of the combined FP-DSMC method, which has been proven successful in simulating flows covering the whole Kn range. Until recently, this algorithm had only been applied to two-dimensional test cases. In this contribution, we present the first general purpose implementation of the combined FP-DSMC method. Utilizing both shared- and distributed-memory parallelization, this implementation provides the capability for simulations involving many particles and complex geometries by exploiting state of the art computer cluster technologies.« less
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods,more » are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primaryratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10{sup 7} xray/ s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. 
This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in highperformance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Böcklin, Christoph, E-mail: boecklic@ethz.ch; Baumann, Dirk; Fröhlich, Jürg
A novel way to attain three dimensional fluence rate maps from Monte-Carlo simulations of photon propagation is presented in this work. The propagation of light in a turbid medium is described by the radiative transfer equation and formulated in terms of radiance. For many applications, particularly in biomedical optics, the fluence rate is a more useful quantity and directly derived from the radiance by integrating over all directions. Contrary to the usual way which calculates the fluence rate from absorbed photon power, the fluence rate in this work is directly calculated from the photon packet trajectory. The voxel based algorithmmore » works in arbitrary geometries and material distributions. It is shown that the new algorithm is more efficient and also works in materials with a low or even zero absorption coefficient. The capabilities of the new algorithm are demonstrated on a curved layered structure, where a non-scattering, non-absorbing layer is sandwiched between two highly scattering layers.« less
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Du, Xining; Ji, Wei; Xu, X. George; Brown, Forrest B.
2014-06-01
For nuclear reactor analysis such as the neutron eigenvalue calculations, the time consuming Monte Carlo (MC) simulations can be accelerated by using graphics processing units (GPUs). However, traditional MC methods are often history-based, and their performance on GPUs is affected significantly by the thread divergence problem. In this paper we describe the development of a newly designed event-based vectorized MC algorithm for solving the neutron eigenvalue problem. The code was implemented using NVIDIA's Compute Unified Device Architecture (CUDA), and tested on a NVIDIA Tesla M2090 GPU card. We found that although the vectorized MC algorithm greatly reduces the occurrence of thread divergence thus enhancing the warp execution efficiency, the overall simulation speed is roughly ten times slower than the history-based MC code on GPUs. Profiling results suggest that the slow speed is probably due to the memory access latency caused by the large amount of global memory transactions. Possible solutions to improve the code efficiency are discussed.
An adaptive bias - hybrid MD/kMC algorithm for protein folding and aggregation.
Peter, Emanuel K; Shea, Joan-Emma
2017-07-05
In this paper, we present a novel hybrid Molecular Dynamics/kinetic Monte Carlo (MD/kMC) algorithm and apply it to protein folding and aggregation in explicit solvent. The new algorithm uses a dynamical definition of biases throughout the MD component of the simulation, normalized in relation to the unbiased forces. The algorithm guarantees sampling of the underlying ensemble in dependency of one average linear coupling factor 〈α〉 τ . We test the validity of the kinetics in simulations of dialanine and compare dihedral transition kinetics with long-time MD-simulations. We find that for low 〈α〉 τ values, kinetics are in good quantitative agreement. In folding simulations of TrpCage and TrpZip4 in explicit solvent, we also find good quantitative agreement with experimental results and prior MD/kMC simulations. Finally, we apply our algorithm to study growth of the Alzheimer Amyloid Aβ 16-22 fibril by monomer addition. We observe two possible binding modes, one at the extremity of the fibril (elongation) and one on the surface of the fibril (lateral growth), on timescales ranging from ns to 8 μs.
Delving Into Dissipative Quantum Dynamics: From Approximate to Numerically Exact Approaches
NASA Astrophysics Data System (ADS)
Chen, Hsing-Ta
In this thesis, I explore dissipative quantum dynamics of several prototypical model systems via various approaches, ranging from approximate to numerically exact schemes. In particular, in the realm of the approximate I explore the accuracy of Pade-resummed master equations and the fewest switches surface hopping (FSSH) algorithm for the spin-boson model, and non-crossing approximations (NCA) for the Anderson-Holstein model. Next, I develop new and exact Monte Carlo approaches and test them on the spin-boson model. I propose well-defined criteria for assessing the accuracy of Pade-resummed quantum master equations, which correctly demarcate the regions of parameter space where the Pade approximation is reliable. I continue the investigation of spin-boson dynamics by benchmark comparisons of the semiclassical FSSH algorithm to exact dynamics over a wide range of parameters. Despite small deviations from golden-rule scaling in the Marcus regime, standard surface hopping algorithm is found to be accurate over a large portion of parameter space. The inclusion of decoherence corrections via the augmented FSSH algorithm improves the accuracy of dynamical behavior compared to exact simulations, but the effects are generally not dramatic for the cases I consider. Next, I introduce new methods for numerically exact real-time simulation based on real-time diagrammatic Quantum Monte Carlo (dQMC) and the inchworm algorithm. These methods optimally recycle Monte Carlo information from earlier times to greatly suppress the dynamical sign problem. In the context of the spin-boson model, I formulate the inchworm expansion in two distinct ways: the first with respect to an expansion in the system-bath coupling and the second as an expansion in the diabatic coupling. In addition, a cumulant version of the inchworm Monte Carlo method is motivated by the latter expansion, which allows for further suppression of the growth of the sign error. I provide a comprehensive comparison of the performance of the inchworm Monte Carlo algorithms to other exact methodologies as well as a discussion of the relative advantages and disadvantages of each. Finally, I investigate the dynamical interplay between the electron-electron interaction and the electron-phonon coupling within the Anderson-Holstein model via two complementary NCAs: the first is constructed around the weak-coupling limit and the second around the polaron limit. The influence of phonons on spectral and transport properties is explored in equilibrium, for non-equilibrium steady state and for transient dynamics after a quench. I find the two NCAs disagree in nontrivial ways, indicating that more reliable approaches to the problem are needed. The complementary frameworks used here pave the way for numerically exact methods based on inchworm dQMC algorithms capable of treating open systems simultaneously coupled to multiple fermionic and bosonic baths.
A Monte Carlo model for 3D grain evolution during welding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena
Welding is one of the most wide-spread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affect zone properties. Weld pool shapes are specified by Bezier curves, which allow formore » the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aide of a closest point projection algorithm. Furthermore, the model also allows simulation of pulsed power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed-power.« less
A Monte Carlo model for 3D grain evolution during welding
Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena
2017-08-04
Welding is one of the most wide-spread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affect zone properties. Weld pool shapes are specified by Bezier curves, which allow formore » the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aide of a closest point projection algorithm. Furthermore, the model also allows simulation of pulsed power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed-power.« less
NASA Astrophysics Data System (ADS)
Xiong, Chuan; Shi, Jiancheng
2014-01-01
To date, the light scattering models of snow consider very little about the real snow microstructures. The ideal spherical or other single shaped particle assumptions in previous snow light scattering models can cause error in light scattering modeling of snow and further cause errors in remote sensing inversion algorithms. This paper tries to build up a snow polarized reflectance model based on bicontinuous medium, with which the real snow microstructure is considered. The accurate specific surface area of bicontinuous medium can be analytically derived. The polarized Monte Carlo ray tracing technique is applied to the computer generated bicontinuous medium. With proper algorithms, the snow surface albedo, bidirectional reflectance distribution function (BRDF) and polarized BRDF can be simulated. The validation of model predicted spectral albedo and bidirectional reflectance factor (BRF) using experiment data shows good results. The relationship between snow surface albedo and snow specific surface area (SSA) were predicted, and this relationship can be used for future improvement of snow specific surface area (SSA) inversion algorithms. The model predicted polarized reflectance is validated and proved accurate, which can be further applied in polarized remote sensing.
Takada, Kenta; Kumada, Hiroaki; Liem, Peng Hong; Sakurai, Hideyuki; Sakae, Takeji
2016-12-01
We simulated the effect of patient displacement on organ doses in boron neutron capture therapy (BNCT). In addition, we developed a faster calculation algorithm (NCT high-speed) to simulate irradiation more efficiently. We simulated dose evaluation for the standard irradiation position (reference position) using a head phantom. Cases were assumed where the patient body is shifted in lateral directions compared to the reference position, as well as in the direction away from the irradiation aperture. For three groups of neutron (thermal, epithermal, and fast), flux distribution using NCT high-speed with a voxelized homogeneous phantom was calculated. The three groups of neutron fluxes were calculated for the same conditions with Monte Carlo code. These calculated results were compared. In the evaluations of body movements, there were no significant differences even with shifting up to 9mm in the lateral directions. However, the dose decreased by about 10% with shifts of 9mm in a direction away from the irradiation aperture. When comparing both calculations in the phantom surface up to 3cm, the maximum differences between the fluxes calculated by NCT high-speed with those calculated by Monte Carlo code for thermal neutrons and epithermal neutrons were 10% and 18%, respectively. The time required for NCT high-speed code was about 1/10th compared to Monte Carlo calculation. In the evaluation, the longitudinal displacement has a considerable effect on the organ doses. We also achieved faster calculation of depth distribution of thermal neutron flux using NCT high-speed calculation code. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
A robust algorithm for automated target recognition using precomputed radar cross sections
NASA Astrophysics Data System (ADS)
Ehrman, Lisa M.; Lanterman, Aaron D.
2004-09-01
Passive radar is an emerging technology that offers a number of unique benefits, including covert operation. Many such systems are already capable of detecting and tracking aircraft. The goal of this work is to develop a robust algorithm for adding automated target recognition (ATR) capabilities to existing passive radar systems. In previous papers, we proposed conducting ATR by comparing the precomputed RCS of known targets to that of detected targets. To make the precomputed RCS as accurate as possible, a coordinated flight model is used to estimate aircraft orientation. Once the aircraft's position and orientation are known, it is possible to determine the incident and observed angles on the aircraft, relative to the transmitter and receiver. This makes it possible to extract the appropriate radar cross section (RCS) from our simulated database. This RCS is then scaled to account for propagation losses and the receiver's antenna gain. A Rician likelihood model compares these expected signals from different targets to the received target profile. We have previously employed Monte Carlo runs to gauge the probability of error in the ATR algorithm; however, generation of a statistically significant set of Monte Carlo runs is computationally intensive. As an alternative to Monte Carlo runs, we derive the relative entropy (also known as Kullback-Liebler distance) between two Rician distributions. Since the probability of Type II error in our hypothesis testing problem can be expressed as a function of the relative entropy via Stein's Lemma, this provides us with a computationally efficient method for determining an upper bound on our algorithm's performance. It also provides great insight into the types of classification errors we can expect from our algorithm. This paper compares the numerically approximated probability of Type II error with the results obtained from a set of Monte Carlo runs.
A new concept of pencil beam dose calculation for 40-200 keV photons using analytical dose kernels.
Bartzsch, Stefan; Oelfke, Uwe
2013-11-01
The advent of widespread kV-cone beam computer tomography in image guided radiation therapy and special therapeutic application of keV photons, e.g., in microbeam radiation therapy (MRT) require accurate and fast dose calculations for photon beams with energies between 40 and 200 keV. Multiple photon scattering originating from Compton scattering and the strong dependence of the photoelectric cross section on the atomic number of the interacting tissue render these dose calculations by far more challenging than the ones established for corresponding MeV beams. That is why so far developed analytical models of kV photon dose calculations fail to provide the required accuracy and one has to rely on time consuming Monte Carlo simulation techniques. In this paper, the authors introduce a novel analytical approach for kV photon dose calculations with an accuracy that is almost comparable to the one of Monte Carlo simulations. First, analytical point dose and pencil beam kernels are derived for homogeneous media and compared to Monte Carlo simulations performed with the Geant4 toolkit. The dose contributions are systematically separated into contributions from the relevant orders of multiple photon scattering. Moreover, approximate scaling laws for the extension of the algorithm to inhomogeneous media are derived. The comparison of the analytically derived dose kernels in water showed an excellent agreement with the Monte Carlo method. Calculated values deviate less than 5% from Monte Carlo derived dose values, for doses above 1% of the maximum dose. The analytical structure of the kernels allows adaption to arbitrary materials and photon spectra in the given energy range of 40-200 keV. The presented analytical methods can be employed in a fast treatment planning system for MRT. In convolution based algorithms dose calculation times can be reduced to a few minutes.
Apparatus and method for tracking a molecule or particle in three dimensions
Werner, James H [Los Alamos, NM; Goodwin, Peter M [Los Alamos, NM; Lessard, Guillaume [Santa Fe, NM
2009-03-03
An apparatus and method were used to track the movement of fluorescent particles in three dimensions. Control software was used with the apparatus to implement a tracking algorithm for tracking the motion of the individual particles in glycerol/water mixtures. Monte Carlo simulations suggest that the tracking algorithms in combination with the apparatus may be used for tracking the motion of single fluorescent or fluorescently labeled biomolecules in three dimensions.
Monte Carlo simulations of precise timekeeping in the Milstar communication satellite system
NASA Technical Reports Server (NTRS)
Camparo, James C.; Frueholz, R. P.
1995-01-01
The Milstar communications satellite system will provide secure antijam communication capabilities for DOD operations into the next century. In order to accomplish this task, the Milstar system will employ precise timekeeping on its satellites and at its ground control stations. The constellation will consist of four satellites in geosynchronous orbit, each carrying a set of four rubidium (Rb) atomic clocks. Several times a day, during normal operation, the Mission Control Element (MCE) will collect timing information from the constellation, and after several days use this information to update the time and frequency of the satellite clocks. The MCE will maintain precise time with a cesium (Cs) atomic clock, synchronized to UTC(USNO) via a GPS receiver. We have developed a Monte Carlo simulation of Milstar's space segment timekeeping. The simulation includes the effects of: uplink/downlink time transfer noise; satellite crosslink time transfer noise; satellite diurnal temperature variations; satellite and ground station atomic clock noise; and also quantization limits regarding satellite time and frequency corrections. The Monte Carlo simulation capability has proven to be an invaluable tool in assessing the performance characteristics of various timekeeping algorithms proposed for Milstar, and also in highlighting the timekeeping capabilities of the system. Here, we provide a brief overview of the basic Milstar timekeeping architecture as it is presently envisioned. We then describe the Monte Carlo simulation of space segment timekeeping, and provide examples of the simulation's efficacy in resolving timekeeping issues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergmann, Ryan M.; Rowland, Kelly L.
2017-04-12
WARP, which can stand for ``Weaving All the Random Particles,'' is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed at UC Berkeley to efficiently execute on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in both criticality or fixed source modes, but fixed source mode is currentlymore » not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous energy, Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly being built with GPU coprocessor cards in their nodes to increase their computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.« less
Monte Carlo simulation of Ising models by multispin coding on a vector computer
NASA Astrophysics Data System (ADS)
Wansleben, Stephan; Zabolitzky, John G.; Kalle, Claus
1984-11-01
Rebbi's efficient multispin coding algorithm for Ising models is combined with the use of the vector computer CDC Cyber 205. A speed of 21.2 million updates per second is reached. This is comparable to that obtained by special- purpose computers.
Exact Dynamics via Poisson Process: a unifying Monte Carlo paradigm
NASA Astrophysics Data System (ADS)
Gubernatis, James
2014-03-01
A common computational task is solving a set of ordinary differential equations (o.d.e.'s). A little known theorem says that the solution of any set of o.d.e.'s is exactly solved by the expectation value over a set of arbitary Poisson processes of a particular function of the elements of the matrix that defines the o.d.e.'s. The theorem thus provides a new starting point to develop real and imaginary-time continous-time solvers for quantum Monte Carlo algorithms, and several simple observations enable various quantum Monte Carlo techniques and variance reduction methods to transfer to a new context. I will state the theorem, note a transformation to a very simple computational scheme, and illustrate the use of some techniques from the directed-loop algorithm in context of the wavefunction Monte Carlo method that is used to solve the Lindblad master equation for the dynamics of open quantum systems. I will end by noting that as the theorem does not depend on the source of the o.d.e.'s coming from quantum mechanics, it also enables the transfer of continuous-time methods from quantum Monte Carlo to the simulation of various classical equations of motion heretofore only solved deterministically.
The Wang-Landau Sampling Algorithm
NASA Astrophysics Data System (ADS)
Landau, David P.
2003-03-01
Over the past several decades Monte Carlo simulations[1] have evolved into a powerful tool for the study of wide-ranging problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, usually in the canonical ensemble, and enormous improvements have been made in performance through the implementation of novel algorithms. Nonetheless, difficulties arise near phase transitions, either due to critical slowing down near 2nd order transitions or to metastability near 1st order transitions, thus limiting the applicability of the method. We shall describe a new and different Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is estimated, all thermodynamic properties can be calculated at all temperatures. This approach can be extended to multi-dimensional parameter spaces and has already found use in classical models of interacting particles including systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc., as well as for quantum models. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E64, 056101-1 (2001).
NASA Astrophysics Data System (ADS)
E Derenzo, Stephen
2017-05-01
This paper demonstrates through Monte Carlo simulations that a practical positron emission tomograph with (1) deep scintillators for efficient detection, (2) double-ended readout for depth-of-interaction information, (3) fixed-level analog triggering, and (4) accurate calibration and timing data corrections can achieve a coincidence resolving time (CRT) that is not far above the statistical lower bound. One Monte Carlo algorithm simulates a calibration procedure that uses data from a positron point source. Annihilation events with an interaction near the entrance surface of one scintillator are selected, and data from the two photodetectors on the other scintillator provide depth-dependent timing corrections. Another Monte Carlo algorithm simulates normal operation using these corrections and determines the CRT. A third Monte Carlo algorithm determines the CRT statistical lower bound by generating a series of random interaction depths, and for each interaction a set of random photoelectron times for each of the two photodetectors. The most likely interaction times are determined by shifting the depth-dependent probability density function to maximize the joint likelihood for all the photoelectron times in each set. Example calculations are tabulated for different numbers of photoelectrons and photodetector time jitters for three 3 × 3 × 30 mm3 scintillators: Lu2SiO5:Ce,Ca (LSO), LaBr3:Ce, and a hypothetical ultra-fast scintillator. To isolate the factors that depend on the scintillator length and the ability to estimate the DOI, CRT values are tabulated for perfect scintillator-photodetectors. For LSO with 4000 photoelectrons and single photoelectron time jitter of the photodetector J = 0.2 ns (FWHM), the CRT value using the statistically weighted average of corrected trigger times is 0.098 ns FWHM and the statistical lower bound is 0.091 ns FWHM. For LaBr3:Ce with 8000 photoelectrons and J = 0.2 ns FWHM, the CRT values are 0.070 and 0.063 ns FWHM, respectively. For the ultra-fast scintillator with 1 ns decay time, 4000 photoelectrons, and J = 0.2 ns FWHM, the CRT values are 0.021 and 0.017 ns FWHM, respectively. The examples also show that calibration and correction for depth-dependent variations in pulse height and in annihilation and optical photon transit times are necessary to achieve these CRT values.
NASA Astrophysics Data System (ADS)
Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L.; Soares, Edward J.; Lemahieu, Ignace; Glick, Stephen J.
2006-06-01
In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. Using this approach, the basis functions are not the detector sensitivity functions as in the natural pixel case but uniform parallel strips. The backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from solution to a square pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first blockrow needs to be stored. Data were generated using a fast Monte Carlo simulator using ray tracing. The proposed method was compared to a listmode MLEM algorithm, which used ray tracing for doing forward and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, it was noted that for the same resolution a lower noise level is present in this reconstruction. A numerical observer study showed the proposed method exhibited increased performance as compared to a standard listmode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed. It was observed that major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing. The correct modelling was the major cause of improved contrast for the same background noise. Less important factors were the choice of the algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).
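The listmode MLEM algorithm used as a reference above follows the standard EM update. A generic (binned, not listmode) sketch of that update with a toy system matrix is shown below purely to make the iteration explicit; the matrix A here is a made-up stand-in, not the Monte Carlo-generated natural-pixel matrix M of the paper.

# Generic MLEM iteration: x_{k+1} = x_k / (A^T 1) * A^T( y / (A x_k) ).
# Rows of A play the role of detector pairs (LORs), columns of basis functions.
import numpy as np

rng = np.random.default_rng(0)
n_lors, n_pix = 200, 50
A = rng.random((n_lors, n_pix)) * (rng.random((n_lors, n_pix)) < 0.1)  # sparse-ish toy matrix
x_true = rng.gamma(2.0, 1.0, n_pix)
y = rng.poisson(A @ x_true)                       # Poisson-distributed measured counts

x = np.ones(n_pix)                                # uniform initial image
sens = A.sum(axis=0)                              # sensitivity image A^T 1
for it in range(100):
    proj = A @ x
    ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
    x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
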
Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny
2011-01-01
Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
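The 3%/3 mm gamma analysis quoted here and in several of the abstracts above compares a calculated and a measured dose distribution point by point. A minimal one-dimensional sketch of the gamma index with global dose normalization is given below; the dose profiles and criteria values are assumed for illustration and are not the clinical data of the study.

# Minimal 1D gamma-index sketch (3% global dose difference, 3 mm distance to
# agreement).  For each reference point the gamma value is the minimum, over
# all evaluated points, of the combined dose/distance metric; gamma <= 1 passes.
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_crit=0.03, dta_mm=3.0):
    d_norm = dose_crit * d_ref.max()              # global normalization
    gammas = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist = (x_eval - xr) / dta_mm
        ddif = (d_eval - dr) / d_norm
        gammas[i] = np.sqrt(dist**2 + ddif**2).min()
    return gammas

x = np.linspace(0, 100, 201)                      # positions in mm
measured   = np.exp(-((x - 50) / 20) ** 2)        # toy dose profiles
calculated = np.exp(-((x - 51) / 20) ** 2) * 1.01
g = gamma_1d(x, measured, x, calculated)
print("passing rate: %.1f%%" % (100.0 * np.mean(g <= 1.0)))
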
Understanding Quantum Tunneling through Quantum Monte Carlo Simulations.
Isakov, Sergei V; Mazzola, Guglielmo; Smelyanskiy, Vadim N; Jiang, Zhang; Boixo, Sergio; Neven, Hartmut; Troyer, Matthias
2016-10-28
The tunneling between the two ground states of an Ising ferromagnet is a typical example of many-body tunneling processes between two local minima, as they occur during quantum annealing. Performing quantum Monte Carlo (QMC) simulations we find that the QMC tunneling rate displays the same scaling with system size, as the rate of incoherent tunneling. The scaling in both cases is O(Δ^{2}), where Δ is the tunneling splitting (or equivalently the minimum spectral gap). An important consequence is that QMC simulations can be used to predict the performance of a quantum annealer for tunneling through a barrier. Furthermore, by using open instead of periodic boundary conditions in imaginary time, equivalent to a projector QMC algorithm, we obtain a quadratic speedup for QMC simulations, and achieve linear scaling in Δ. We provide a physical understanding of these results and their range of applicability based on an instanton picture.
Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach
NASA Astrophysics Data System (ADS)
Leblanc, Benoit; Braunschweig, Bertrand; Toulhoat, Hervé; Lutton, Evelyne
We present a new approach in order to improve the convergence of Monte Carlo (MC) simulations of molecular systems belonging to complex energetic landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for the proper convergence in phase space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked against conventional procedures on a model for melt linear polyethylene. We record significant improvements in sampling efficiency, and hence in computational cost, while the optimal sets of move frequencies are likely to yield interesting physical insight into the particular systems simulated. This last aspect should provide a new tool for designing more efficient new MC moves.
NASA Astrophysics Data System (ADS)
Mattei, S.; Nishida, K.; Onai, M.; Lettry, J.; Tran, M. Q.; Hatayama, A.
2017-12-01
We present a fully-implicit electromagnetic Particle-In-Cell Monte Carlo collision code, called NINJA, written for the simulation of inductively coupled plasmas. NINJA employs a kinetic enslaved Jacobian-Free Newton Krylov method to solve self-consistently the interaction between the electromagnetic field generated by the radio-frequency coil and the plasma response. The simulated plasma includes a kinetic description of charged and neutral species as well as the collision processes between them. The algorithm allows simulations with cell sizes much larger than the Debye length and time steps in excess of the Courant-Friedrichs-Lewy condition whilst preserving the conservation of the total energy. The code is applied to the simulation of the plasma discharge of the Linac4 H- ion source at CERN. Simulation results of plasma density, temperature and EEDF are discussed and compared with optical emission spectroscopy measurements. A systematic study of the energy conservation as a function of the numerical parameters is presented.
NASA Astrophysics Data System (ADS)
Foucart, Francois
2018-04-01
General relativistic radiation hydrodynamic simulations are necessary to accurately model a number of astrophysical systems involving black holes and neutron stars. Photon transport plays a crucial role in radiatively dominated accretion discs, while neutrino transport is critical to core-collapse supernovae and to the modelling of electromagnetic transients and nucleosynthesis in neutron star mergers. However, evolving the full Boltzmann equations of radiative transport is extremely expensive. Here, we describe the implementation in the general relativistic SPEC code of a cheaper radiation hydrodynamic method that theoretically converges to a solution of Boltzmann's equation in the limit of infinite numerical resources. The algorithm is based on a grey two-moment scheme, in which we evolve the energy density and momentum density of the radiation. Two-moment schemes require a closure that fills in missing information about the energy spectrum and higher order moments of the radiation. Instead of the approximate analytical closure currently used in core-collapse and merger simulations, we complement the two-moment scheme with a low-accuracy Monte Carlo evolution. The Monte Carlo results can provide any or all of the missing information in the evolution of the moments, as desired by the user. As a first test of our methods, we study a set of idealized problems demonstrating that our algorithm performs significantly better than existing analytical closures. We also discuss the current limitations of our method, in particular open questions regarding the stability of the fully coupled scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Y; Singh, H; Islam, M
2014-06-01
Purpose: Output dependence on field size for uniform scanning beams, and the accuracy of treatment planning system (TPS) calculation are not well studied. The purpose of this work is to investigate the dependence of output on field size for uniform scanning beams and compare it among TPS calculation, measurements and Monte Carlo simulations. Methods: Field size dependence was studied using various field sizes from 2.5 cm to 10 cm in diameter. The field size factor was studied for a number of proton range and modulation combinations based on output at the center of the spread-out Bragg peak normalized to a 10 cm diameter field. Three methods were used and compared in this study: 1) TPS calculation, 2) ionization chamber measurement, and 3) Monte Carlo simulation. The XiO TPS (Elekta, St. Louis) was used to calculate the output factor using a pencil beam algorithm; a pinpoint ionization chamber was used for measurements; and the FLUKA code was used for Monte Carlo simulations. Results: The field size factor varied with proton beam parameters, such as range, modulation, and calibration depth, and could decrease by over 10% from a 10 cm to 3 cm diameter field for a large-range proton beam. The XiO TPS predicted the field size factor relatively well at large field size, but could differ from measurements by 5% or more for small-field and large-range beams. Monte Carlo simulations predicted the field size factor within 1.5% of measurements. Conclusion: Output factor can vary largely with field size, and needs to be accounted for to ensure accurate proton beam delivery. This is especially important for small-field beams such as in stereotactic proton therapy, where the field size dependence is large and TPS calculation is inaccurate. Measurements or Monte Carlo simulations are recommended for output determination in such cases.
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2018-01-01
The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology. PMID:28079526
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
NASA Astrophysics Data System (ADS)
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2017-03-01
The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
Barnes, M P; Ebert, M A
2008-03-01
The concept of electron pencil-beam dose distributions is central to pencil-beam algorithms used in electron beam radiotherapy treatment planning. The Hogstrom algorithm, which is a common algorithm for electron treatment planning, models large electron field dose distributions by the superposition of a series of pencil beam dose distributions. This means that the accurate characterisation of an electron pencil beam is essential for the accuracy of the dose algorithm. The aim of this study was to evaluate a measurement-based approach for obtaining electron pencil-beam dose distributions. The primary incentive for the study was the accurate calculation of dose distributions for narrow fields, as traditional electron algorithms are generally inaccurate for such geometries. Kodak X-Omat radiographic film was used in a solid water phantom to measure the dose distribution of circular 12 MeV beams from a Varian 21EX linear accelerator. Measurements were made for beams of diameter 1.5, 2, 4, 8, 16 and 32 mm. A blocked-field technique was used to subtract photon contamination in the beam. The "error function" derived from Fermi-Eyges Multiple Coulomb Scattering (MCS) theory for corresponding square fields was used to fit the resulting dose distributions so that extrapolation down to a pencil-beam distribution could be made. The Monte Carlo codes BEAM and EGSnrc were used to simulate the experimental arrangement. The 8 mm beam dose distribution was also measured with TLD-100 microcubes. Agreement between film, TLD and Monte Carlo simulation results was found to be consistent with the spatial resolution used. The study has shown that it is possible to extrapolate narrow electron beam dose distributions down to a pencil-beam dose distribution using the error function. However, due to experimental uncertainties and measurement difficulties, Monte Carlo is recommended as the method of choice for characterising electron pencil-beam dose distributions.
Samant, Asawari; Ogunnaike, Babatunde A; Vlachos, Dionisios G
2007-05-24
The fundamental role that intrinsic stochasticity plays in cellular functions has been shown via numerous computational and experimental studies. In the face of such evidence, it is important that intracellular networks are simulated with stochastic algorithms that can capture molecular fluctuations. However, separation of time scales and disparity in species populations, two common features of intracellular networks, make stochastic simulation of such networks computationally prohibitive. While recent work has addressed each of these challenges separately, a generic algorithm that can simultaneously tackle disparity in time scales and population scales in stochastic systems is currently lacking. In this paper, we propose the hybrid, multiscale Monte Carlo (HyMSMC) method that fills this void. The proposed HyMSMC method blends stochastic singular perturbation concepts, to deal with potential stiffness, with a hybrid of exact and coarse-grained stochastic algorithms, to cope with separation in population sizes. In addition, we introduce the computational singular perturbation (CSP) method as a means of systematically partitioning fast and slow networks and computing relaxation times for convergence. We also propose a new criterion for the convergence of fast networks to stochastic low-dimensional manifolds, which further accelerates the algorithm. We use several prototype and biological examples, including a gene expression model displaying bistability, to demonstrate the efficiency, accuracy and applicability of the HyMSMC method. Bistable models serve as stringent tests for the success of multiscale MC methods and illustrate limitations of some literature methods.
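The exact stochastic algorithms that HyMSMC accelerates are variants of Gillespie's stochastic simulation algorithm (SSA). To fix ideas, a minimal SSA sketch for a toy birth-death network is shown below; the reaction set and rate constants are illustrative and are not taken from the paper.

# Minimal Gillespie SSA sketch for a toy birth-death network:
#   0 -> X  (rate k1),   X -> 0  (rate k2 * X)
import numpy as np

rng = np.random.default_rng(0)
k1, k2 = 10.0, 0.1
x, t, t_end = 0, 0.0, 100.0
times, counts = [t], [x]
while t < t_end:
    a1, a2 = k1, k2 * x                 # propensities
    a0 = a1 + a2
    t += rng.exponential(1.0 / a0)      # exponentially distributed waiting time
    if rng.random() * a0 < a1:          # choose which reaction fires
        x += 1
    else:
        x -= 1
    times.append(t); counts.append(x)
print("mean copy number (second half):",
      np.mean([c for tt, c in zip(times, counts) if tt > t_end / 2]))
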
Chodera, John D; Shirts, Michael R
2011-11-21
The widespread popularity of replica exchange and expanded ensemble algorithms for simulating complex molecular systems in chemistry and biophysics has generated much interest in discovering new ways to enhance the phase space mixing of these protocols in order to improve sampling of uncorrelated configurations. Here, we demonstrate how both of these classes of algorithms can be considered as special cases of Gibbs sampling within a Markov chain Monte Carlo framework. Gibbs sampling is a well-studied scheme in the field of statistical inference in which different random variables are alternately updated from conditional distributions. While the update of the conformational degrees of freedom by Metropolis Monte Carlo or molecular dynamics unavoidably generates correlated samples, we show how judicious updating of the thermodynamic state indices--corresponding to thermodynamic parameters such as temperature or alchemical coupling variables--can substantially increase mixing while still sampling from the desired distributions. We show how state update methods in common use can lead to suboptimal mixing, and present some simple, inexpensive alternatives that can increase mixing of the overall Markov chain, reducing simulation times necessary to obtain estimates of the desired precision. These improved schemes are demonstrated for several common applications, including an alchemical expanded ensemble simulation, parallel tempering, and multidimensional replica exchange umbrella sampling.
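The central idea above, updating the thermodynamic state index from its full conditional distribution rather than by a Metropolis swap, can be illustrated with a toy expanded-ensemble (simulated-tempering-style) sketch: configuration moves are ordinary Metropolis steps at the current temperature, and the temperature index is then redrawn by Gibbs sampling. The potential, temperature ladder, and log-weights below are assumed for illustration; this is not the authors' implementation.

# Toy expanded-ensemble sketch: Metropolis moves of the configuration alternate
# with a Gibbs update of the temperature index k, drawn from its conditional
#   P(k | x) ∝ exp(-beta_k * U(x) + g_k).
# The log-weights g_k are crude guesses here; in practice they must be tuned.
import numpy as np

rng = np.random.default_rng(0)
U = lambda x: 5.0 * (x**2 - 1.0)**2          # double-well potential
betas = np.array([0.2, 0.5, 1.0, 2.0, 5.0])  # inverse temperatures
g = np.zeros_like(betas)                      # assumed log-weights

x, k = 1.0, len(betas) - 1
visits = np.zeros(len(betas))
for step in range(200_000):
    # Metropolis update of x at the current inverse temperature
    x_new = x + rng.normal(scale=0.3)
    if np.log(rng.random()) < -betas[k] * (U(x_new) - U(x)):
        x = x_new
    # Gibbs update of the temperature index from its full conditional
    logp = -betas * U(x) + g
    p = np.exp(logp - logp.max())
    k = rng.choice(len(betas), p=p / p.sum())
    visits[k] += 1
print("fraction of steps at each temperature:", visits / visits.sum())
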
Diffusion-Based Model for Synaptic Molecular Communication Channel.
Khan, Tooba; Bilgin, Bilgesu A; Akan, Ozgur B
2017-06-01
Computational methods have been extensively used to understand the underlying dynamics of molecular communication methods employed by nature. One very effective and popular approach is to utilize a Monte Carlo simulation. Although it is very reliable, this method can have a very high computational cost, which in some cases renders the simulation impractical. Therefore, in this paper, for the special case of an excitatory synaptic molecular communication channel, we present a novel mathematical model for the diffusion and binding of neurotransmitters that takes into account the effects of synaptic geometry in 3-D space and re-absorption of neurotransmitters by the transmitting neuron. Based on this model we develop a fast deterministic algorithm, which calculates expected value of the output of this channel, namely, the amplitude of excitatory postsynaptic potential (EPSP), for given synaptic parameters. We validate our algorithm by a Monte Carlo simulation, which shows total agreement between the results of the two methods. Finally, we utilize our model to quantify the effects of variation in synaptic parameters, such as position of release site, receptor density, size of postsynaptic density, diffusion coefficient, uptake probability, and number of neurotransmitters in a vesicle, on maximum number of bound receptors that directly affect the peak amplitude of EPSP.
Loeffler, Troy David; Chan, Henry; Narayanan, Badri; Cherukara, Mathew J; Gray, Stephen K; Sankaranarayanan, Subramanian K R S
2018-06-20
Coarse-grained molecular dynamics (MD) simulations represent a powerful approach to simulate longer time scale and larger length scale phenomena than those accessible to all-atom models. The gain in efficiency, however, comes at the cost of atomistic details. The reverse transformation, also known as back-mapping, of coarse-grained beads into their atomistic constituents represents a major challenge. Most existing approaches are limited to specific molecules or specific force fields and often rely on running a long-time atomistic MD of the back-mapped configuration to arrive at an optimal solution. Such approaches are problematic when dealing with systems with high diffusion barriers. Here, we introduce a new extension of the configurational-bias Monte Carlo (CBMC) algorithm, which we term the crystalline configurational-bias Monte Carlo (C-CBMC) algorithm, that allows rapid and efficient conversion of a coarse-grained model back into its atomistic representation. Although the method is generic, we use a coarse-grained water model as a representative example and demonstrate the back-mapping or reverse transformation for model systems ranging from the ice-liquid water interface to amorphous and crystalline ice configurations. A series of simulations using the TIP4P/Ice model are performed to compare the new CBMC method to several other standard Monte Carlo and molecular dynamics based back-mapping techniques. In all cases, the C-CBMC algorithm is able to find an optimal hydrogen-bonded configuration many thousands of evaluations/steps sooner than the other methods compared within this paper. For crystalline ice structures such as hexagonal, cubic, and cubic-hexagonal stacking-disordered structures, the C-CBMC was able to find structures that were between 0.05 and 0.1 eV/water molecule lower in energy than the ground-state energies predicted by the other methods. Detailed analysis of the atomistic structures shows a significantly better global hydrogen positioning when contrasted with the existing simpler back-mapping methods. Our results demonstrate the efficiency and efficacy of our new back-mapping approach, especially for crystalline systems where simple force-field-based relaxations have a tendency to get trapped in local minima.
Slice sampling technique in Bayesian extreme of gold price modelling
NASA Astrophysics Data System (ADS)
Rostami, Mohammad; Adam, Mohd Bakri; Ibrahim, Noor Akma; Yahya, Mohamed Hisham
2013-09-01
In this paper, a simulation study of Bayesian extreme values using Markov chain Monte Carlo via the slice sampling algorithm is implemented. We compared the accuracy of slice sampling with other methods for a Gumbel model. This study revealed that the slice sampling algorithm offers more accurate and closer estimates with a smaller RMSE than the other methods. Finally, we successfully employed this procedure to estimate the parameters of Malaysian extreme gold prices from 2000 to 2011.
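Slice sampling draws from a density by alternately choosing a vertical level under the density and then a point uniformly from the horizontal "slice" at that level. A minimal univariate stepping-out/shrinkage sketch (Neal's scheme) is given below for a generic target; it is an illustration only, not the Gumbel posterior code used in the paper.

# Minimal univariate slice sampler (stepping-out + shrinkage), illustrated on a
# standard normal log-density.
import numpy as np

rng = np.random.default_rng(0)
log_f = lambda x: -0.5 * x * x                 # unnormalized log-density

def slice_sample(x, w=1.0, max_steps=50):
    log_y = log_f(x) + np.log(rng.random())    # vertical level under the density
    left = x - w * rng.random()
    right = left + w
    for _ in range(max_steps):                 # step out until both ends leave the slice
        if log_f(left) < log_y and log_f(right) < log_y:
            break
        if log_f(left) >= log_y:
            left -= w
        if log_f(right) >= log_y:
            right += w
    while True:                                # shrink towards x until acceptance
        x_new = left + (right - left) * rng.random()
        if log_f(x_new) >= log_y:
            return x_new
        if x_new < x:
            left = x_new
        else:
            right = x_new

xs = np.empty(50_000)
x = 0.0
for i in range(xs.size):
    x = slice_sample(x)
    xs[i] = x
print("mean %.3f  var %.3f" % (xs.mean(), xs.var()))
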
Modeling of light distribution in the brain for topographical imaging
NASA Astrophysics Data System (ADS)
Okada, Eiji; Hayashi, Toshiyuki; Kawaguchi, Hiroshi
2004-07-01
A multi-channel optical imaging system can obtain a topographical distribution of the activated region in the brain cortex by a simple mapping algorithm. Near-infrared light is strongly scattered in the head, and the volume of tissue that contributes to the change in the optical signal detected with a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of the brain activity. We report theoretical investigations on the spatial resolution of the topographic imaging of brain activity. The head model for the theoretical study consists of five layers that imitate the scalp, skull, subarachnoid space, gray matter and white matter. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The source-detector pairs are one-dimensionally arranged on the surface of the model and the distance between the adjoining source-detector pairs is varied from 4 mm to 32 mm. The change in detected intensity caused by the absorption change is obtained by Monte Carlo simulation. The position of the absorption change is reconstructed by the conventional mapping algorithm and by the reconstruction algorithm using the spatial sensitivity profiles. We discuss the effective interval between the source-detector pairs and the choice of reconstruction algorithms to improve the topographic images of brain activity.
CMacIonize: Monte Carlo photoionisation and moving-mesh radiation hydrodynamics
NASA Astrophysics Data System (ADS)
Vandenbroucke, Bert; Wood, Kenneth
2018-02-01
CMacIonize simulates the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed grid code and also as a moving-mesh code.
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve a vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting the performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
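The qualitative effect of bank size on vector efficiency can be illustrated with a toy calculation; the sketch below is only an illustrative model consistent with the general conclusion (larger banks keep vector lanes fuller), not the model developed in the paper. The survival probability, bank sizes, and vector width are all assumed values.

# Toy estimate of vector efficiency for an event-based sweep: a bank of N
# particles is processed in vector passes of width W; after each event a
# particle survives with probability p, so late iterations leave lanes idle.
import math, random

def vector_efficiency(bank_size, vector_width, survive_p=0.7, seed=0):
    rng = random.Random(seed)
    n = bank_size
    useful = total = 0
    while n > 0:
        passes = math.ceil(n / vector_width)
        useful += n
        total += passes * vector_width
        n = sum(rng.random() < survive_p for _ in range(n))   # particles still alive
    return useful / total

for bank in (128, 512, 2048, 8192):
    print(bank, "particles:", round(vector_efficiency(bank, vector_width=64), 3))
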
Simulation of Thermal Neutron Transport Processes Directly from the Evaluated Nuclear Data Files
NASA Astrophysics Data System (ADS)
Androsenko, P. A.; Malkov, M. R.
The main idea of the method proposed in this paper is to directly extract the required information for Monte Carlo calculations from nuclear data files. The method being developed allows the data obtained from the libraries to be used directly and seems to be the most accurate technique. Direct simulation of neutron scattering in the thermal energy range using File 7 of the ENDF-6 format within the BRAND code system has been achieved. The simulation algorithms have been verified using the χ² criterion.
Hamiltonian and potentials in derivative pricing models: exact results and lattice simulations
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Corianò, Claudio; Srikant, Marakani
2004-03-01
The pricing of options, warrants and other derivative securities is one of the great successes of financial economics. These financial products can be modeled and simulated using quantum mechanical instruments based on a Hamiltonian formulation. We show here some applications of these methods for various potentials, which we have simulated via lattice Langevin and Monte Carlo algorithms, to the pricing of options. We focus on barrier or path-dependent options, showing in some detail the computational strategies involved.
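For comparison with the Hamiltonian/lattice approach, the payoff of a path-dependent barrier option can also be estimated with a plain Monte Carlo simulation of geometric Brownian motion paths. The sketch below prices an up-and-out call under assumed Black-Scholes parameters with discrete barrier monitoring; the parameters are illustrative and not taken from the paper.

# Plain Monte Carlo pricing of an up-and-out barrier call under geometric
# Brownian motion (assumed parameters; discretely monitored barrier).
import numpy as np

rng = np.random.default_rng(0)
S0, K, B = 100.0, 100.0, 130.0        # spot, strike, knock-out barrier
r, sigma, T = 0.05, 0.2, 1.0          # rate, volatility, maturity (years)
n_paths, n_steps = 100_000, 252
dt = T / n_steps

z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(log_paths)

alive = S.max(axis=1) < B                          # knocked out if barrier ever touched
payoff = np.where(alive, np.maximum(S[:, -1] - K, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print("up-and-out call: %.3f +/- %.3f" % (price, stderr))
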
Ion Structure Near a Core-Shell Dielectric Nanoparticle
NASA Astrophysics Data System (ADS)
Ma, Manman; Gan, Zecheng; Xu, Zhenli
2017-02-01
A generalized image charge formulation is proposed for the Green's function of a core-shell dielectric nanoparticle for which theoretical and simulation investigations are rarely reported due to the difficulty of resolving the dielectric heterogeneity. Based on the formulation, an efficient and accurate algorithm is developed for calculating electrostatic polarization charges of mobile ions, allowing us to study related physical systems using the Monte Carlo algorithm. The computer simulations show that a fine-tuning of the shell thickness or the ion-interface correlation strength can greatly alter electric double-layer structures and capacitances, owing to the complicated interplay between dielectric boundary effects and ion-interface correlations.
NASA Astrophysics Data System (ADS)
Almudallal, Ahmad M.; Mercer, J. I.; Whitehead, J. P.; Plumer, M. L.; van Ek, J.
2018-05-01
A hybrid Landau Lifshitz Gilbert/kinetic Monte Carlo algorithm is used to simulate experimental magnetic hysteresis loops for dual layer exchange coupled composite media. The calculation of the rate coefficients and difficulties arising from low energy barriers, a fundamental problem of the kinetic Monte Carlo method, are discussed and the methodology used to treat them in the present work is described. The results from simulations are compared with experimental vibrating sample magnetometer measurements on dual layer CoPtCrB/CoPtCrSiO media and a quantitative relationship between the thickness of the exchange control layer separating the layers and the effective exchange constant between the layers is obtained. Estimates of the energy barriers separating magnetically reversed states of the individual grains in zero applied field as well as the saturation field at sweep rates relevant to the bit write speeds in magnetic recording are also presented. The significance of this comparison between simulations and experiment and the estimates of the material parameters obtained from it are discussed in relation to optimizing the performance of magnetic storage media.
Pfefer, T Joshua; Wang, Quanzeng; Drezek, Rebekah A
2011-11-01
Computational approaches for simulation of light-tissue interactions have provided extensive insight into biophotonic procedures for diagnosis and therapy. However, few studies have addressed simulation of time-resolved fluorescence (TRF) in tissue and none have combined Monte Carlo simulations with standard TRF processing algorithms to elucidate approaches for cancer detection in layered biological tissue. In this study, we investigate how illumination-collection parameters (e.g., collection angle and source-detector separation) influence the ability to measure fluorophore lifetime and tissue layer thickness. Decay curves are simulated with a Monte Carlo TRF light propagation model. Multi-exponential iterative deconvolution is used to determine lifetimes and fractional signal contributions. The ability to detect changes in mucosal thickness is optimized by probes that selectively interrogate regions superficial to the mucosal-submucosal boundary. Optimal accuracy in simultaneous determination of lifetimes in both layers is achieved when each layer contributes 40-60% of the signal. These results indicate that depth-selective approaches to TRF have the potential to enhance disease detection in layered biological tissue and that modeling can play an important role in probe design optimization. Published by Elsevier Ireland Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132
Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of image quality resulting from simulations on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that the optimum image quality was obtained for numbers of histories of 10{sup 8} and above and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
Miming the cancer-immune system competition by kinetic Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Bianca, Carlo; Lemarchand, Annie
2016-10-01
In order to mimic the interactions between cancer and the immune system at cell scale, we propose a minimal model of cell interactions that is similar to a chemical mechanism including autocatalytic steps. The cells are supposed to bear a quantity called activity that may increase during the interactions. The fluctuations of cell activity are controlled by a so-called thermostat. We develop a kinetic Monte Carlo algorithm to simulate the cell interactions and thermalization of cell activity. The model is able to reproduce the well-known behavior of tumors treated by immunotherapy: the first apparent elimination of the tumor by the immune system is followed by a long equilibrium period and the final escape of cancer from immunosurveillance.
Inglis, Stephen; Melko, Roger G
2013-01-01
We implement a Wang-Landau sampling technique in quantum Monte Carlo (QMC) simulations for the purpose of calculating the Rényi entanglement entropies and associated mutual information. The algorithm converges an estimate for an analog to the density of states for stochastic series expansion QMC, allowing a direct calculation of Rényi entropies without explicit thermodynamic integration. We benchmark results for the mutual information on two-dimensional (2D) isotropic and anisotropic Heisenberg models, a 2D transverse field Ising model, and a three-dimensional Heisenberg model, confirming a critical scaling of the mutual information in cases with a finite-temperature transition. We discuss the benefits and limitations of broad sampling techniques compared to standard importance sampling methods.
System identification and model reduction using modulating function techniques
NASA Technical Reports Server (NTRS)
Shen, Yan
1993-01-01
Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are introduced for continuous-time system identification using Fourier-type modulating function techniques. Two stochastic signal models are examined using the mean square properties of the stochastic calculus: an equation error signal model with white noise residuals, and a more realistic white measurement noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed which links the real and imaginary parts of the modulated quantities. The superior performance of the above algorithms is demonstrated by comparing them with the LS/MFT and the popular prediction error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits its unique flexibility and versatility. Armed with this model reduction, the AWLS/MFT algorithm is extended into MIMO transfer function system identification problems. The impact due to the discrepancy in bandwidths and gains among subsystems is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its high noise-rejecting properties. Utilizing the flight data, comparisons among different MFT algorithms are tabulated and the AWLS is found to be strongly favored in almost all facets.
A global reaction route mapping-based kinetic Monte Carlo algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Izaac; Page, Alister J., E-mail: alister.page@newcastle.edu.au; Irle, Stephan, E-mail: sirle@chem.nagoya-u.ac.jp
2016-07-14
We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.
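The standard KMC selection and first-order time propagation mentioned above reduce to a few lines: given the rate constants of all pathways found from the current state, one pathway is chosen with probability proportional to its rate and the clock is advanced by an exponentially distributed waiting time. The sketch below shows only that kernel with made-up rates; the GRRM saddle-point search and the transition-state-theory rate evaluation, which are the expensive parts, are not shown.

# Core kinetic Monte Carlo step: choose the next event with probability
# proportional to its rate and advance time by an exponential waiting time
# with mean 1/(total rate).  Rates below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def kmc_step(rates):
    rates = np.asarray(rates, dtype=float)
    k_tot = rates.sum()
    event = rng.choice(len(rates), p=rates / k_tot)
    dt = rng.exponential(1.0 / k_tot)
    return event, dt

t = 0.0
for step in range(5):
    rates = [2.0e3, 5.0e2, 1.0e1]          # hypothetical rate constants (1/s)
    event, dt = kmc_step(rates)
    t += dt
    print(f"step {step}: event {event} fired, t = {t:.3e} s")
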
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, J; Owrangi, A; Jiang, R
2014-06-01
Purpose: This study investigated the performance of the anisotropic analytical algorithm (AAA) in dose calculation in radiotherapy concerning a small finger joint. Monte Carlo simulation (EGSnrc code) was used in this dosimetric evaluation. Methods: A heterogeneous finger joint phantom containing a vertical water layer (bone joint or cartilage) sandwiched by two bones with dimension 2 × 2 × 2 cm{sup 3} was irradiated by the 6 MV photon beams (field size = 4 × 4 cm{sup 2}). The central beam axis was along the length of the bone joint and the isocenter was set to the center of the joint. The joint width and beam angle were varied from 0.5–2 mm and 0°–15°, respectively. Depth doses were calculated using the AAA and DOSXYZnrc. For dosimetric comparison and normalization, dose calculations were repeated in a water phantom using the same beam geometry. Results: Our AAA and Monte Carlo results showed that the AAA underestimated the joint doses by 10%–20%, and could not predict the joint dose variation with changes of joint width and beam angle. The calculated bone dose enhancement for the AAA was lower than Monte Carlo and the depth of maximum dose for the phantom was smaller than that for the water phantom. From the Monte Carlo results, there was a decrease of joint dose as its width increased. This reflected that the smaller the joint width, the more the bone scatter contributed to the depth dose. Moreover, the joint dose was found to decrease slightly with an increase of beam angle. Conclusion: The AAA could not handle variations of joint dose well with changes of joint width and beam angle based on our finger joint phantom. Monte Carlo results showed that the joint dose decreased with increases of joint width and beam angle. This dosimetry comparison should be useful to radiation staff in radiotherapy involving small bone joints.
Mashouf, Shahram; Lechtman, Eli; Beaulieu, Luc; Verhaegen, Frank; Keller, Brian M; Ravi, Ananth; Pignol, Jean-Philippe
2013-09-21
The American Association of Physicists in Medicine Task Group No. 43 (AAPM TG-43) formalism is the standard for seed brachytherapy dose calculation. But for breast seed implants, Monte Carlo simulations reveal large errors due to tissue heterogeneity. Since TG-43 includes several factors to account for source geometry, anisotropy and strength, we propose an additional correction factor, called the inhomogeneity correction factor (ICF), accounting for tissue heterogeneity for Pd-103 brachytherapy. This correction factor is calculated as a function of the medium's linear attenuation coefficient and mass energy absorption coefficient, and it is independent of the source internal structure. Ultimately the dose in heterogeneous media can be calculated as a product of the dose in water as calculated by the TG-43 protocol times the ICF. To validate the ICF methodology, the dose absorbed in spherical phantoms with large tissue heterogeneities was compared using the TG-43 formalism corrected for heterogeneity versus Monte Carlo simulations. The agreement between Monte Carlo simulations and the ICF method remained within 5% in soft tissues up to several centimeters from a Pd-103 source. Compared to Monte Carlo, the ICF method can easily be integrated into a clinical treatment planning system and it does not require the detailed internal structure of the source or the photon phase-space.
Superficial dose evaluation of four dose calculation algorithms
NASA Astrophysics Data System (ADS)
Cao, Ying; Yang, Xiaoyu; Yang, Zhen; Qiu, Xiaoping; Lv, Zhiping; Lei, Mingjun; Liu, Gui; Zhang, Zijian; Hu, Yongmei
2017-08-01
Accurate superficial dose calculation is of major importance because of the skin toxicity in radiotherapy, especially within the initial 2 mm depth being considered more clinically relevant. The aim of this study is to evaluate superficial dose calculation accuracy of four commonly used algorithms in commercially available treatment planning systems (TPS) by Monte Carlo (MC) simulation and film measurements. The superficial dose in a simple geometrical phantom with size of 30 cm×30 cm×30 cm was calculated by PBC (Pencil Beam Convolution), AAA (Analytical Anisotropic Algorithm), AXB (Acuros XB) in Eclipse system and CCC (Collapsed Cone Convolution) in Raystation system under the conditions of source to surface distance (SSD) of 100 cm and field size (FS) of 10×10 cm2. EGSnrc (BEAMnrc/DOSXYZnrc) program was performed to simulate the central axis dose distribution of Varian Trilogy accelerator, combined with measurements of superficial dose distribution by an extrapolation method of multilayer radiochromic films, to estimate the dose calculation accuracy of four algorithms in the superficial region which was recommended in detail by the ICRU (International Commission on Radiation Units and Measurement) and the ICRP (International Commission on Radiological Protection). In superficial region, good agreement was achieved between MC simulation and film extrapolation method, with the mean differences less than 1%, 2% and 5% for 0°, 30° and 60°, respectively. The relative skin dose errors were 0.84%, 1.88% and 3.90%; the mean dose discrepancies (0°, 30° and 60°) between each of four algorithms and MC simulation were (2.41±1.55%, 3.11±2.40%, and 1.53±1.05%), (3.09±3.00%, 3.10±3.01%, and 3.77±3.59%), (3.16±1.50%, 8.70±2.84%, and 18.20±4.10%) and (14.45±4.66%, 10.74±4.54%, and 3.34±3.26%) for AXB, CCC, AAA and PBC respectively. Monte Carlo simulation verified the feasibility of the superficial dose measurements by multilayer Gafchromic films. And the rank of superficial dose calculation accuracy of four algorithms was AXB>CCC>AAA>PBC. Care should be taken when using the AAA and PBC algorithms in the superficial dose calculation.
Chen, Yunjie; Roux, Benoît
2014-09-21
Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct equilibrium probability distribution with this prescription.
NASA Astrophysics Data System (ADS)
Chen, Yunjie; Roux, Benoît
2014-09-01
Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct equilibrium probability distribution with this prescription.
High-Throughput Characterization of Porous Materials Using Graphics Processing Units
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jihan; Martin, Richard L.; Rübel, Oliver
We have developed a high-throughput graphics processing units (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH4 and CO2) and the material's framework atoms. Using a parallel flood fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than ones considered in earlier studies. For structures selected from such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
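Henry coefficients from Widom insertion amount to averaging the Boltzmann factor of a randomly inserted test particle over the simulation volume. A minimal CPU sketch for a single Lennard-Jones adsorbate in a toy rigid framework is shown below; the framework coordinates, LJ parameters, and box size are made up, and the GPU energy grid, flood-fill blocking, and unit conversions of the paper are omitted.

# Widom test-particle insertion sketch: average exp(-U/kT) for a Lennard-Jones
# test particle inserted at random positions in a periodic box of fixed
# "framework" atoms.  The Henry coefficient is proportional to this average
# (times beta; unit conversion omitted).  All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
box = 20.0                                   # cubic box edge (angstrom)
frame = rng.random((200, 3)) * box           # made-up framework atom positions
eps, sigma = 120.0, 3.4                      # LJ parameters (K, angstrom)
T = 300.0

def lj_energy(pos):
    d = frame - pos
    d -= box * np.round(d / box)             # minimum-image convention
    r2 = (d * d).sum(axis=1)
    r2 = np.maximum(r2, 0.5)                 # guard against overlapping insertions
    s6 = (sigma**2 / r2) ** 3
    return 4.0 * eps * np.sum(s6 * s6 - s6)  # energy in kelvin

n_insert = 50_000
boltz = np.exp(-np.array([lj_energy(rng.random(3) * box) for _ in range(n_insert)]) / T)
print("<exp(-U/kT)> =", boltz.mean())
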
A reversible-jump Markov chain Monte Carlo algorithm for 1D inversion of magnetotelluric data
NASA Astrophysics Data System (ADS)
Mandolesi, Eric; Ogaya, Xenia; Campanyà, Joan; Piana Agostinetti, Nicola
2018-04-01
This paper presents a new computer code developed to solve the 1D magnetotelluric (MT) inverse problem using a Bayesian trans-dimensional Markov chain Monte Carlo algorithm. MT data are sensitive to the depth-distribution of rock electric conductivity (or its reciprocal, resistivity). The solution provided is a probability distribution - the so-called posterior probability distribution (PPD) for the conductivity at depth, together with the PPD of the interface depths. The PPD is sampled via a reversible-jump Markov chain Monte Carlo (rjMcMC) algorithm, using a modified Metropolis-Hastings (MH) rule to accept or discard candidate models along the chains. As the optimal parameterization for the inversion process is generally unknown, a trans-dimensional approach is used to allow the dataset itself to indicate the most probable number of parameters needed to sample the PPD. The algorithm is tested against two simulated datasets and a set of MT data acquired in the Clare Basin (County Clare, Ireland). For the simulated datasets the correct number of conductive layers at depth and the associated electrical conductivity values are retrieved, together with reasonable estimates of the uncertainties on the investigated parameters. Results from the inversion of field measurements are compared with results obtained using a deterministic method and with well-log data from a nearby borehole. The PPD is in good agreement with the well-log data, showing as a main structure a highly conductive layer associated with the Clare Shale formation. In this study, we demonstrate that our new code goes beyond algorithms developed using a linear inversion scheme, as it can be used: (1) to bypass the subjective choices in the 1D parameterization, i.e. the number of horizontal layers, and (2) to estimate realistic uncertainties on the retrieved parameters. The algorithm is implemented using a simple MPI approach, where independent chains run on isolated CPUs, to take full advantage of parallel computer architectures. In the case of a large number of data, a master/slave approach can be used, where the master CPU samples the parameter space and the slave CPUs compute forward solutions.
On estimating the phase of a periodic waveform in additive Gaussian noise, part 3
NASA Technical Reports Server (NTRS)
Rauch, L. L.
1991-01-01
Motivated by advances in signal processing technology that support more complex algorithms, researchers have taken a new look at the problem of estimating the phase and other parameters of a nearly periodic waveform in additive Gaussian noise, based on observation during a given time interval. Parts 1 and 2 are very briefly reviewed. In part 3, the actual performances of some of the highly nonlinear estimation algorithms of parts 1 and 2 are evaluated by numerical simulation using Monte Carlo techniques.
Determining the Complexity of the Quantum Adiabatic Algorithm using Quantum Monte Carlo Simulations
2012-12-18
Papers reported under this award include: Itay Hen and A. Young (Physical Review Letters, doi: 10.1103/PhysRevLett.104.020502); A. P. Young and Itay Hen, "Exponential complexity of the quantum adiabatic algorithm for certain satisfiability problems," Physical Review E (December 2011), doi: 10.1103/PhysRevE.84.061152; and Edward Farhi, David Gosset, Itay Hen, A. Sandvik, Peter Shor, A. … (remaining entry truncated in the source).
Farace, Paolo; Righetto, Roberto; Deffet, Sylvain; Meijers, Arturs; Vander Stappen, Francois
2016-12-01
To introduce a fast ray-tracing algorithm in pencil proton radiography (PR) with a multilayer ionization chamber (MLIC) for in vivo range error mapping. Pencil beam PR was obtained by delivering spots uniformly positioned in a square (45 × 45 mm² field-of-view) of 9 × 9 spots capable of crossing the phantoms (210 MeV). The exit beam was collected by a MLIC to sample the integral depth dose (IDD_MLIC). PRs of an electron-density and of a head phantom were acquired by moving the couch to obtain multiple 45 × 45 mm² frames. To map the corresponding range errors, the two-dimensional set of IDD_MLIC was compared with (i) the integral depth dose computed by the treatment planning system (TPS) by both analytic (IDD_TPS) and Monte Carlo (IDD_MC) algorithms in a volume of water simulating the MLIC at the CT, and (ii) the integral depth dose directly computed by a simple ray-tracing algorithm (IDD_direct) through the same CT data. The exact spatial position of the spot pattern was numerically adjusted by testing different in-plane positions and selecting the one that minimized the range differences between IDD_direct and IDD_MLIC. Range error mapping was feasible by both the TPS and the ray-tracing methods, but very sensitive to even small misalignments. In homogeneous regions, the range errors computed by the direct ray-tracing algorithm matched the results obtained by both the analytic and the Monte Carlo algorithms. In both phantoms, lateral heterogeneities were better modeled by the ray-tracing and the Monte Carlo algorithms than by the analytic TPS computation. Accordingly, when the pencil beam crossed lateral heterogeneities, the range errors mapped by the direct algorithm matched the Monte Carlo maps better than those obtained by the analytic algorithm. Finally, the simplicity of the ray-tracing algorithm made it possible to implement a prototype procedure for automated spatial alignment. The ray-tracing algorithm can reliably replace the TPS method in MLIC PR for in vivo range verification, and it can be a key component of software tools for spatial alignment and correction of CT calibration.
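The core of such a direct ray-tracing computation is the accumulation of water-equivalent path length through a grid of relative stopping powers along a straight line. A stripped-down sketch of that accumulation is shown below; the grid, voxel spacing, step size, and stopping-power values are invented for illustration and do not reproduce the MLIC geometry or the CT calibration used in the study:

```python
import numpy as np

def trace_wepl(rsp, spacing_mm, start, direction, step_mm=0.2):
    """Accumulate water-equivalent path length along a straight ray.

    rsp        : 3D array of relative stopping power (water = 1.0)
    spacing_mm : voxel size (dx, dy, dz) in mm
    start      : ray entry point in mm
    direction  : ray direction vector (normalized internally)
    """
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    pos = np.asarray(start, float).copy()
    shape = np.array(rsp.shape)
    wepl = 0.0
    while True:
        idx = np.floor(pos / spacing_mm).astype(int)
        if np.any(idx < 0) or np.any(idx >= shape):
            break                              # the ray has left the volume
        wepl += rsp[tuple(idx)] * step_mm      # nearest-voxel lookup, fixed step
        pos += direction * step_mm
    return wepl

# Invented example: a 100 mm cube of water containing a 20 mm bone-like slab (RSP ~ 1.6).
rsp = np.ones((50, 50, 50))
rsp[:, :, 20:30] = 1.6
spacing = np.array([2.0, 2.0, 2.0])
print("WEPL along z:",
      trace_wepl(rsp, spacing, start=[50.0, 50.0, 0.1], direction=[0, 0, 1]),
      "mm (expect ~112)")
```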
Algebraic, geometric, and stochastic aspects of genetic operators
NASA Technical Reports Server (NTRS)
Foo, N. Y.; Bosworth, J. L.
1972-01-01
Genetic algorithms for function optimization employ genetic operators patterned after those observed in search strategies employed in natural adaptation. Two of these operators, crossover and inversion, are interpreted in terms of their algebraic and geometric properties. Stochastic models of the operators are developed which are employed in Monte Carlo simulations of their behavior.
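For readers who want a concrete picture of the two operators being analyzed, a minimal bitstring implementation of single-point crossover and segment inversion (textbook forms only, not the specific stochastic models developed in the paper) looks like this:

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: swap the tails of two equal-length bitstrings."""
    point = random.randrange(1, len(parent_a))
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def inversion(individual):
    """Reverse the order of a randomly chosen contiguous segment."""
    i, j = sorted(random.sample(range(len(individual)), 2))
    return individual[:i] + individual[i:j + 1][::-1] + individual[j + 1:]

random.seed(0)
a, b = "110010101", "001101110"
print(crossover(a, b))   # two offspring bitstrings
print(inversion(a))      # one mutated bitstring
```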
Effects of Missing Data Methods in Structural Equation Modeling with Nonnormal Longitudinal Data
ERIC Educational Resources Information Center
Shin, Tacksoo; Davison, Mark L.; Long, Jeffrey D.
2009-01-01
The purpose of this study is to investigate the effects of missing data techniques in longitudinal studies under diverse conditions. A Monte Carlo simulation examined the performance of 3 missing data methods in latent growth modeling: listwise deletion (LD), maximum likelihood estimation using the expectation and maximization algorithm with a…
Accelerated Monte Carlo Methods for Coulomb Collisions
NASA Astrophysics Data System (ADS)
Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce
2014-03-01
We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ε in the numerical solution to collisional plasma problems from O(ε⁻³) - for the standard state-of-the-art Langevin and binary collision algorithms - to a theoretically optimal O(ε⁻²) scaling, when used in conjunction with an underlying Milstein discretization to the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
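The telescoping-sum structure that yields the O(ε⁻²) cost can be seen on a scalar toy problem. The sketch below couples coarse and fine Milstein paths of a geometric Brownian motion by reusing the same Brownian increments and sums the per-level corrections; the sample allocation is ad hoc and the model has nothing to do with the Coulomb-collision Langevin equation, so treat it purely as an illustration of the estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sig, T, X0 = 0.05, 0.2, 1.0, 1.0        # toy GBM parameters (arbitrary)

def milstein_paths(n_paths, n_steps, dW):
    """Milstein scheme for dX = mu*X dt + sig*X dW, given Brownian increments dW."""
    h = T / n_steps
    X = np.full(n_paths, X0)
    for n in range(n_steps):
        dw = dW[:, n]
        X = X * (1.0 + mu * h + sig * dw + 0.5 * sig**2 * (dw**2 - h))
    return X

def level_estimator(level, n_paths, M=2):
    """Mean of P_l - P_{l-1}, with coarse and fine paths driven by the same Brownian path."""
    nf = M ** level
    hf = T / nf
    dWf = rng.normal(0.0, np.sqrt(hf), size=(n_paths, nf))
    Pf = milstein_paths(n_paths, nf, dWf)
    if level == 0:
        return Pf.mean()
    # Coarse increments are sums of consecutive pairs of fine increments (the coupling).
    dWc = dWf.reshape(n_paths, nf // M, M).sum(axis=2)
    Pc = milstein_paths(n_paths, nf // M, dWc)
    return (Pf - Pc).mean()

L_max = 5
# Ad-hoc sample allocation per level; a production MLMC code would optimize this.
estimate = sum(level_estimator(l, n_paths=20000 // (2 ** l) + 1000) for l in range(L_max + 1))
print("MLMC estimate of E[X_T]:", estimate, "  exact:", X0 * np.exp(mu * T))
```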
Surface excitations in electron spectroscopy. Part I: dielectric formalism and Monte Carlo algorithm
Salvat-Pujol, F; Werner, W S M
2013-01-01
The theory describing energy losses of charged non-relativistic projectiles crossing a planar interface is derived on the basis of the Maxwell equations, outlining the physical assumptions of the model in great detail. The employed approach is very general in that various common models for surface excitations (such as the specular reflection model) can be obtained by an appropriate choice of parameter values. The dynamics of charged projectiles near surfaces is examined by calculations of the induced surface charge and the depth- and direction-dependent differential inelastic inverse mean free path (DIIMFP) and stopping power. The effect of several simplifications frequently encountered in the literature is investigated: differences of up to 100% are found in heights, widths, and positions of peaks in the DIIMFP. The presented model is implemented in a Monte Carlo algorithm for the simulation of the electron transport relevant for surface electron spectroscopy. Simulated reflection electron energy loss spectra are in good agreement with experiment on an absolute scale. Copyright © 2012 John Wiley & Sons, Ltd. PMID:23794766
Comparisons between MCNP, EGS4 and experiment for clinical electron beams.
Jeraj, R; Keall, P J; Ostwald, P M
1999-03-01
Understanding the limitations of Monte Carlo codes is essential in order to avoid systematic errors in simulations, and to suggest further improvement of the codes. MCNP and EGS4, Monte Carlo codes commonly used in medical physics, were compared and evaluated against electron depth dose data and experimental backscatter results obtained using clinical radiotherapy beams. Different physical models and algorithms used in the codes give significantly different depth dose curves and electron backscattering factors. The default version of MCNP calculates electron depth dose curves which are too penetrating. The MCNP results agree better with experiment if the ITS-style energy-indexing algorithm is used. EGS4 underpredicts electron backscattering for high-Z materials. The results slightly improve if optimal PRESTA-I parameters are used. MCNP simulates backscattering well even for high-Z materials. To conclude the comparison, a timing study was performed. EGS4 is generally faster than MCNP and use of a large number of scoring voxels dramatically slows down the MCNP calculation. However, use of a large number of geometry voxels in MCNP only slightly affects the speed of the calculation.
Simulation Analysis of Computer-Controlled Pressurization for Mixture Ratio Control
NASA Technical Reports Server (NTRS)
Alexander, Leslie A.; Bishop-Behel, Karen; Benfield, Michael P. J.; Kelley, Anthony; Woodcock, Gordon R.
2005-01-01
A procedural code (C++) simulation was developed to investigate potentials for mixture ratio control of pressure-fed spacecraft rocket propulsion systems by measuring propellant flows, tank liquid quantities, or both, and using feedback from these measurements to adjust propellant tank pressures to set the correct operating mixture ratio for minimum propellant residuals. The pressurization system eliminated mechanical regulators in favor of a computer-controlled, servo-driven throttling valve. We found that a quasi-steady state simulation (pressure and flow transients in the pressurization systems resulting from changes in flow control valve position are ignored) is adequate for this purpose. Monte Carlo methods are used to obtain simulated statistics on propellant depletion. Mixture ratio control algorithms based on proportional-integral-differential (PID) controller methods were developed. These algorithms actually set target tank pressures; the tank pressures are controlled by another PID controller. Simulation indicates this approach can provide reductions in residual propellants.
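The nested arrangement described above (an outer PID loop that turns the mixture-ratio error into a target tank pressure, with the pressure itself tracked by a lower-level loop) can be sketched with a textbook PID class driving an invented first-order plant. All gains, time constants, and set points below are arbitrary illustrations, not values from the study:

```python
class PID:
    """Textbook proportional-integral-derivative controller."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv


# Invented toy plant: oxidizer tank pressure lags its commanded value, and the
# engine mixture ratio responds first-order to that pressure.  Numbers are arbitrary.
mr_target, mr, p_ox, dt = 6.0, 5.4, 300.0, 0.1
ctrl = PID(kp=100.0, ki=30.0, kd=5.0)
for step in range(600):
    p_cmd = 300.0 + ctrl.update(mr_target - mr, dt)       # target tank pressure (psia)
    p_ox += (p_cmd - p_ox) * dt / 0.5                      # pressurization lag, tau = 0.5 s
    mr += (0.02 * (p_ox - 300.0) + 5.4 - mr) * dt / 5.0    # mixture-ratio response, tau = 5 s
print(f"final mixture ratio ~ {mr:.2f} (target {mr_target})")
```

The integral term is what drives the steady-state mixture-ratio error to zero; the proportional and derivative terms mainly shape the transient.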
NASA Astrophysics Data System (ADS)
Polyakov, Evgeny A.; Vorontsov-Velyaminov, Pavel N.
2014-08-01
Properties of a ferrofluid bilayer (modeled as a system of two planar layers separated by a distance h, each layer carrying a soft-sphere dipolar liquid) are calculated in the framework of inhomogeneous Ornstein-Zernike equations with reference hypernetted chain closure (RHNC). The bridge functions are taken from a soft-sphere (1/r^12) reference system in the pressure-consistent closure approximation. In order to make the RHNC problem tractable, the angular dependence of the correlation functions is expanded into special orthogonal polynomials according to Lado. The resulting equations are solved using the Newton-GMRES algorithm as implemented in the public-domain solver NITSOL. Orientational densities and pair distribution functions of dipoles are compared with Monte Carlo simulation results. A numerical algorithm for the Fourier-Hankel transform of any positive integer order on a uniform grid is presented.
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1992-01-01
Turbulent combustion cannot be simulated adequately by conventional moment closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid-dependent Monte Carlo scheme following J. Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely α_j + γ_j = α_(j-1) + γ_(j+1). A new algorithm was devised that satisfied this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
Chetty, Indrin J; Curran, Bruce; Cygler, Joanna E; DeMarco, John J; Ezzell, Gary; Faddegon, Bruce A; Kawrakow, Iwan; Keall, Paul J; Liu, Helen; Ma, C M Charlie; Rogers, D W O; Seuntjens, Jan; Sheikh-Bagheri, Daryoush; Siebers, Jeffrey V
2007-12-01
The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and experimental verification of MC dose algorithms. As the MC method is an emerging technology, this report is not meant to be prescriptive. Rather, it is intended as a preliminary report to review the tenets of the MC method and to provide the framework upon which to build a comprehensive program for commissioning and routine quality assurance of MC-based treatment planning systems.
NASA Astrophysics Data System (ADS)
Martin-Bragado, I.; Castrillo, P.; Jaraiz, M.; Pinacho, R.; Rubio, J. E.; Barbolla, J.; Moroz, V.
2005-09-01
Atomistic process simulation is expected to play an important role in the development of next-generation integrated circuits. This work describes an approach for modeling electric charge effects in a three-dimensional atomistic kinetic Monte Carlo process simulator. The proposed model has been applied to the diffusion of electrically active boron and arsenic atoms in silicon. Several key aspects of the underlying physical mechanisms are discussed: (i) the use of the local Debye length to smooth out the atomistic point-charge distribution, (ii) algorithms to correctly update the charge state in a physically accurate and computationally efficient way, and (iii) an efficient implementation of the drift of charged particles in an electric field. High-concentration effects such as band-gap narrowing and degenerate statistics are also taken into account. The efficiency, accuracy, and relevance of the model are discussed.
Molecular Dynamics, Monte Carlo Simulations, and Langevin Dynamics: A Computational Review
Paquet, Eric; Viktor, Herna L.
2015-01-01
Macromolecular structures, such as neuraminidases, hemagglutinins, and monoclonal antibodies, are not rigid entities. Rather, they are characterised by their flexibility, which is the result of the interaction and collective motion of their constituent atoms. This conformational diversity has a significant impact on their physicochemical and biological properties. Among these are their structural stability, the transport of ions through the M2 channel, drug resistance, macromolecular docking, binding energy, and rational epitope design. To assess these properties and to calculate the associated thermodynamical observables, the conformational space must be efficiently sampled and the dynamic of the constituent atoms must be simulated. This paper presents algorithms and techniques that address the abovementioned issues. To this end, a computational review of molecular dynamics, Monte Carlo simulations, Langevin dynamics, and free energy calculation is presented. The exposition is made from first principles to promote a better understanding of the potentialities, limitations, applications, and interrelations of these computational methods. PMID:25785262
Evaluation of Laser Based Alignment Algorithms Under Additive Random and Diffraction Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClay, W A; Awwal, A; Wilhelmsen, K
2004-09-30
The purpose of the automatic alignment algorithm at the National Ignition Facility (NIF) is to determine the position of a laser beam based on the position of beam features from video images. The position information obtained is used to command motors and attenuators to adjust the beam lines to the desired position, which facilitates the alignment of all 192 beams. One of the goals of the algorithm development effort is to ascertain the performance, reliability, and uncertainty of the position measurement. This paper describes a method of evaluating the performance of algorithms using Monte Carlo simulation. In particular we show the application of this technique to the LM1_LM3 algorithm, which determines the position of a series of two beam light sources. The performance of the algorithm was evaluated for an ensemble of over 900 simulated images with varying image intensities and noise counts, as well as varying diffraction noise amplitude and frequency. The performance of the algorithm on the image data set had a tolerance well beneath the 0.5-pixel system requirement.
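The structure of such a Monte Carlo performance study can be reproduced in miniature: generate an ensemble of synthetic spot images with known positions and noise, run the position estimator on each, and tally the error statistics. The estimator below is a plain intensity-weighted centroid and the image model is invented, so it is only a schematic stand-in for the actual NIF algorithm and image set:

```python
import numpy as np

rng = np.random.default_rng(3)

def synthetic_spot(true_xy, size=64, sigma=3.0, amplitude=500.0, noise=20.0):
    """Gaussian beam spot plus additive Gaussian noise (toy image model)."""
    y, x = np.mgrid[0:size, 0:size]
    img = amplitude * np.exp(-(((x - true_xy[0]) ** 2 + (y - true_xy[1]) ** 2)
                               / (2.0 * sigma ** 2)))
    return img + rng.normal(0.0, noise, size=(size, size))

def centroid(img, threshold=50.0):
    """Intensity-weighted centroid of the pixels above a fixed threshold."""
    work = np.where(img > threshold, img, 0.0)
    total = work.sum()
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return np.array([(x * work).sum() / total, (y * work).sum() / total])

errors = []
for _ in range(1000):                                  # Monte Carlo ensemble
    true_xy = rng.uniform(20.0, 44.0, size=2)
    est = centroid(synthetic_spot(true_xy))
    errors.append(np.linalg.norm(est - true_xy))
errors = np.array(errors)
print(f"mean error {errors.mean():.3f} px, 99th percentile {np.percentile(errors, 99):.3f} px")
```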
Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution
NASA Technical Reports Server (NTRS)
Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria
2009-01-01
The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.
Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P
2015-11-01
Determining the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) at the energy of interest. The difficulties related to the experimental calibration make it advisable to have alternative methods for FEPE determination, such as the simulation of the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself) and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process, by establishing a method for optimizing the parameters characterizing the detector through a computational procedure that can be reproduced at a standard research lab. This method consists in determining the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process, based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. Copyright © 2015 Elsevier Ltd. All rights reserved.
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
An improved random walk algorithm for the implicit Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keady, Kendra P., E-mail: keadyk@lanl.gov; Cleveland, Mathew A.
In this work, we introduce a modified Implicit Monte Carlo (IMC) Random Walk (RW) algorithm, which increases simulation efficiency for multigroup radiative transfer problems with strongly frequency-dependent opacities. To date, the RW method has only been implemented in “fully-gray” form; that is, the multigroup IMC opacities are group-collapsed over the full frequency domain of the problem to obtain a gray diffusion problem for RW. This formulation works well for problems with large spatial cells and/or opacities that are weakly dependent on frequency; however, the efficiency of the RW method degrades when the spatial cells are thin or the opacities are a strong function of frequency. To address this inefficiency, we introduce a RW frequency group cutoff in each spatial cell, which divides the frequency domain into optically thick and optically thin components. In the modified algorithm, opacities for the RW diffusion problem are obtained by group-collapsing IMC opacities below the frequency group cutoff. Particles with frequencies above the cutoff are transported via standard IMC, while particles below the cutoff are eligible for RW. This greatly increases the total number of RW steps taken per IMC time-step, which in turn improves the efficiency of the simulation. We refer to this new method as Partially-Gray Random Walk (PGRW). We present numerical results for several multigroup radiative transfer problems, which show that the PGRW method is significantly more efficient than standard RW for several problems of interest. In general, PGRW decreases runtimes by a factor of ∼2–4 compared to standard RW, and a factor of ∼3–6 compared to standard IMC. While PGRW is slower than frequency-dependent Discrete Diffusion Monte Carlo (DDMC), it is also easier to adapt to unstructured meshes and can be used in spatial cells where DDMC is not applicable. This suggests that it may be optimal to employ both DDMC and PGRW in a single simulation.
Liu, Tao; Djordjevic, Ivan B
2014-12-29
In this paper, we first describe an optimal signal constellation design algorithm suitable for coherent optical channels dominated by linear phase noise. Then, we modify this algorithm to be suitable for channels dominated by nonlinear phase noise. In the optimization procedure, the proposed algorithm uses the cumulative log-likelihood function instead of the Euclidean distance. Further, an LDPC coded modulation scheme is proposed to be used in combination with signal constellations obtained by the proposed algorithm. Monte Carlo simulations indicate that the LDPC-coded modulation schemes employing the new constellation sets, obtained by our new signal constellation design algorithm, outperform corresponding QAM constellations significantly in terms of transmission distance and have better nonlinearity tolerance.
Selection method of terrain matching area for TERCOM algorithm
NASA Astrophysics Data System (ADS)
Zhang, Qieqie; Zhao, Long
2017-10-01
The performance of terrain aided navigation is closely related to the selection of the terrain matching area. Different matching algorithms have different adaptability to terrain. This paper mainly studies the adaptability to terrain of the TERCOM algorithm, analyzes the relation between terrain features and terrain characteristic parameters by qualitative and quantitative methods, and then investigates the relation between matching probability and terrain characteristic parameters by the Monte Carlo method. After that, we propose a selection method of terrain matching area for the TERCOM algorithm and verify the method's correctness with real terrain data in simulation experiments. Experimental results show that the matching area obtained by the proposed method yields good navigation performance and that the matching probability of the TERCOM algorithm is greater than 90%.
Discrete range clustering using Monte Carlo methods
NASA Technical Reports Server (NTRS)
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, the high computational cost of Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
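For reference, the baseline update that the surrogate method accelerates is a standard Hamiltonian Monte Carlo step: resample a momentum, integrate a leapfrog trajectory, and apply a Metropolis accept/reject test. A minimal version for a bivariate Gaussian target (no surrogate, unit mass matrix, fixed step size - all simplifying assumptions) is:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy target: zero-mean bivariate Gaussian with a known precision matrix.
PREC = np.array([[2.0, 0.6], [0.6, 1.0]])
def neg_log_post(q):       return 0.5 * q @ PREC @ q     # potential energy U(q)
def grad_neg_log_post(q):  return PREC @ q

def hmc_step(q, step=0.2, n_leapfrog=20):
    p = rng.normal(size=q.shape)                          # resample the momentum
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * step * grad_neg_log_post(q_new)        # leapfrog: half kick
    for _ in range(n_leapfrog - 1):
        q_new += step * p_new                             # drift
        p_new -= step * grad_neg_log_post(q_new)          # full kick
    q_new += step * p_new
    p_new -= 0.5 * step * grad_neg_log_post(q_new)        # final half kick
    h_old = neg_log_post(q) + 0.5 * p @ p
    h_new = neg_log_post(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.random()) < h_old - h_new else q   # Metropolis test

samples = np.empty((5000, 2))
q = np.zeros(2)
for i in range(samples.shape[0]):
    q = hmc_step(q)
    samples[i] = q
print("sample covariance:\n", np.cov(samples.T))
print("target covariance:\n", np.linalg.inv(PREC))
```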
NASA Astrophysics Data System (ADS)
Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.
2001-06-01
We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets. We have also compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off of the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Georgios, E-mail: garab@math.uoc.gr; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003; Katsoulakis, Markos A., E-mail: markos@math.umass.edu
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
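The variance-reduction principle at work (correlate the perturbed and unperturbed processes instead of sampling them independently) already shows up in the simplest possible setting, a finite-difference derivative of an expectation with respect to a rate parameter. The exponential toy model below is vastly simpler than a lattice KMC trajectory and uses plain common random numbers rather than the goal-oriented coupling of the paper, but it makes the variance gap visible:

```python
import numpy as np

rng = np.random.default_rng(5)
theta, d_theta, n = 2.0, 0.05, 100_000

def sample_mean_exponential(rate, uniforms):
    """Mean of X ~ Exponential(rate), sampled by inverse-CDF from the given uniforms."""
    return np.mean(-np.log(uniforms) / rate)

# Finite-difference estimate of d E[X] / d theta; the exact value is -1/theta**2 = -0.25.
u = rng.random(n)
coupled = (sample_mean_exponential(theta + d_theta, u)
           - sample_mean_exponential(theta, u)) / d_theta          # common random numbers
independent = (sample_mean_exponential(theta + d_theta, rng.random(n))
               - sample_mean_exponential(theta, rng.random(n))) / d_theta
print("coupled (common random numbers):", coupled)       # tightly concentrated near -0.25
print("independent samples:            ", independent)   # much noisier for the same n
```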
Population Annealing Monte Carlo for Frustrated Systems
NASA Astrophysics Data System (ADS)
Amey, Christopher; Machta, Jonathan
Population annealing is a sequential Monte Carlo algorithm that efficiently simulates equilibrium systems with rough free energy landscapes such as spin glasses and glassy fluids. A large population of configurations is initially thermalized at high temperature and then cooled to low temperature according to an annealing schedule. The population is kept in thermal equilibrium at every annealing step via resampling configurations according to their Boltzmann weights. Population annealing is comparable to parallel tempering in terms of efficiency, but has several distinct and useful features. In this talk I will give an introduction to population annealing and present recent progress in understanding its equilibration properties and optimizing it for spin glasses. Results from large-scale population annealing simulations for the Ising spin glass in 3D and 4D will be presented. NSF Grant DMR-1507506.
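The annealing-resampling loop described in this talk can be written down compactly for a toy system. The sketch below anneals a population on a one-dimensional double-well energy: at each temperature step the walkers are reweighted by the Boltzmann-factor ratio exp(-Δβ E), resampled in proportion to those weights, and then relaxed with a few Metropolis sweeps. The potential, schedule, and population size are arbitrary choices for illustration, far removed from the 3D and 4D spin-glass simulations discussed above:

```python
import numpy as np

rng = np.random.default_rng(6)

def energy(x):
    return (x**2 - 1.0) ** 2             # toy double-well potential

R = 2000                                 # population size
betas = np.linspace(0.0, 8.0, 40)        # annealing schedule
pop = rng.uniform(-2.0, 2.0, size=R)     # "thermalized" start at beta = 0 (toy choice)

for b_old, b_new in zip(betas[:-1], betas[1:]):
    # Reweight by the Boltzmann-factor ratio and resample the population.
    w = np.exp(-(b_new - b_old) * energy(pop))
    idx = rng.choice(R, size=R, p=w / w.sum())
    pop = pop[idx]
    # A few Metropolis sweeps to re-equilibrate at the new temperature.
    for _ in range(5):
        prop = pop + rng.normal(0.0, 0.3, size=R)
        accept = np.log(rng.random(R)) < -b_new * (energy(prop) - energy(pop))
        pop = np.where(accept, prop, pop)

print("mean energy at beta = 8:", energy(pop).mean())
print("fraction of population in the right-hand well:", np.mean(pop > 0))
```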
NASA Astrophysics Data System (ADS)
Llovet, X.; Salvat, F.
2018-01-01
The accuracy of Monte Carlo simulations of EPMA measurements is primarily determined by that of the adopted interaction models and atomic relaxation data. The code PENEPMA implements the most reliable general models available, and it is known to provide a realistic description of electron transport and X-ray emission. Nonetheless, efficiency (i.e., the simulation speed) of the code is determined by a number of simulation parameters that define the details of the electron tracking algorithm, which may also have an effect on the accuracy of the results. In addition, to reduce the computer time needed to obtain X-ray spectra with a given statistical accuracy, PENEPMA allows the use of several variance-reduction techniques, defined by a set of specific parameters. In this communication we analyse and discuss the effect of using different values of the simulation and variance-reduction parameters on the speed and accuracy of EPMA simulations. We also discuss the effectiveness of using multi-core computers along with a simple practical strategy implemented in PENEPMA.
Monte Carlo-based Reconstruction in Water Cherenkov Detectors using Chroma
NASA Astrophysics Data System (ADS)
Seibert, Stanley; Latorre, Anthony
2012-03-01
We demonstrate the feasibility of event reconstruction---including position, direction, energy and particle identification---in water Cherenkov detectors with a purely Monte Carlo-based method. Using a fast optical Monte Carlo package we have written, called Chroma, in combination with several variance reduction techniques, we can estimate the value of a likelihood function for an arbitrary event hypothesis. The likelihood can then be maximized over the parameter space of interest using a form of gradient descent designed for stochastic functions. Although slower than more traditional reconstruction algorithms, this completely Monte Carlo-based technique is universal and can be applied to a detector of any size or shape, which is a major advantage during the design phase of an experiment. As a specific example, we focus on reconstruction results from a simulation of the 200 kiloton water Cherenkov far detector option for LBNE.
Neural network simulation of the atmospheric point spread function for the adjacency effect research
NASA Astrophysics Data System (ADS)
Ma, Xiaoshan; Wang, Haidong; Li, Ligang; Yang, Zhen; Meng, Xin
2016-10-01
Adjacency effect could be regarded as the convolution of the atmospheric point spread function (PSF) and the surface leaving radiance. Monte Carlo is a common method to simulate the atmospheric PSF, but it cannot yield an analytic expression, and meaningful results can be obtained only by statistical analysis of millions of samples. A backward Monte Carlo algorithm was employed to simulate photon emission and propagation in the atmosphere under different conditions. The PSF was determined by recording the photon-receiving numbers in fixed bins at different positions. A multilayer feed-forward neural network with a single hidden layer was designed to learn the relationship between the PSFs and the input condition parameters. The neural network used the back-propagation learning rule for training. Its input parameters were the atmospheric conditions, spectral range, and observing geometry. The outputs of the network were the photon-receiving numbers in the corresponding bins. Because there were too many output units for a single network, the large network was divided into a collection of smaller ones. These small networks could be run simultaneously on many workstations and/or PCs to speed up the training. It is important to note that the PSFs simulated by the Monte Carlo technique at non-nadir viewing angles are more complicated than those for nadir conditions, which brings difficulties to the design of the neural network. The results obtained show that the neural network approach could be very useful for computing the atmospheric PSF from the simulated data generated by the Monte Carlo method.
Monte Carlo based, patient-specific RapidArc QA using Linac log files.
Teke, Tony; Bergman, Alanah M; Kwa, William; Gill, Bradford; Duzenli, Cheryl; Popescu, I Antoniu
2010-01-01
A Monte Carlo (MC) based QA process to validate the dynamic beam delivery accuracy for Varian RapidArc (Varian Medical Systems, Palo Alto, CA) using Linac delivery log files (DynaLog) is presented. Using DynaLog file analysis and MC simulations, the goal of this article is to (a) confirm that adequate sampling is used in the RapidArc optimization algorithm (177 static gantry angles) and (b) to assess the physical machine performance [gantry angle and monitor unit (MU) delivery accuracy]. Ten clinically acceptable RapidArc treatment plans were generated for various tumor sites and delivered to a water-equivalent cylindrical phantom on the treatment unit. Three Monte Carlo simulations were performed to calculate dose to the CT phantom image set: (a) One using a series of static gantry angles defined by 177 control points with treatment planning system (TPS) MLC control files (planning files), (b) one using continuous gantry rotation with TPS generated MLC control files, and (c) one using continuous gantry rotation with actual Linac delivery log files. Monte Carlo simulated dose distributions are compared to both ionization chamber point measurements and with RapidArc TPS calculated doses. The 3D dose distributions were compared using a 3D gamma-factor analysis, employing a 3%/3 mm distance-to-agreement criterion. The dose difference between MC simulations, TPS, and ionization chamber point measurements was less than 2.1%. For all plans, the MC calculated 3D dose distributions agreed well with the TPS calculated doses (gamma-factor values were less than 1 for more than 95% of the points considered). Machine performance QA was supplemented with an extensive DynaLog file analysis. A DynaLog file analysis showed that leaf position errors were less than 1 mm for 94% of the time and there were no leaf errors greater than 2.5 mm. The mean standard deviation in MU and gantry angle were 0.052 MU and 0.355 degrees, respectively, for the ten cases analyzed. The accuracy and flexibility of the Monte Carlo based RapidArc QA system were demonstrated. Good machine performance and accurate dose distribution delivery of RapidArc plans were observed. The sampling used in the TPS optimization algorithm was found to be adequate.
NASA Astrophysics Data System (ADS)
Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.
2017-10-01
We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
Assessing the performance of a covert automatic target recognition algorithm
NASA Astrophysics Data System (ADS)
Ehrman, Lisa M.; Lanterman, Aaron D.
2005-05-01
Passive radar systems exploit illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. Doing so allows them to operate covertly and inexpensively. Our research seeks to enhance passive radar systems by adding automatic target recognition (ATR) capabilities. In previous papers we proposed conducting ATR by comparing the radar cross section (RCS) of aircraft detected by a passive radar system to the precomputed RCS of aircraft in the target class. To effectively model the low-frequency setting, the comparison is made via a Rician likelihood model. Monte Carlo simulations indicate that the approach is viable. This paper builds on that work by developing a method for quickly assessing the potential performance of the ATR algorithm without using exhaustive Monte Carlo trials. This method exploits the relation between the probability of error in a binary hypothesis test under the Bayesian framework to the Chernoff information. Since the data are well-modeled as Rician, we begin by deriving a closed-form approximation for the Chernoff information between two Rician densities. This leads to an approximation for the probability of error in the classification algorithm that is a function of the number of available measurements. We conclude with an application that would be particularly cumbersome to accomplish via Monte Carlo trials, but that can be quickly addressed using the Chernoff information approach. This application evaluates the length of time that an aircraft must be tracked before the probability of error in the ATR algorithm drops below a desired threshold.
TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuemann, J; Grassberger, C; Paganetti, H
2014-06-15
Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field specific properties. Comparisons based on target coverage indices (EUD, D95, D90, and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD), and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80 − R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend treatment plan verification using Monte Carlo simulations for patients with complex geometries.
NASA Astrophysics Data System (ADS)
Zhang, Guannan; Del-Castillo-Negrete, Diego
2017-10-01
Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the PDFs of RE. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than the brute-force MC methods, which can significantly reduce the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as the direct MC code, which paves the way for conducting large-scale RE simulations. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.
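The quantity that the backward scheme targets can be written, via the Feynman-Kac formula, as a hitting probability of a stochastic process. The forward (brute-force) Monte Carlo estimator of such a probability, shown below for an invented one-dimensional drift-diffusion, is the baseline against which the backward approach gains its efficiency; it is not the BSDE algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(7)

def runaway_probability(x0, drift=-0.2, diffusion=0.6, x_run=2.0, x_stop=-2.0,
                        t_max=10.0, dt=0.01, n_paths=20000):
    """P(a path started at x0 reaches x_run before x_stop or t_max), by forward MC.

    Toy 1D Ito process dX = drift*dt + diffusion*dW; all parameters are invented.
    """
    x = np.full(n_paths, float(x0))
    alive = np.ones(n_paths, dtype=bool)
    hit = np.zeros(n_paths, dtype=bool)
    for _ in range(int(t_max / dt)):
        x[alive] += drift * dt + diffusion * np.sqrt(dt) * rng.normal(size=alive.sum())
        hit |= alive & (x >= x_run)                     # crossed the "runaway" boundary
        alive &= (x > x_stop) & (x < x_run)             # absorb paths at either boundary
        if not alive.any():
            break
    return hit.mean()

for x0 in (0.0, 1.0, 1.8):
    print(f"x0 = {x0:3.1f}  ->  runaway probability ~ {runaway_probability(x0):.3f}")
```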
Improved transition path sampling methods for simulation of rare events
NASA Astrophysics Data System (ADS)
Chopra, Manan; Malshe, Rohit; Reddy, Allam S.; de Pablo, J. J.
2008-04-01
The free energy surfaces of a wide variety of systems encountered in physics, chemistry, and biology are characterized by the existence of deep minima separated by numerous barriers. One of the central aims of recent research in computational chemistry and physics has been to determine how transitions occur between deep local minima on rugged free energy landscapes, and transition path sampling (TPS) Monte-Carlo methods have emerged as an effective means for numerical investigation of such transitions. Many of the shortcomings of TPS-like approaches generally stem from their high computational demands. Two new algorithms are presented in this work that improve the efficiency of TPS simulations. The first algorithm uses biased shooting moves to render the sampling of reactive trajectories more efficient. The second algorithm is shown to substantially improve the accuracy of the transition state ensemble by introducing a subset of local transition path simulations in the transition state. The system considered in this work consists of a two-dimensional rough energy surface that is representative of numerous systems encountered in applications. When taken together, these algorithms provide gains in efficiency of over two orders of magnitude when compared to traditional TPS simulations.
Monte Carlo Calculations of Polarized Microwave Radiation Emerging from Cloud Structures
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Roberti, Laura
1998-01-01
The last decade has seen tremendous growth in cloud dynamical and microphysical models that are able to simulate storms and storm systems with very high spatial resolution, typically of the order of a few kilometers. The fairly realistic distributions of cloud and hydrometeor properties that these models generate have in turn led to a renewed interest in the three-dimensional microwave radiative transfer modeling needed to understand the effect of cloud and rainfall inhomogeneities upon microwave observations. Monte Carlo methods, and particularly backwards Monte Carlo methods, have shown themselves to be very desirable due to the quick convergence of the solutions. Unfortunately, backwards Monte Carlo methods are not well suited to treat polarized radiation. This study reviews the existing Monte Carlo methods and presents a new polarized Monte Carlo radiative transfer code. The code is based on a forward scheme but uses aliasing techniques to keep the computational requirements equivalent to the backwards solution. Radiative transfer computations have been performed using a microphysical-dynamical cloud model and the results are presented together with the algorithm description.
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.
NASA Astrophysics Data System (ADS)
Tubiana, Jerome; Kass, Alex J.; Newman, Maya Y.; Levitz, David
2015-07-01
Detecting pre-cancer in epithelial tissues such as the cervix is a challenging task in low-resource settings. In an effort to achieve a low-cost cervical cancer screening and diagnostic method for use in low-resource settings, mobile colposcopes that use a smartphone as their engine have been developed. Designing image analysis software suited for this task requires proper modeling of light propagation from the abnormalities inside the tissue to the smartphone camera. Different simulation methods have been developed in the past, by solving light diffusion equations or running Monte Carlo simulations. Several algorithms exist for the latter, including MCML and the recently developed MCX. For imaging purposes, the observable parameter of interest is the reflectance profile of a tissue under a specific pattern of illumination and optical setup. Extensions of the MCX algorithm to simulate this observable under these conditions were developed. These extensions were validated against MCML and diffusion theory for the simple case of contact measurements, and reflectance profiles under colposcopy imaging geometry were also simulated. To validate this model, the diffuse reflectance profiles of homogeneous tissue phantoms were measured with a spectrometer under several illumination and optical settings. The measured reflectance profiles showed a non-trivial deviation across the spectrum. Measurements from an added-absorber experiment on a series of phantoms showed that the absorption of the dye scales linearly when fit to both the MCX and diffusion models. More work is needed to integrate a pupil into the experiment.
A quantum–quantum Metropolis algorithm
Yung, Man-Hong; Aspuru-Guzik, Alán
2012-01-01
The classical Metropolis sampling method is a cornerstone of many statistical modeling applications that range from physics, chemistry, and biology to economics. This method is particularly suitable for sampling the thermal distributions of classical systems. The challenge of extending this method to the simulation of arbitrary quantum systems is that, in general, eigenstates of quantum Hamiltonians cannot be obtained efficiently with a classical computer. However, this challenge can be overcome by quantum computers. Here, we present a quantum algorithm which fully generalizes the classical Metropolis algorithm to the quantum domain. The meaning of quantum generalization is twofold: The proposed algorithm is not only applicable to both classical and quantum systems, but also offers a quantum speedup relative to the classical counterpart. Furthermore, unlike the classical method of quantum Monte Carlo, this quantum algorithm does not suffer from the negative-sign problem associated with fermionic systems. Applications of this algorithm include the study of low-temperature properties of quantum systems, such as the Hubbard model, and preparing the thermal states of sizable molecules to simulate, for example, chemical reactions at an arbitrary temperature. PMID:22215584
Million-body star cluster simulations: comparisons between Monte Carlo and direct N-body
NASA Astrophysics Data System (ADS)
Rodriguez, Carl L.; Morscher, Meagan; Wang, Long; Chatterjee, Sourav; Rasio, Frederic A.; Spurzem, Rainer
2016-12-01
We present the first detailed comparison between million-body globular cluster simulations computed with a Hénon-type Monte Carlo code, CMC, and a direct N-body code, NBODY6++GPU. Both simulations start from an identical cluster model with 10⁶ particles, and include all of the relevant physics needed to treat the system in a highly realistic way. With the two codes `frozen' (no fine-tuning of any free parameters or internal algorithms of the codes) we find good agreement in the overall evolution of the two models. Furthermore, we find that in both models, large numbers of stellar-mass black holes (>1000) are retained for 12 Gyr. Thus, the very accurate direct N-body approach confirms recent predictions that black holes can be retained in present-day, old globular clusters. We find only minor disagreements between the two models and attribute these to the small-N dynamics driving the evolution of the cluster core for which the Monte Carlo assumptions are less ideal. Based on the overwhelming general agreement between the two models computed using these vastly different techniques, we conclude that our Monte Carlo approach, which is more approximate, but dramatically faster compared to the direct N-body, is capable of producing an accurate description of the long-term evolution of massive globular clusters even when the clusters contain large populations of stellar-mass black holes.
NASA Astrophysics Data System (ADS)
Aklan, B.; Jakoby, B. W.; Watson, C. C.; Braun, H.; Ritt, P.; Quick, H. H.
2015-06-01
A simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop an accurate Monte Carlo (MC) simulation of a fully integrated 3T PET/MR hybrid imaging system (Siemens Biograph mMR). The PET/MR components of the Biograph mMR were simulated in order to allow a detailed study of the effect of variations of the system design on the PET performance, which are not easy to access and measure on a real PET/MR system. The 3T static magnetic field of the MR system was taken into account in all Monte Carlo simulations. The validation of the MC model was carried out against actual measurements performed on the PET/MR system by following the NEMA (National Electrical Manufacturers Association) NU 2-2007 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction, and count rate capability. The validated system model was then used for two different applications. The first application focused on investigating the effect of an extension of the PET field-of-view on the PET performance of the PET/MR system. The second application dealt with simulating a modified system timing resolution and coincidence time window of the PET detector electronics in order to simulate time-of-flight (TOF) PET detection. A dedicated phantom was modeled to investigate the impact of TOF on overall PET image quality. Simulation results showed that the overall divergence between simulated and measured data was less than 10%. Varying the detector geometry showed that the system sensitivity and noise equivalent count rate of the PET/MR system increased progressively with an increasing number of axial detector block rings, as expected. TOF-based PET reconstructions of the modeled phantom showed an improvement in signal-to-noise ratio and image contrast compared with the conventional non-TOF PET reconstructions. In conclusion, the validated MC simulation model of an integrated PET/MR system with an overall accuracy error of less than 10% can now be used for further MC simulation applications such as development of hardware components as well as for testing of new PET/MR software algorithms, such as assessment of point-spread function-based reconstruction algorithms.
Monte Carlo simulation of a photodisintegration of ³H experiment in Geant4
NASA Astrophysics Data System (ADS)
Gray, Isaiah
2013-10-01
An upcoming experiment involving photodisintegration of ³H at the High Intensity Gamma-Ray Source facility at Duke University has been simulated in the software package Geant4. CAD models of silicon detectors and wire chambers were imported from Autodesk Inventor using the program FastRad and the Geant4 GDML importer. Sensitive detectors were associated with the appropriate logical volumes in the exported GDML file so that changes in detector geometry will be easily manifested in the simulation. Probability distribution functions for the energy and direction of outgoing protons were generated using numerical tables from previous theory, and energies and directions were sampled from these distributions using a rejection sampling algorithm. The simulation will be a useful tool to optimize detector geometry, estimate background rates, and test data analysis algorithms. This work was supported by the Triangle Universities Nuclear Laboratory REU program at Duke University.
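The sampling step above lends itself to a compact illustration. The following is a minimal sketch of rejection sampling from an arbitrary one-dimensional density; the function names and the toy angular density are illustrative assumptions, not the code used in the experiment.

```python
import numpy as np

def rejection_sample(pdf, x_min, x_max, pdf_max, n_samples, rng=None):
    """Draw samples from an arbitrary 1D density by rejection sampling.

    pdf     : callable returning the (possibly unnormalized) density
    pdf_max : any upper bound on pdf over [x_min, x_max]
    """
    rng = rng or np.random.default_rng()
    samples = []
    while len(samples) < n_samples:
        x = rng.uniform(x_min, x_max)   # propose uniformly in the support
        u = rng.uniform(0.0, pdf_max)   # uniform vertical coordinate
        if u < pdf(x):                  # accept with probability pdf(x)/pdf_max
            samples.append(x)
    return np.array(samples)

# Toy usage: sample outgoing-proton polar angles from an illustrative density
angles = rejection_sample(lambda t: np.sin(t) ** 2, 0.0, np.pi, 1.0, 10_000)
```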
LEGEND, a LEO-to-GEO Environment Debris Model
NASA Technical Reports Server (NTRS)
Liou, Jer Chyi; Hall, Doyle T.
2013-01-01
LEGEND (LEO-to-GEO Environment Debris model) is a three-dimensional orbital debris evolutionary model that is capable of simulating the historical and future debris populations in the near-Earth environment. The historical component in LEGEND adopts a deterministic approach to mimic the known historical populations. Launched rocket bodies, spacecraft, and mission-related debris (rings, bolts, etc.) are added to the simulated environment. Known historical breakup events are reproduced, and fragments down to 1 mm in size are created. The LEGEND future projection component adopts a Monte Carlo approach and uses an innovative pair-wise collision probability evaluation algorithm to simulate the future breakups and the growth of the debris populations. This algorithm is based on a new "random sampling in time" approach that preserves characteristics of the traditional approach and captures the rapidly changing nature of the orbital debris environment. LEGEND is a Fortran 90-based numerical simulation program. It operates in a UNIX/Linux environment.
A study of hydrogen diffusion flames using PDF turbulence model
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
The application of probability density function (pdf) turbulence models is addressed. For the purpose of accurate prediction of turbulent combustion, an algorithm that combines a conventional computational fluid dynamic (CFD) flow solver with the Monte Carlo simulation of the pdf evolution equation was developed. The algorithm was validated using experimental data for a heated turbulent plane jet. The study of H2-F2 diffusion flames was carried out using this algorithm. Numerical results compared favorably with experimental data. The computations show that the flame center shifts as the equivalence ratio changes, and that for the same equivalence ratio, similarity solutions for flames exist.
Torsional anharmonicity in the conformational thermodynamics of flexible molecules
NASA Astrophysics Data System (ADS)
Miller, Thomas F., III; Clary, David C.
We present an algorithm for calculating the conformational thermodynamics of large, flexible molecules that combines ab initio electronic structure theory calculations with a torsional path integral Monte Carlo (TPIMC) simulation. The new algorithm overcomes the previous limitations of the TPIMC method by including the thermodynamic contributions of non-torsional vibrational modes and by affordably incorporating the ab initio calculation of conformer electronic energies, and it improves the conventional ab initio treatment of conformational thermodynamics by accounting for the anharmonicity of the torsional modes. Using previously published ab initio results and new TPIMC calculations, we apply the algorithm to the conformers of the adrenaline molecule.
Performances of the New Real Time Tsunami Detection Algorithm applied to tide gauges data
NASA Astrophysics Data System (ADS)
Chierici, F.; Embriaco, D.; Morucci, S.
2017-12-01
Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection (TDA) based on the real-time tide removal and real-time band-pass filtering of seabed pressure time series acquired by Bottom Pressure Recorders. The TDA algorithm greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is designed to be used also in autonomous early warning systems with a set of input parameters and procedures which can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test the tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. In this work we present the performance of the TDA algorithm applied to tide gauge data. We have adapted the new tsunami detection algorithm and the Monte Carlo test methodology to tide gauges. Sea level data acquired by coastal tide gauges in different locations and environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami event generated by the Tohoku earthquake on March 11th 2011, using data recorded by several tide gauges scattered all over the Pacific area.
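As a rough illustration of the band-pass-and-threshold idea and of the Monte Carlo test methodology, the sketch below injects a synthetic tsunami into recorded background series and tallies detection probability and delay. The filter band, threshold, and function names are assumptions chosen for illustration; the actual TDA uses its own real-time de-tiding and filtering procedures.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect(series, fs, band=(1.0 / 7200.0, 1.0 / 600.0), threshold=0.03):
    """Band-pass the de-tided sea-level series (band in Hz) and flag the first
    sample whose filtered amplitude exceeds a threshold (simplified stand-in)."""
    b, a = butter(2, band, btype='band', fs=fs)
    filtered = filtfilt(b, a, series)
    hits = np.flatnonzero(np.abs(filtered) > threshold)
    return hits[0] if hits.size else None

def monte_carlo_performance(noise_records, synthetic_tsunami, fs, n_trials, rng):
    """Detection probability and mean delay from injecting a synthetic tsunami
    at random times into measured background records."""
    delays = []
    for _ in range(n_trials):
        rec = noise_records[rng.integers(len(noise_records))].copy()
        t0 = rng.integers(0, len(rec) - len(synthetic_tsunami))
        rec[t0:t0 + len(synthetic_tsunami)] += synthetic_tsunami
        hit = detect(rec, fs)
        if hit is not None and hit >= t0:
            delays.append((hit - t0) / fs)
    prob = len(delays) / n_trials
    return prob, (np.mean(delays) if delays else None)
```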
The Rational Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Clark, Michael
2006-12-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing costs of all popular fermion formulations.
Chattopadhyay, Aditya; Zheng, Min; Waller, Mark Paul; Priyakumar, U Deva
2018-05-23
Knowledge of the structure and dynamics of biomolecules is essential for elucidating the underlying mechanisms of biological processes. Given the stochastic nature of many biological processes, like protein unfolding, it is almost impossible that two independent simulations will generate the exact same sequence of events, which makes direct analysis of simulations difficult. Statistical models such as Markov chains and transition networks help shed light on the mechanistic nature of such processes by predicting long-time dynamics of these systems from short simulations. However, such methods fall short in analyzing trajectories with partial or no temporal information, for example, replica exchange molecular dynamics or Monte Carlo simulations. In this work we propose a probabilistic algorithm, borrowing concepts from graph theory and machine learning, to extract reactive pathways from molecular trajectories in the absence of temporal data. A suitable vector representation was chosen to represent each frame in the macromolecular trajectory (as a series of interaction and conformational energies) and dimensionality reduction was performed using principal component analysis (PCA). The trajectory was then clustered using a density-based clustering algorithm, where each cluster represents a metastable state on the potential energy surface (PES) of the biomolecule under study. A graph was created with these clusters as nodes and with edges learnt using an iterative expectation-maximization algorithm. The most reactive path is conceived as the widest path along this graph. We have tested our method on an RNA hairpin unfolding trajectory in aqueous urea solution. Our method makes understanding the mechanism of unfolding in the RNA hairpin molecule more tractable. As this method does not rely on temporal data, it can be used to analyze trajectories from Monte Carlo sampling techniques and replica exchange molecular dynamics (REMD).
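The "widest path" mentioned above is the maximum-bottleneck path through the weighted graph of metastable states. A minimal sketch of such a search, using a Dijkstra-like priority queue, is given below; the graph representation and names are assumptions for illustration, not the authors' implementation.

```python
import heapq

def widest_path(graph, source, target):
    """Maximum-bottleneck ('widest') path in a weighted graph.

    graph : dict mapping node -> list of (neighbour, edge_weight) pairs,
            where weights play the role of learnt transition strengths.
    Returns (bottleneck_width, path) from source to target.
    """
    best = {source: float('inf')}
    prev = {}
    heap = [(-float('inf'), source)]          # max-heap via negated widths
    while heap:
        neg_w, node = heapq.heappop(heap)
        width = -neg_w
        if node == target:                    # reconstruct the path found
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return width, path[::-1]
        for nbr, w in graph.get(node, []):
            cand = min(width, w)              # bottleneck along this extension
            if cand > best.get(nbr, 0.0):
                best[nbr] = cand
                prev[nbr] = node
                heapq.heappush(heap, (-cand, nbr))
    return 0.0, []
```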
NASA Astrophysics Data System (ADS)
Sempau, Josep; Wilderman, Scott J.; Bielajew, Alex F.
2000-08-01
A new Monte Carlo (MC) algorithm, the `dose planning method' (DPM), and its associated computer program for simulating the transport of electrons and photons in radiotherapy class problems employing primary electron beams, is presented. DPM is intended to be a high-accuracy MC alternative to the current generation of treatment planning codes which rely on analytical algorithms based on an approximate solution of the photon/electron Boltzmann transport equation. For primary electron beams, DPM is capable of computing 3D dose distributions (in 1 mm³ voxels) which agree to within 1% in dose maximum with widely used and exhaustively benchmarked general-purpose public-domain MC codes in only a fraction of the CPU time. A representative problem, the simulation of 1 million 10 MeV electrons impinging upon a water phantom of 128³ voxels of 1 mm on a side, can be performed by DPM in roughly 3 min on a modern desktop workstation. DPM achieves this performance by employing transport mechanics and electron multiple scattering distribution functions which have been derived to permit long transport steps (of the order of 5 mm) which can cross heterogeneity boundaries. The underlying algorithm is a `mixed' class simulation scheme, with differential cross sections for hard inelastic collisions and bremsstrahlung events described in an approximate manner to simplify their sampling. The continuous energy loss approximation is employed for energy losses below some predefined thresholds, and photon transport (including Compton, photoelectric absorption and pair production) is simulated in an analogue manner. The δ-scattering method (Woodcock tracking) is adopted to minimize the computational costs of transporting photons across voxels.
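For reference, the δ-scattering (Woodcock tracking) idea can be sketched in a few lines: the photon is tracked through a fictitious homogeneous medium with a majorant attenuation coefficient, and candidate interaction sites are accepted or rejected according to the local coefficient. The sketch below is schematic (boundary handling omitted) and is not the DPM source code.

```python
import numpy as np

def woodcock_step(pos, direction, mu_of, mu_max, rng):
    """Advance a photon to its next *real* interaction site by delta scattering.

    pos, direction : 3-vectors (direction assumed normalised)
    mu_of          : callable giving the local attenuation coefficient at a point
    mu_max         : majorant attenuation coefficient over the whole phantom
    """
    pos = np.asarray(pos, dtype=float)
    while True:
        # free path sampled from the homogeneous majorant medium
        s = -np.log(rng.random()) / mu_max
        pos = pos + s * direction
        # accept as a real interaction with probability mu(pos)/mu_max;
        # otherwise it is a fictitious (delta) interaction and tracking continues
        if rng.random() < mu_of(pos) / mu_max:
            return pos
```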
Finding the Missing Physics: Simulating Polydisperse Polymer Melts
NASA Astrophysics Data System (ADS)
Rorrer, Nichoals; Dorgan, John
2014-03-01
A Monte Carlo algorithm has been developed to model polydisperse polymer melts. For the first time, this enables the specification of a predetermined molecular weight distribution for lattice based simulations. It is demonstrated how to map an arbitrary probability distribution onto a discrete number of chains residing on an fcc lattice. The resulting algorithm is able to simulate a wide variety of behaviors for polydisperse systems including confinement effects, shear flow, and parabolic flow. The dynamic version of the algorithm accurately captures Rouse dynamics for short polymer chains, and reptation-like dynamics for longer chain lengths.1 When polydispersity is introduced, smaller Rouse times and a broadened transition between different scaling regimes are observed. Rouse times also decrease under confinement for both polydisperse and monodisperse systems and chain length dependent migration effects are observed. The steady-state version of the algorithm enables the simulation of flow and when polydisperse systems are subject to parabolic (Poiseuille) flow, a migration phenomenon based on chain length is again present. These and other phenomena highlight the importance of including polydispersity in obtaining physically realistic simulations of polymeric melts. 1. Dorgan, J.R.; Rorrer, N.A.; Maupin, C.M., Macromolecules 2012, 45(21), 8833-8840. Work funded by the Fluid Dynamics program of the National Science Foundation under grant CBET-1067707.
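A simple way to map a target molecular weight distribution onto a discrete set of chains is to sample chain lengths from the discretized, normalized distribution. The sketch below shows this under the assumption of a Flory-type example distribution; the function names are chosen for illustration and are not taken from the paper.

```python
import numpy as np

def chains_from_distribution(pdf, lengths, n_chains, rng=None):
    """Map an arbitrary chain-length distribution onto a discrete set of chains.

    pdf     : callable giving the (unnormalised) number fraction of each length
    lengths : array of allowed chain lengths on the lattice
    Returns an array of n_chains sampled chain lengths.
    """
    rng = rng or np.random.default_rng()
    weights = np.array([pdf(n) for n in lengths], dtype=float)
    weights /= weights.sum()                       # normalise to a probability vector
    return rng.choice(lengths, size=n_chains, p=weights)

# Example: a Flory (most-probable) distribution with extent of reaction p = 0.99
p = 0.99
sample = chains_from_distribution(lambda n: (1 - p) * p ** (n - 1),
                                  np.arange(1, 2000), 500)
```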
NASA Astrophysics Data System (ADS)
Rose, Michael Benjamin
A novel trajectory and attitude control and navigation analysis tool for powered ascent is developed. The tool is capable of rapid trade-space analysis and is designed to ultimately reduce turnaround time for launch vehicle design, mission planning, and redesign work. It is streamlined to quickly determine trajectory and attitude control dispersions, propellant dispersions, orbit insertion dispersions, and navigation errors and their sensitivities to sensor errors, actuator execution uncertainties, and random disturbances. The tool is developed by applying both Monte Carlo and linear covariance analysis techniques to a closed-loop, launch vehicle guidance, navigation, and control (GN&C) system. The nonlinear dynamics and flight GN&C software models of a closed-loop, six-degree-of-freedom (6-DOF), Monte Carlo simulation are formulated and developed. The nominal reference trajectory (NRT) for the proposed lunar ascent trajectory is defined and generated. The Monte Carlo truth models and GN&C algorithms are linearized about the NRT, the linear covariance equations are formulated, and the linear covariance simulation is developed. The performance of the launch vehicle GN&C system is evaluated using both Monte Carlo and linear covariance techniques and their trajectory and attitude control dispersion, propellant dispersion, orbit insertion dispersion, and navigation error results are validated and compared. Statistical results from linear covariance analysis are generally within 10% of Monte Carlo results, and in most cases the differences are less than 5%. This is an excellent result given the many complex nonlinearities that are embedded in the ascent GN&C problem. Moreover, the real value of this tool lies in its speed, where the linear covariance simulation is 1036.62 times faster than the Monte Carlo simulation. Although the application and results presented are for a lunar, single-stage-to-orbit (SSTO), ascent vehicle, the tools, techniques, and mathematical formulations that are discussed are applicable to ascent on Earth or other planets as well as other rocket-powered systems such as sounding rockets and ballistic missiles.
Organization and use of a Software/Hardware Avionics Research Program (SHARP)
NASA Technical Reports Server (NTRS)
Karmarkar, J. S.; Kareemi, M. N.
1975-01-01
The organization and use is described of the software/hardware avionics research program (SHARP) developed to duplicate the automatic portion of the STOLAND simulator system, on a general-purpose computer system (i.e., IBM 360). The program's uses are: (1) to conduct comparative evaluation studies of current and proposed airborne and ground system concepts via single run or Monte Carlo simulation techniques, and (2) to provide a software tool for efficient algorithm evaluation and development for the STOLAND avionics computer.
2006-11-30
except in the simplest of circumstances. This belief has driven the computational research community to devise clever kinetic Monte Carlo (KMC) ... KMC routine is very slow; cutting the error in half requires four times the number of simulations. Since a single simulation may contain huge numbers ... subintervals [9–14]. Both approximation types, system partitioning and τ leaping, have been very successful in increasing the scope of problems to which KMC
NASA Astrophysics Data System (ADS)
Hu, Zixi; Yao, Zhewei; Li, Jinglai
2017-03-01
Many scientific and engineering problems require performing Bayesian inference for unknowns of infinite dimension. In such problems, many standard Markov Chain Monte Carlo (MCMC) algorithms become arbitrarily slow under mesh refinement, which is referred to as being dimension dependent. To this end, a family of dimension-independent MCMC algorithms, known as the preconditioned Crank-Nicolson (pCN) methods, was proposed to sample the infinite dimensional parameters. In this work we develop an adaptive version of the pCN algorithm, where the covariance operator of the proposal distribution is adjusted based on the sampling history to improve the simulation efficiency. We show that the proposed algorithm satisfies an important ergodicity condition under some mild assumptions. Finally we provide numerical examples to demonstrate the performance of the proposed method.
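A minimal sketch of the (non-adaptive) pCN update is given below, assuming a centered Gaussian prior N(0, C) supplied through a sampling routine, with Phi denoting the data misfit. The adaptive variant proposed in the paper additionally tunes the proposal covariance from the sampling history, which is not shown here.

```python
import numpy as np

def pcn_mcmc(neg_log_like, sample_prior, u0, beta, n_steps, rng=None):
    """Preconditioned Crank-Nicolson MCMC for a Gaussian prior N(0, C).

    neg_log_like : callable Phi(u), the data-misfit (negative log-likelihood)
    sample_prior : callable returning one draw xi ~ N(0, C)
    beta         : step-size parameter in (0, 1]
    """
    rng = rng or np.random.default_rng()
    u, phi_u = np.asarray(u0, dtype=float), neg_log_like(u0)
    chain = []
    for _ in range(n_steps):
        xi = sample_prior()
        v = np.sqrt(1.0 - beta ** 2) * u + beta * xi   # pCN proposal
        phi_v = neg_log_like(v)
        # prior-reversible proposal: acceptance depends on the likelihood only
        if np.log(rng.random()) < phi_u - phi_v:
            u, phi_u = v, phi_v
        chain.append(u.copy())
    return chain
```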
Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan
2013-01-01
In this article, we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate parameters of interest; namely, sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates both obtained using the fast MCEM algorithm through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard error similarly to the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493
Magnetocaloric effect in Sr2CrIrO6 double perovskite: Monte Carlo simulation
NASA Astrophysics Data System (ADS)
El Rhazouani, O.; Slassi, A.; Ziat, Y.; Benyoussef, A.
2017-05-01
Monte Carlo simulation (MCS) combined with the Metropolis algorithm has been performed to study the magnetocaloric effect (MCE) in the promising double perovskite (DP) Sr2CrIrO6, which has not so far been synthesized. This paper presents the global magneto-thermodynamic behavior of the Sr2CrIrO6 compound in terms of the MCE and discusses the behavior in comparison to other DPs. Thermal dependence of the magnetization has been investigated for different values of reduced external magnetic field. Thermal magnetic entropy and its change have been obtained. The adiabatic temperature change and the relative cooling power have been established. Through the obtained results, Sr2CrIrO6 DP could have some potential applications for magnetic refrigeration over a wide temperature range above room temperature and at large magnetic fields.
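For orientation, a single-spin-flip Metropolis sweep on a simple Ising-like lattice is sketched below. It is only an illustration of the Metropolis step used in such MCS studies; the actual Sr2CrIrO6 model involves mixed Cr/Ir spins and a more elaborate Hamiltonian.

```python
import numpy as np

def metropolis_sweep(spins, J, h, T, rng):
    """One Metropolis sweep over a 2D Ising-like lattice (illustrative only).

    spins : square array of +/-1 values; J, h : coupling and field; T : temperature.
    """
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * (J * nn + h)      # energy cost of flipping (i, j)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

# Example: 32x32 lattice, one sweep at T = 2.0 (units of J/k_B)
rng = np.random.default_rng(0)
lattice = rng.choice([-1, 1], size=(32, 32))
metropolis_sweep(lattice, J=1.0, h=0.0, T=2.0, rng=rng)
```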
NASA Astrophysics Data System (ADS)
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-07-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mabhouti, H; Sanli, E; Cebe, M
Purpose: Brain stereotactic radiosurgery involves the use of precisely directed, single-session radiation to create a desired radiobiologic response within the brain target with acceptable minimal effects on surrounding structures or tissues. In this study, a dosimetric comparison of Truebeam 2.0 and Cyberknife M6 treatment plans was made. Methods: For the Truebeam 2.0 machine, treatment planning was done using a 2 full-arc VMAT technique with a 6 FFF beam on the CT scan of a Rando phantom, simulating stereotactic treatment of one brain metastasis. The dose distribution was calculated using the Eclipse treatment planning system with the Acuros XB algorithm. The treatment planning of the same target was also done for the Cyberknife M6 machine with the Multiplan treatment planning system using a Monte Carlo algorithm. Using the same film batch, the net OD to dose calibration curve was obtained on both machines by delivering 0-800 cGy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. Dose distributions were measured using EBT3 film dosimeters. The measured and calculated doses were compared. Results: The dose distribution in the target and 2 cm beyond the target edge were calculated on the TPSs and measured using EBT3 film. For the Cyberknife plans, the gamma analysis passing rates between measured and calculated dose distributions were 99.2% and 96.7% for the target and the peripheral region of the target, respectively. For the Truebeam plans, the gamma analysis passing rates were 99.1% and 95.5% for the target and the peripheral region of the target, respectively. Conclusion: Although the target dose distribution was calculated accurately by both the Acuros XB and Monte Carlo algorithms, the Monte Carlo algorithm predicts the dose distribution around the peripheral region of the target more accurately than the Acuros algorithm.
MCViNE - An object oriented Monte Carlo neutron ray tracing simulation package
Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; ...
2015-11-28
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example we used object oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features together in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages which facilitates porting instrument models from those codes. Furthermore it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.
An Aircraft Separation Algorithm with Feedback and Perturbation
NASA Technical Reports Server (NTRS)
White, Allan L.
2010-01-01
A separation algorithm is a set of rules that tell aircraft how to maneuver in order to maintain a minimum distance between them. This paper investigates demonstrating, by means of simulation, that separation algorithms satisfy the FAA requirement on the occurrence of incidents. Any demonstration that a separation algorithm, or any other aspect of flight, satisfies the FAA requirement is a challenge because of the stringent nature of the requirement and the complexity of airspace operations. The paper begins with a probabilistic and statistical analysis of both the FAA requirement and the demonstration of compliance with it by a Monte Carlo approach. It considers the geometry of maintaining separation when one plane must change its flight path. It then develops a simple feedback control law that guides the planes on their paths. The presence of feedback control permits the introduction of perturbations, and the stochastic nature of the chosen perturbation is examined. The simulation program is described. This paper is an early effort in the realistic demonstration of a stringent requirement. Much remains to be done.
GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes
NASA Astrophysics Data System (ADS)
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-04-01
Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of GPU's outstanding computing properties, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.
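The FTLE field underlying LCS extraction can be written compactly once the flow maps are available: the Cauchy-Green strain tensor is formed from the flow-map gradient and its largest eigenvalue is taken. The NumPy sketch below assumes particle positions advected on a regular 2D seeding grid; it is a CPU illustration of the quantity computed, not the CUDA block-decomposition implementation.

```python
import numpy as np

def ftle_field(xT, dx, dy, T):
    """Finite-time Lyapunov exponent from flow maps on a regular 2D grid.

    xT     : array of shape (ny, nx, 2) with final particle positions
    dx, dy : grid spacings of the seeding grid
    T      : integration time span
    """
    # flow-map gradient by finite differences
    dphi = np.empty(xT.shape[:2] + (2, 2))
    dphi[..., 0, 0] = np.gradient(xT[..., 0], dx, axis=1)
    dphi[..., 0, 1] = np.gradient(xT[..., 0], dy, axis=0)
    dphi[..., 1, 0] = np.gradient(xT[..., 1], dx, axis=1)
    dphi[..., 1, 1] = np.gradient(xT[..., 1], dy, axis=0)
    # Cauchy-Green strain tensor C = (grad phi)^T (grad phi) and its largest eigenvalue
    cg = np.einsum('...ki,...kj->...ij', dphi, dphi)
    lam_max = np.linalg.eigvalsh(cg)[..., -1]
    return np.log(np.sqrt(np.maximum(lam_max, 1e-300))) / abs(T)
```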
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krein, Gastao; Leme, Rafael R.; Woitek, Marcio
Traditional Monte Carlo simulations of QCD in the presence of a baryon chemical potential are plagued by the complex phase problem and new numerical approaches are necessary for studying the phase diagram of the theory. In this work we consider a Z₃ Polyakov loop model for the deconfining phase transition in QCD and discuss how a flux representation of the model in terms of dimer and monomer variables solves the complex action problem. We present results of numerical simulations using a worm algorithm for the specific heat and two-point correlation function of Polyakov loops. Evidence of a first-order deconfinement phase transition is discussed.
Performance bounds for matched field processing in subsurface object detection applications
NASA Astrophysics Data System (ADS)
Sahin, Adnan; Miller, Eric L.
1998-09-01
In recent years there has been considerable interest in the use of ground penetrating radar (GPR) for the non-invasive detection and localization of buried objects. In a previous work, we have considered the use of high resolution array processing methods for solving these problems for measurement geometries in which an array of electromagnetic receivers observes the fields scattered by the subsurface targets in response to a plane wave illumination. Our approach uses the MUSIC algorithm in a matched field processing (MFP) scheme to determine both the range and the bearing of the objects. In this paper we derive the Cramer-Rao bounds (CRB) for this MUSIC-based approach analytically. Analysis of the theoretical CRB has shown that there exists an optimum inter-element spacing of array elements for which the CRB is minimum. Furthermore, the optimum inter-element spacing minimizing CRB is smaller than the conventional half wavelength criterion. The theoretical bounds are then verified for two estimators using Monte-Carlo simulations. The first estimator is the MUSIC-based MFP and the second one is the maximum likelihood based MFP. The two approaches differ in the cost functions they optimize. We observe that Monte-Carlo simulated error variances always lie above the values established by CRB. Finally, we evaluate the performance of our MUSIC-based algorithm in the presence of model mismatches. Since the detection algorithm strongly depends on the model used, we have tested the performance of the algorithm when the object radius used in the model is different from the true radius. This analysis reveals that the algorithm is still capable of localizing the objects with a bias depending on the degree of mismatch.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kieselmann, J; Bartzsch, S; Oelfke, U
Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the dose gradients that arise and the therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed to within 4%, the proposed approach surpassed a simple convolution algorithm in accuracy by a factor of up to 3 in the scatter dose. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve the dose calculation based on the CA method with respect to accuracy, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB of RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant calculation time reductions.
NASA Astrophysics Data System (ADS)
Niu, Yingli; Li, Wenqiang; Peng, Qian; Geng, Hua; Yi, Yuanping; Wang, Linjun; Nan, Guangjun; Wang, Dong; Shuai, Zhigang
2018-04-01
MOlecular MAterials Property Prediction Package (MOMAP) is a software toolkit for molecular materials property prediction. It focuses on luminescent properties and charge mobility properties. This article contains a brief descriptive introduction of key features, theoretical models and algorithms of the software, together with examples that illustrate the performance. First, we present the theoretical models and algorithms for molecular luminescent properties calculation, which include the excited-state radiative/non-radiative decay rate constant and the optical spectra. Then, a multi-scale simulation approach and its algorithm for the molecular charge mobility are described. This approach is based on a hopping model combined with kinetic Monte Carlo and molecular dynamics simulations, and it is especially applicable to a large category of organic semiconductors whose inter-molecular electronic coupling is much smaller than the intra-molecular charge reorganisation energy.
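A hopping-model mobility calculation of this kind typically combines site-to-site transfer rates (for example, Marcus rates) with a KMC random walk of the charge carrier. The sketch below illustrates that combination under the assumption of Marcus-type rates and a fixed site list; the constants, units, and function names are ours and are not MOMAP's API.

```python
import numpy as np

HBAR = 6.582e-16   # reduced Planck constant, eV*s
KB = 8.617e-5      # Boltzmann constant, eV/K

def marcus_rate(J, lam, dG, T):
    """Marcus charge-transfer rate; J = electronic coupling, lam = reorganisation
    energy, dG = driving force (all in eV), T = temperature in K."""
    return (J ** 2 / HBAR) * np.sqrt(np.pi / (lam * KB * T)) \
        * np.exp(-(dG + lam) ** 2 / (4.0 * lam * KB * T))

def hop_walk(neighbours, rates, n_hops, rng):
    """KMC random walk of a single charge on a fixed site list.

    neighbours[i] : list of (target_site, displacement_vector) pairs
    rates[i]      : array of hopping rates to the corresponding targets
    Returns total elapsed time and net displacement (for a diffusion estimate).
    """
    site, t, r = 0, 0.0, np.zeros(3)
    for _ in range(n_hops):
        k = rates[site]
        ktot = k.sum()
        t += -np.log(rng.random()) / ktot                       # residence time
        j = np.searchsorted(np.cumsum(k), rng.random() * ktot)  # pick a hop
        target, dr = neighbours[site][j]
        site, r = target, r + np.asarray(dr, dtype=float)
    return t, r
```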
Simulation study into the identification of nuclear materials in cargo containers using cosmic rays
NASA Astrophysics Data System (ADS)
Blackwell, T. B.; Kudryavtsev, V. A.
2015-04-01
Muon tomography represents a new type of imaging technique that can be used in detecting high-Z materials. Monte Carlo simulations for muon scattering in different types of target materials are presented. The dependence of the detector capability to identify high-Z targets on spatial resolution has been studied. Muon tracks are reconstructed using a basic point of closest approach (PoCA) algorithm. In this article we report the development of a secondary analysis algorithm that is applied to the reconstructed PoCA points. This algorithm efficiently ascertains clusters of voxels with high average scattering angles to identify `areas of interest' within the inspected volume. Using this approach the effect of other parameters, such as the distance between detectors and the number of detectors per set, on material identification is also presented. Finally, false positive and false negative rates for detecting shielded HEU in realistic scenarios with low-Z clutter are presented.
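The PoCA step reduces to finding the closest points of the incoming and outgoing track lines. A small sketch of that geometric calculation, assuming straight-line tracks each given by a point and a direction, is shown below; it illustrates the basic PoCA reconstruction rather than the authors' full analysis chain.

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Point of closest approach between the incoming track (p1, d1) and the
    outgoing track (p2, d2); also returns the scattering angle in radians."""
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    w0 = np.asarray(p1, float) - np.asarray(p2, float)
    b = d1 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = 1.0 - b * b                  # = sin^2(angle between the tracks)
    if denom < 1e-12:                    # near-parallel tracks: no unique PoCA
        return None, 0.0
    t = (b * e - d) / denom              # parameter along the incoming track
    s = (e - b * d) / denom              # parameter along the outgoing track
    closest1 = np.asarray(p1, float) + t * d1
    closest2 = np.asarray(p2, float) + s * d2
    angle = np.arccos(np.clip(b, -1.0, 1.0))
    return 0.5 * (closest1 + closest2), angle
```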
Guidance Concept for a Mars Ascent Vehicle First Stage
NASA Technical Reports Server (NTRS)
Queen, Eric M.
2000-01-01
This paper presents a guidance concept for use on the first stage of a Mars Ascent Vehicle (MAV). The guidance is based on a calculus of variations approach similar to that used for the final phase of the Apollo Earth return guidance. A three degree-of-freedom (3DOF) Monte Carlo simulation is used to evaluate performance and robustness of the algorithm.
Sequential Geoacoustic Filtering and Geoacoustic Inversion
2015-09-30
and online algorithms. We show here that CS obtains higher resolution than MVDR, even in scenarios which favor classical high-resolution methods ... windows actually performs better than conventional beamforming and MVDR/MUSIC (see Figs. 1-2). Compressive geoacoustic inversion Geoacoustic ... histograms based on 100 Monte Carlo simulations, and (c) CS, exhaustive-search, CBF, MVDR, and MUSIC performance versus SNR. The true source positions
NASA Astrophysics Data System (ADS)
Izadi, Arman; Kimiagari, Ali Mohammad
2014-01-01
Distribution network design as a strategic decision has long-term effect on tactical and operational supply chain management. In this research, the location-allocation problem is studied under demand uncertainty. The purposes of this study were to specify the optimal number and location of distribution centers and to determine the allocation of customer demands to distribution centers. The main feature of this research is solving the model with unknown demand function which is suitable with the real-world problems. To consider the uncertainty, a set of possible scenarios for customer demands is created based on the Monte Carlo simulation. The coefficient of variation of costs is mentioned as a measure of risk and the most stable structure for firm's distribution network is defined based on the concept of robust optimization. The best structure is identified using genetic algorithms and 14% reduction in total supply chain costs is the outcome. Moreover, it imposes the least cost variation created by fluctuation in customer demands (such as epidemic diseases outbreak in some areas of the country) to the logistical system. It is noteworthy that this research is done in one of the largest pharmaceutical distribution firms in Iran.
Mastering the game of Go with deep neural networks and tree search
NASA Astrophysics Data System (ADS)
Silver, David; Huang, Aja; Maddison, Chris J.; Guez, Arthur; Sifre, Laurent; van den Driessche, George; Schrittwieser, Julian; Antonoglou, Ioannis; Panneershelvam, Veda; Lanctot, Marc; Dieleman, Sander; Grewe, Dominik; Nham, John; Kalchbrenner, Nal; Sutskever, Ilya; Lillicrap, Timothy; Leach, Madeleine; Kavukcuoglu, Koray; Graepel, Thore; Hassabis, Demis
2016-01-01
The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo
White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; ...
2015-07-07
Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet as efficient as is needed for large molecular systems. Here, we have developed two new methods: the accelerated-SCMC and the accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm by up to two orders of magnitude for certain systems. In many cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This implies that these modified SCMC algorithms will provide practical numerical solutions for simulating non-adiabatic dynamics in realistic molecular systems.
Monte Carlo simulation of a noisy quantum channel with memory.
Akhalwaya, Ismail; Moodley, Mervlyn; Petruccione, Francesco
2015-10-01
The classical capacity of quantum channels is well understood for channels with uncorrelated noise. For the case of correlated noise, however, there are still open questions. We calculate the classical capacity of a forgetful channel constructed by Markov switching between two depolarizing channels. Techniques have previously been applied to approximate the output entropy of this channel and thus its capacity. In this paper, we use a Metropolis-Hastings Monte Carlo approach to numerically calculate the entropy. The algorithm is implemented in parallel and its performance is studied and optimized. The effects of memory on the capacity are explored and previous results are confirmed to higher precision.
Zhu, Jinhan; Chen, Lixin; Chen, Along; Luo, Guangwen; Deng, Xiaowu; Liu, Xiaowei
2015-04-11
To use a graphics processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution method for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared to those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in the mean dose were less than 1.8%, and the differences in the maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification.
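As a plain illustration of the gamma evaluation described above, the brute-force sketch below computes a global 3D gamma index for two equal-shape dose grids. The wrap-around at grid edges and the neighbourhood search strategy are simplifications, and the function is not the GPU implementation used in the study.

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, voxel_mm, dist_mm=3.0, dose_pct=3.0):
    """Brute-force global gamma index between two 3D dose grids of equal shape.

    dose_pct is taken relative to the maximum of the reference dose.
    Note: np.roll wraps at the grid edges, a simplification acceptable when the
    dose falls to a low value near the boundaries.
    """
    dd = dose_pct / 100.0 * dose_ref.max()
    reach = int(np.ceil(dist_mm / min(voxel_mm)))      # search radius in voxels
    gamma = np.full(dose_ref.shape, np.inf)
    for offset in np.ndindex(2 * reach + 1, 2 * reach + 1, 2 * reach + 1):
        shift = np.array(offset) - reach
        dist2 = np.sum((shift * np.asarray(voxel_mm)) ** 2) / dist_mm ** 2
        if dist2 > 1.0:
            continue                                   # outside the distance criterion
        shifted = np.roll(dose_eval, shift, axis=(0, 1, 2))
        g2 = dist2 + (shifted - dose_ref) ** 2 / dd ** 2
        gamma = np.minimum(gamma, np.sqrt(g2))
    return gamma       # passing rate for 3 mm/3%: np.mean(gamma <= 1.0)
```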
Monte Carlo simulations on marker grouping and ordering.
Wu, J; Jenkins, J; Zhu, J; McCarty, J; Watson, C
2003-08-01
Four global algorithms, maximum likelihood (ML), sum of adjacent LOD score (SALOD), sum of adjacent recombinant fractions (SARF) and product of adjacent recombinant fraction (PARF), and one approximation algorithm, seriation (SER), were used to compare the marker ordering efficiencies for correctly given linkage groups based on doubled haploid (DH) populations. The Monte Carlo simulation results indicated the marker ordering powers for the five methods were almost identical. High correlation coefficients were greater than 0.99 between grouping power and ordering power, indicating that all these methods for marker ordering were reliable. Therefore, the main problem for linkage analysis was how to improve the grouping power. Since the SER approach provided the advantage of speed without losing ordering power, this approach was used for detailed simulations. For more generality, multiple linkage groups were employed, and population size, linkage cutoff criterion, marker spacing pattern (even or uneven), and marker spacing distance (close or loose) were considered for obtaining acceptable grouping powers. Simulation results indicated that the grouping power was related to population size, marker spacing distance, and cutoff criterion. Generally, a large population size provided higher grouping power than small population size, and closely linked markers provided higher grouping power than loosely linked markers. The cutoff criterion range for achieving acceptable grouping power and ordering power differed for varying cases; however, combining all situations in this study, a cutoff criterion ranging from 50 cM to 60 cM was recommended for achieving acceptable grouping power and ordering power for different cases.
Lattice based Kinetic Monte Carlo Simulations of a complex chemical reaction network
NASA Astrophysics Data System (ADS)
Danielson, Thomas; Savara, Aditya; Hin, Celine
Lattice Kinetic Monte Carlo (KMC) simulations offer a powerful alternative to using ordinary differential equations for the simulation of complex chemical reaction networks. Lattice KMC provides the ability to account for local spatial configurations of species in the reaction network, resulting in a more detailed description of the reaction pathway. In KMC simulations with a large number of reactions, the range of transition probabilities can span many orders of magnitude, creating subsets of processes that occur more frequently or more rarely. Consequently, processes that have a high probability of occurring may be selected repeatedly without actually progressing the system (i.e. the forward and reverse process for the same reaction). In order to avoid the repeated occurrence of fast frivolous processes, it is necessary to throttle the transition probabilities in such a way that avoids altering the overall selectivity. Likewise, as the reaction progresses, new frequently occurring species and reactions may be introduced, making a dynamic throttling algorithm a necessity. We present a dynamic steady-state detection scheme with the goal of accurately throttling rate constants in order to optimize the KMC run time without compromising the selectivity of the reaction network. The algorithm has been applied to a large catalytic chemical reaction network, specifically that of methanol oxidative dehydrogenation, as well as additional pathways on CeO2(111) resulting in formaldehyde, CO, methanol, CO2, H2 and H2O as gas products.
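For context, the core rejection-free KMC selection step and one possible form of rate throttling are sketched below; the throttle scales a fast forward/reverse pair by the same factor so their ratio, and hence the selectivity, is preserved. The details of the dynamic steady-state detection scheme are not reproduced here, and the function names are illustrative assumptions.

```python
import numpy as np

def kmc_select(rates, rng):
    """One rejection-free KMC step: pick a process with probability proportional
    to its rate and advance the clock by an exponentially distributed increment."""
    total = rates.sum()
    cumulative = np.cumsum(rates)
    idx = np.searchsorted(cumulative, rng.random() * total)
    dt = -np.log(rng.random()) / total
    return idx, dt

def throttle(rates, fast_pairs, factor):
    """Scale flagged fast, quasi-equilibrated forward/reverse process pairs by a
    common factor so their ratio (and hence the selectivity) is unchanged."""
    throttled = rates.copy()
    for i, j in fast_pairs:
        throttled[i] /= factor
        throttled[j] /= factor
    return throttled

# Example: throttle one fast reversible pair, then take a KMC step
rng = np.random.default_rng(1)
rates = np.array([1e6, 9e5, 2.0, 0.5])
idx, dt = kmc_select(throttle(rates, fast_pairs=[(0, 1)], factor=1e4), rng)
```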
An off-lattice, self-learning kinetic Monte Carlo method using local environments.
Konwar, Dhrubajit; Bhute, Vijesh J; Chatterjee, Abhijit
2011-11-07
We present a method called local environment kinetic Monte Carlo (LE-KMC) method for efficiently performing off-lattice, self-learning kinetic Monte Carlo (KMC) simulations of activated processes in material systems. Like other off-lattice KMC schemes, new atomic processes can be found on-the-fly in LE-KMC. However, a unique feature of LE-KMC is that as long as the assumption that all processes and rates depend only on the local environment is satisfied, LE-KMC provides a general algorithm for (i) unambiguously describing a process in terms of its local atomic environments, (ii) storing new processes and environments in a catalog for later use with standard KMC, and (iii) updating the system based on the local information once a process has been selected for a KMC move. Search, classification, storage and retrieval steps needed while employing local environments and processes in the LE-KMC method are discussed. The advantages and computational cost of LE-KMC are discussed. We assess the performance of the LE-KMC algorithm by considering test systems involving diffusion in a submonolayer Ag and Ag-Cu alloy films on Ag(001) surface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Englbrecht, F; Lindner, F; Bin, J
2016-06-15
Purpose: To measure and simulate well-defined electron spectra using a linear accelerator and a permanent-magnetic wide-angle spectrometer to test the performance of a novel reconstruction algorithm for retrieval of unknown electron sources, in view of application to diagnostics of laser-driven particle acceleration. Methods: Six electron energies (6, 9, 12, 15, 18 and 21 MeV, 40cm × 40cm field-size) delivered by a Siemens Oncor linear accelerator were recorded using a permanent-magnetic wide-angle electron spectrometer (150mT) with a one dimensional slit (0.2mm × 5cm). Two dimensional maps representing beam-energy and entrance-position along the slit were measured using different scintillating screens, read by an online CMOS detector of high resolution (0.048mm × 0.048mm pixels) and large field of view (5cm × 10cm). Measured energy-slit position maps were compared to forward FLUKA simulations of electron transport through the spectrometer, starting from IAEA phase-spaces of the accelerator. The latter were validated against measured depth-dose and lateral profiles in water. Agreement of forward simulation and measurement was quantified in terms of position and shape of the signal distribution on the detector. Results: Measured depth-dose distributions and lateral profiles in the water phantom showed good agreement with forward simulations of IAEA phase-spaces, thus supporting usage of this simulation source in the study. Measured energy-slit position maps and those obtained by forward Monte-Carlo simulations showed satisfactory agreement in shape and position. Conclusion: Well-defined electron beams of known energy and shape will provide an ideal scenario to study the performance of a novel reconstruction algorithm using measured and simulated signal. Future work will increase the stability and convergence of the reconstruction algorithm for unknown electron sources, towards final application to the electrons which drive the interaction of TW-class laser pulses with nanometer thin target foils to accelerate protons and ions to multi-MeV kinetic energy. Cluster of Excellence of the German Research Foundation (DFG) “Munich-Centre for Advanced Photonics”.
Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions
NASA Astrophysics Data System (ADS)
Ricketson, Lee
2013-10-01
We discuss the use of multi-level Monte Carlo (MLMC) schemes, originally introduced by Giles for financial applications, for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ε from O(ε⁻³), for standard Langevin methods and binary collision algorithms, to the theoretically optimal scaling O(ε⁻²) for the Milstein discretization, and to O(ε⁻²(log ε)²) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
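The telescoping estimator behind MLMC can be illustrated with a toy sketch. The Ornstein-Uhlenbeck process below merely stands in for the Langevin collision model, and the per-level sample counts are fixed rather than optimally allocated, so this is only a schematic of the key coupling idea: the fine and coarse Euler-Maruyama paths of each correction term share the same Brownian increments.

```python
import numpy as np

def coupled_payoff(T, h_f, n_samples, rng):
    """Fine/coarse Euler-Maruyama estimates of E[X_T^2] for the toy SDE
    dX = -X dt + dW, with the coarse path (step 2*h_f) driven by the
    summed Brownian increments of the fine path (step h_f)."""
    h_c = 2.0 * h_f
    xf = np.zeros(n_samples)
    xc = np.zeros(n_samples)
    for _ in range(int(round(T / h_c))):
        dw1 = rng.normal(0.0, np.sqrt(h_f), n_samples)
        dw2 = rng.normal(0.0, np.sqrt(h_f), n_samples)
        xf += -xf * h_f + dw1
        xf += -xf * h_f + dw2
        xc += -xc * h_c + (dw1 + dw2)
    return xf**2, xc**2

def mlmc_estimate(T=1.0, levels=4, n_samples=20000, seed=0):
    """E[P_L] ~ E[P_0] + sum_l E[P_l - P_{l-1}], each term from coupled paths."""
    rng = np.random.default_rng(seed)
    h0 = 0.25
    est = np.mean(coupled_payoff(T, h0, n_samples, rng)[0])   # coarsest level
    for l in range(1, levels):
        pf, pc = coupled_payoff(T, h0 / 2**l, n_samples, rng)
        est += np.mean(pf - pc)                                # correction terms
    return est

print(mlmc_estimate())   # approaches Var(X_T) = (1 - exp(-2T))/2, about 0.432
```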
NASA Astrophysics Data System (ADS)
Narita, Y.; Iida, H.; Ebert, S.; Nakamura, T.
1997-12-01
Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for three numerical phantoms for ²⁰¹Tl. Data were reconstructed with an ordered-subset EM algorithm including attenuation correction based on noiseless transmission data. The accuracy of the TDCS and TEW scatter corrections was assessed by comparison with the simulated true primary data. The uniform cylindrical phantom simulation demonstrated better quantitative accuracy with TDCS than with TEW (-2.0% vs. 16.7%) and better S/N (6.48 vs. 5.05). A uniform ring myocardial phantom simulation demonstrated better homogeneity with TDCS than TEW in the myocardium; i.e., anterior-to-posterior wall count ratios were 0.99 and 0.76 with TDCS and TEW, respectively. For the MCAT phantom, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, Liborio I., E-mail: liborio78@gmail.com
A new Markov Chain Monte Carlo method for simulating the dynamics of particle systems characterized by hard-core interactions is introduced. In contrast to traditional Kinetic Monte Carlo approaches, where the state of the system is associated with minima in the energy landscape, in the proposed method, the state of the system is associated with the set of paths traveled by the atoms and the transition probabilities for an atom to be displaced are proportional to the corresponding velocities. In this way, the number of possible state-to-state transitions is reduced to a discrete set, and a direct link between the Monte Carlo time step and true physical time is naturally established. The resulting rejection-free algorithm is validated against event-driven molecular dynamics: the equilibrium and non-equilibrium dynamics of hard disks converge to the exact results with decreasing displacement size.
Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE
NASA Astrophysics Data System (ADS)
Schneider, Uwe; Hälg, Roger A.; Baiocco, Giorgio; Lomax, Tony
2016-08-01
The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate, besides the dose, also the energy spectra, in order to obtain quantities which could be a measure of the biological effectiveness and test current models and new approaches against epidemiological studies on cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was Monte Carlo simulated using GEANT. Based on the simulated neutron spectra for three different proton beam energies a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization represents the simple quantification of neutron energy in two energy bins and the quality factors and RBE with a satisfying precision up to 85 cm away from the proton pencil beam when compared to the results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between Monte Carlo simulation based results and the parameterization is 3.9%. For the quality factors and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy. The pencil beam algorithm has been extended using the developed parameterizations in order to calculate the neutron energy, quality factor and RBE.
Ono, Kaoru; Endo, Satoru; Tanaka, Kenichi; Hoshi, Masaharu; Hirokawa, Yutaka
2010-01-01
Purpose: In this study, the authors evaluated the accuracy of dose calculations performed by the convolution/superposition based anisotropic analytical algorithm (AAA) in lung equivalent heterogeneities with and without bone equivalent heterogeneities. Methods: Calculations of PDDs using the AAA and Monte Carlo simulations (MCNP4C) were compared to ionization chamber measurements with a heterogeneous phantom consisting of lung equivalent and bone equivalent materials. Both 6 and 10 MV photon beams of 4×4 and 10×10 cm² field sizes were used for the simulations. Furthermore, changes of the energy spectrum with depth in the heterogeneous phantom were calculated using MCNP. Results: The ionization chamber measurements and MCNP calculations in a lung equivalent phantom were in good agreement, having an average deviation of only 0.64±0.45%. For both 6 and 10 MV beams, the average deviation was less than 2% for the 4×4 and 10×10 cm² fields in the water-lung equivalent phantom and the 4×4 cm² field in the water-lung-bone equivalent phantom. Maximum deviations for the 10×10 cm² field in the lung equivalent phantom before and after the bone slab were 5.0% and 4.1%, respectively. The Monte Carlo simulation demonstrated an increase of the low-energy photon component in these regions, more pronounced for the 10×10 cm² field than for the 4×4 cm² field. Conclusions: The Monte Carlo simulations show that the low-energy photon component increases sharply in larger fields when bone equivalent heterogeneities are present. This leads to large changes in the build-up and build-down at the interfaces of different density materials, which the AAA does not model with sufficient accuracy. PMID:20879604
Scalable load balancing for massively parallel distributed Monte Carlo particle transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, M. J.; Brantley, P. S.; Joy, K. I.
2013-07-01
In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log(N)), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory.
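The abstract does not spell out the pairing scheme, so the sketch below uses a hypercube-style dimension exchange (an assumption, simulated serially) as one concrete way to realize iterated processor-pair-wise balancing in O(log N) rounds: in round k, ranks that differ only in bit k average their workloads, and after log2(N) rounds every rank holds the global mean.

```python
def pairwise_balance(loads):
    """Iterated processor-pair-wise balancing, simulated serially.
    Assumes a power-of-two number of 'processors'."""
    n = len(loads)
    assert n & (n - 1) == 0, "sketch assumes a power-of-two processor count"
    for k in range(n.bit_length() - 1):          # log2(N) rounds
        for rank in range(n):
            partner = rank ^ (1 << k)            # flip bit k of the rank
            if partner > rank:
                avg = 0.5 * (loads[rank] + loads[partner])
                loads[rank] = loads[partner] = avg
    return loads

print(pairwise_balance([8.0, 0.0, 4.0, 2.0, 6.0, 1.0, 3.0, 0.0]))
# -> every entry equals the global mean 3.0 after log2(8) = 3 rounds
```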
NASA Technical Reports Server (NTRS)
Powell, Richard W.
1998-01-01
This paper describes the development and evaluation of a numerical roll reversal predictor-corrector guidance algorithm for the atmospheric flight portion of the Mars Surveyor Program 2001 Orbiter and Lander missions. The Lander mission utilizes direct entry and has a demanding requirement to deploy its parachute within 10 km of the target deployment point. The Orbiter mission utilizes aerocapture to achieve a precise captured orbit with a single atmospheric pass. Detailed descriptions of these predictor-corrector algorithms are given. Also, results of three and six degree-of-freedom Monte Carlo simulations which include navigation, aerodynamics, mass properties and atmospheric density uncertainties are presented.
NASA Astrophysics Data System (ADS)
Aldana, S.; Roldán, J. B.; García-Fernández, P.; Suñe, J.; Romero-Zaliz, R.; Jiménez-Molinos, F.; Long, S.; Gómez-Campos, F.; Liu, M.
2018-04-01
A simulation tool based on a 3D kinetic Monte Carlo algorithm has been employed to analyse bipolar conductive bridge RAMs fabricated with Cu/HfOx/Pt stacks. Resistive switching mechanisms are described accounting for the electric field and temperature distributions within the dielectric. The formation and destruction of conductive filaments (CFs) are analysed taking into consideration redox reactions and the joint action of metal ion thermal diffusion and electric field induced drift. Filamentary conduction is considered when different percolation paths are formed in addition to other conventional transport mechanisms in dielectrics. The simulator was tuned by using the experimental data for Cu/HfOx/Pt bipolar devices that were fabricated. Our simulation tool allows for the study of different experimental results, in particular, the current variations due to the electric field changes between the filament tip and the electrode in the High Resistance State. In addition, the density of metallic atoms within the CF can also be characterized along with the corresponding CF resistance description.
Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases
NASA Astrophysics Data System (ADS)
Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.
2016-02-01
Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum-mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the new proposed multi-mode relaxation. Differences and applications areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.
Worm Algorithm simulations of the hole dynamics in the t-J model
NASA Astrophysics Data System (ADS)
Prokof'ev, Nikolai; Ruebenacker, Oliver
2001-03-01
In the limit of small J << t, relevant for HTSC materials and Mott-Hubbard systems, computer simulations have to be performed for large systems and at low temperatures. Despite convincing evidence against spin-charge separation obtained by various methods for J > 0.4t, there is an ongoing argument that at smaller J spin-charge separation is still possible. Worm algorithm Monte Carlo simulations of the hole Green function for 0.1 < J/t < 0.4 were performed on lattices with up to 32x32 sites, and at temperature J/T = 40 (for the largest size). Spectral analysis reveals a single, delta-function sharp quasiparticle peak at the lowest edge of the spectrum and two distinct peaks above it at all studied J. We rule out the possibility of spin-charge separation in this parameter range, and present what is apparently the hole spectral function in the thermodynamic limit.
WFIRST: Exoplanet Target Selection and Scheduling with Greedy Optimization
NASA Astrophysics Data System (ADS)
Keithly, Dean; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry
2018-01-01
We present target selection and scheduling algorithms for missions with direct imaging of exoplanets, and the Wide Field Infrared Survey Telescope (WFIRST) in particular, which will be equipped with a coronagraphic instrument (CGI). Optimal scheduling of CGI targets can maximize the expected value of directly imaged exoplanets (completeness). Using target completeness as a reward metric and integration time plus overhead time as a cost metric, we can maximize the sum completeness for a mission of fixed duration. We optimize over these metrics to create a list of target stars using a greedy optimization algorithm based on altruistic yield optimization (AYO) under ideal conditions. We simulate full missions using EXOSIMS by observing targets in this list for their predetermined integration times. In this poster, we report the theoretical maximum sum completeness, the mean number of detected exoplanets from Monte Carlo simulations, and the ideal expected value of the simulated missions.
SU-F-T-370: A Fast Monte Carlo Dose Engine for Gamma Knife
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, T; Zhou, L; Li, Y
2016-06-15
Purpose: To develop a fast Monte Carlo dose calculation algorithm for Gamma Knife. Methods: To make the simulation more efficient, we implemented the track repeating technique on GPU. We first use EGSnrc to pre-calculate the photon and secondary electron tracks in water from the two mono-energy photons of 60Co. The total photon mean free paths for different materials and energies are obtained from NIST. During simulation, each entire photon track is first loaded into shared memory for each block; the incident original photon is then split into Nthread sub-photons, each thread transports one sub-photon, and the Russian roulette technique is applied for scattered and bremsstrahlung photons. The resultant electrons from photon interactions are simulated by repeating the recorded electron tracks. The electron step length is stretched/shrunk proportionally based on the local density and stopping power ratios of the local material. Energy deposition in a voxel is proportional to the fraction of the equivalent step length in that voxel. To evaluate its accuracy, dose deposition in a 300mm*300mm*300mm water phantom is calculated and compared to EGSnrc results. Results: Both PDD and OAR showed great agreement (within 0.5%) between our dose engine result and the EGSnrc result. Every simulation takes less than 1 min, a reduction of up to ∼40 times compared to EGSnrc simulations. Conclusion: We have successfully developed a fast Monte Carlo dose engine for Gamma Knife.
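A highly simplified, CPU-side sketch of the track-repeating step is given below; all names are illustrative assumptions, the real implementation is a GPU kernel, and a full implementation splits each step's energy across the voxels it traverses, whereas this sketch scores a step entirely in its starting voxel. The point shown is the stretching/shrinking of each recorded water step by the local density and stopping-power ratios.

```python
from collections import namedtuple

Step = namedtuple("Step", "dx dy dz length_w energy_w")           # recorded in water
Voxel = namedtuple("Voxel", "index rel_density stop_ratio mass")  # stop_ratio: water-to-medium

def replay_track(start, steps, voxel_at, dose):
    """Replay a pre-computed water electron track in a heterogeneous phantom."""
    x, y, z = start
    for s in steps:
        v = voxel_at(x, y, z)
        scale = v.stop_ratio / v.rel_density    # equivalent step-length factor
        dose[v.index] += s.energy_w / v.mass    # score energy per mass in starting voxel
        x += s.dx * s.length_w * scale
        y += s.dy * s.length_w * scale
        z += s.dz * s.length_w * scale
    return dose

# Toy usage: a water-like voxel (index 0) followed by a lung-like voxel (index 1).
voxels = [Voxel(0, 1.0, 1.0, 1.0), Voxel(1, 0.3, 1.05, 0.3)]
voxel_at = lambda x, y, z: voxels[0] if x < 1.0 else voxels[1]
track = [Step(1, 0, 0, 0.4, 0.10), Step(1, 0, 0, 0.8, 0.20), Step(1, 0, 0, 0.5, 0.05)]
print(replay_track((0.0, 0.0, 0.0), track, voxel_at, [0.0, 0.0]))
```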
A new method for photon transport in Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Sato, T.; Ogawa, K.
1999-12-01
Monte Carlo methods are used to evaluate data-correction methods such as scatter and attenuation compensation in single photon emission CT (SPECT), treatment planning in radiation therapy, and many industrial applications. In Monte Carlo simulation, photon transport requires calculating the distance from the location of the emitted photon to the nearest boundary of each uniform attenuating medium along its path of travel, and comparing this distance with the length of its path generated at emission. Here, the authors propose a new method that omits the calculation of the location of the exit point of the photon from each voxel and of the distance between the exit point and the original position. The method only checks the medium of each voxel along the photon's path. If the medium differs from that in the voxel from which the photon was emitted, the authors calculate the location of the entry point into that voxel, and the length of the path is compared with the mean free path length generated by a random number. Simulations using the MCAT phantom show that the ratios of calculation time were 1.0 for the voxel-based method and 0.51 for the proposed method with a 256×256×256 matrix image, confirming the effectiveness of the algorithm.
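A one-dimensional sketch of the proposed bookkeeping is given below (illustrative only, for a row of unit-width voxels along +x): the free path is sampled in the emission medium, medium IDs are scanned voxel by voxel along the path, and boundary arithmetic is performed only where the medium actually changes, with the remaining free path rescaled by the ratio of attenuation coefficients so that optical depth is conserved.

```python
import math
import random

def transport_photon_x(x0, media, mu_of, rng=random.random):
    """Return the interaction position of a photon travelling along +x through
    a row of unit-width voxels, or len(media) if it escapes the row."""
    cur = int(x0)                                        # voxel of emission
    s_rem = -math.log(1.0 - rng()) / mu_of(media[cur])   # free path in that medium
    x, i = x0, cur
    while True:
        if x + s_rem <= i + 1 or i + 1 >= len(media):    # interaction here, or escape
            return min(x + s_rem, float(len(media)))
        if media[i + 1] == media[cur]:                   # same medium: no geometry needed
            i += 1
            continue
        entry = float(i + 1)                             # boundary arithmetic only here
        s_rem = (s_rem - (entry - x)) * mu_of(media[cur]) / mu_of(media[i + 1])
        x, i, cur = entry, i + 1, i + 1

# Toy usage: water (0) - lung (1) - water (0) row, hypothetical coefficients per voxel width.
mu = {0: 0.7, 1: 0.2}
print(transport_photon_x(0.5, [0, 0, 1, 1, 0, 0], mu.get))
```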
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; ...
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
Methods for High-Order Multi-Scale and Stochastic Problems: Analysis, Algorithms, and Applications
2016-10-17
finite volume schemes, discontinuous Galerkin finite element method, and related methods, for solving computational fluid dynamics (CFD) problems and ... approximation for finite element methods. (3) The development of methods of simulation and analysis for the study of large scale stochastic systems of ... laws, finite element method, Bernstein-Bezier finite elements, weakly interacting particle systems, accelerated Monte Carlo, stochastic networks.
Gyro and accelerometer failure detection and identification in redundant sensor systems
NASA Technical Reports Server (NTRS)
Potter, J. E.; Deckert, J. C.
1972-01-01
Algorithms for failure detection and identification for redundant noncolinear arrays of single degree of freedom gyros and accelerometers are described. These algorithms are optimum in the sense that detection occurs as soon as it is no longer possible to account for the instrument outputs as the outputs of good instruments operating within their noise tolerances, and identification occurs as soon as it is true that only a particular instrument failure could account for the actual instrument outputs within the noise tolerance of good instruments. An estimation algorithm is described which minimizes the maximum possible estimation error magnitude for the given set of instrument outputs. Monte Carlo simulation results are presented for the application of the algorithms to an inertial reference unit consisting of six gyros and six accelerometers in two alternate configurations.
A Scheduling Algorithm for Replicated Real-Time Tasks
NASA Technical Reports Server (NTRS)
Yu, Albert C.; Lin, Kwei-Jay
1991-01-01
We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under a fault-tolerance requirement. Our approach incorporates both the redundancy and masking techniques and the imprecise computation model. Since tasks in hard real-time systems have stringent timing constraints, redundancy and masking techniques are more appropriate than rollback techniques, which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task against the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.
3D Space Radiation Transport in a Shielded ICRU Tissue Sphere
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2014-01-01
A computationally efficient 3DHZETRN code capable of simulating High Charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for a simple homogeneous shield object. Monte Carlo benchmarks were used to verify the methodology in slab and spherical geometry, and the 3D corrections were shown to provide significant improvement over the straight-ahead approximation in some cases. In the present report, the new algorithms with well-defined convergence criteria are extended to inhomogeneous media within a shielded tissue slab and a shielded tissue sphere and tested against Monte Carlo simulation to verify the solution methods. The 3D corrections are again found to more accurately describe the neutron and light ion fluence spectra as compared to the straight-ahead approximation. These computationally efficient methods provide a basis for software capable of space shield analysis and optimization.
NASA Astrophysics Data System (ADS)
Russkova, Tatiana V.
2017-11-01
One tool to improve the performance of Monte Carlo methods for numerical simulation of light transport in the Earth's atmosphere is parallel technology. A new algorithm oriented to parallel execution on CUDA-enabled NVIDIA graphics processors is discussed. The efficiency of parallelization is analyzed on the basis of calculating the upward and downward fluxes of solar radiation in both vertically homogeneous and inhomogeneous models of the atmosphere. The results of testing the new code under various atmospheric conditions, including continuous single-layered and multilayered clouds and selective molecular absorption, are presented. The results of testing the code using video cards with different compute capability are analyzed. It is shown that moving the computation from conventional PCs to the architecture of graphics processors gives more than a hundredfold increase in performance and fully reveals the capabilities of the technology used.
A versatile multi-objective FLUKA optimization using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Vlachoudis, Vasilis; Antoniucci, Guido Arnau; Mathot, Serge; Kozlowska, Wioletta Sandra; Vretenar, Maurizio
2017-09-01
Quite often, Monte Carlo simulation studies require a multi-objective phase-space optimization, a complicated task that relies heavily on the operator's experience and judgment. Examples of such calculations are shielding calculations with stringent constraints on cost, residual dose, material properties and available space, or, in the medical field, optimizing the dose delivered to a patient during hadron treatment. The present paper describes our implementation, inside flair [1], the advanced user interface of FLUKA [2,3], of a multi-objective Genetic Algorithm to facilitate the search for the optimum solution.
Jurling, Alden S; Fienup, James R
2014-03-01
Extending previous work by Thurman on wavefront sensing for segmented-aperture systems, we developed an algorithm for estimating segment tips and tilts from multiple point spread functions in different defocused planes. We also developed methods for overcoming two common modes for stagnation in nonlinear optimization-based phase retrieval algorithms for segmented systems. We showed that when used together, these methods largely solve the capture range problem in focus-diverse phase retrieval for segmented systems with large tips and tilts. Monte Carlo simulations produced a rate of success better than 98% for the combined approach.
Angle-of-Attack-Modulated Terminal Point Control for Neptune Aerocapture
NASA Technical Reports Server (NTRS)
Queen, Eric M.
2004-01-01
An aerocapture guidance algorithm based on a calculus of variations approach is developed, using angle of attack as the primary control variable. Bank angle is used as a secondary control to alleviate angle of attack extremes and to control inclination. The guidance equations are derived in detail. The controller has very small onboard computational requirements and is robust to atmospheric and aerodynamic dispersions. The algorithm is applied to aerocapture at Neptune. Three versions of the controller are considered with varying angle of attack authority. The three versions of the controller are evaluated using Monte Carlo simulations with expected dispersions.
Evaluating the performance of distributed approaches for modal identification
NASA Astrophysics Data System (ADS)
Krishnan, Sriram S.; Sun, Zhuoxiong; Irfanoglu, Ayhan; Dyke, Shirley J.; Yan, Guirong
2011-04-01
In this paper, two modal identification approaches appropriate for use in a distributed computing environment are applied to a full-scale, complex structure. The natural excitation technique (NExT) used in conjunction with a condensed eigensystem realization algorithm (ERA), and the frequency domain decomposition with peak-picking (FDD-PP), are both applied to sensor data acquired from a 57.5-ft, 10-bay highway sign truss structure. Monte Carlo simulations are performed on a numerical example to investigate the statistical properties and sensitivity to noise of the two distributed algorithms. Experimental results are provided and discussed.
Suppressing correlations in massively parallel simulations of lattice models
NASA Astrophysics Data System (ADS)
Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle
2017-11-01
For lattice Monte Carlo simulations parallelization is crucial to make studies of large systems and long simulation time feasible, while sequential simulations remain the gold-standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30 × over a parallel CPU implementation on a single socket and at least 180 × with respect to the sequential reference.
Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*
Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G
2014-01-01
Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10⁸ primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image-based dosimetry in nuclear medicine. PMID:24200697
Collision of Physics and Software in the Monte Carlo Application Toolkit (MCATK)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sweezy, Jeremy Ed
2016-01-21
The topic is presented in a series of slides organized as follows: MCATK overview, development strategy, available algorithms, problem modeling (sources, geometry, data, tallies), parallelism, miscellaneous tools/features, example MCATK application, recent areas of research, and summary and future work. MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library with continuous energy neutron and photon transport. Designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP, it reads ACE formatted nuclear data generated by NJOY. The motivation behind MCATK was to reduce costs. MCATK physics involves continuous energy neutron and gamma transport with multi-temperature treatment, static eigenvalue (keff and α) algorithms, a time-dependent algorithm, and fission chain algorithms. MCATK geometry includes mesh geometries and solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross section plotters.
Speed Control Law for Precision Terminal Area In-Trail Self Spacing
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
2002-01-01
This document describes a speed control law for precision in-trail airborne self-spacing during final approach. This control law was designed to provide an operationally viable means to obtain a desired runway threshold crossing time or minimum distance, one aircraft relative to another. The control law compensates for dissimilar final approach speeds between aircraft pairs and provides guidance for a stable final approach. This algorithm has been extensively tested in Monte Carlo simulation and has been evaluated in piloted simulation, with preliminary results indicating acceptability from operational and workload standpoints.
NASA Astrophysics Data System (ADS)
Yang, Shengfeng; Zhou, Naixie; Zheng, Hui; Ong, Shyue Ping; Luo, Jian
2018-02-01
First-order interfacial phaselike transformations that break the mirror symmetry of the symmetric Σ5 (210) tilt grain boundary (GB) are discovered by combining a modified genetic algorithm with hybrid Monte Carlo and molecular dynamics simulations. Density functional theory calculations confirm this prediction. This first-order coupled structural and adsorption transformation, which produces two variants of asymmetric bilayers, vanishes at an interfacial critical point. A GB complexion (phase) diagram is constructed via semigrand canonical ensemble atomistic simulations for the first time.
Simulation of dense amorphous polymers by generating representative atomistic models
NASA Astrophysics Data System (ADS)
Curcó, David; Alemán, Carlos
2003-08-01
A method for generating atomistic models of dense amorphous polymers is presented. The generated models can be used as starting structures for Monte Carlo and molecular dynamics simulations, but are also suitable for the direct evaluation of physical properties. The method is organized as a two-step procedure. First, structures are generated using an algorithm that minimizes the torsional strain. After this, an iterative algorithm is applied to relax the nonbonding interactions. In order to check the performance of the method, we examined structure-dependent properties for three polymeric systems: polyethylene (ρ=0.85 g/cm³), poly(L,D-lactic) acid (ρ=1.25 g/cm³), and polyglycolic acid (ρ=1.50 g/cm³). The method successfully generated representative packings for such dense systems using minimal computational resources.
Analysis and optimization of population annealing
NASA Astrophysics Data System (ADS)
Amey, Christopher; Machta, Jonathan
2018-03-01
Population annealing is an easily parallelizable sequential Monte Carlo algorithm that is well suited for simulating the equilibrium properties of systems with rough free-energy landscapes. In this work we seek to understand and improve the performance of population annealing. We derive several useful relations between quantities that describe the performance of population annealing and use these relations to suggest methods to optimize the algorithm. These optimization methods were tested by performing large-scale simulations of the three-dimensional (3D) Edwards-Anderson (Ising) spin glass and measuring several observables. The optimization methods were found to substantially decrease the amount of computational work necessary as compared to previously used, unoptimized versions of population annealing. We also obtain more accurate values of several important observables for the 3D Edwards-Anderson model.
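The reweight-and-resample step at the heart of population annealing can be sketched as follows; this is schematic, since practical implementations typically use nearest-integer or Poisson resampling per replica and interleave MCMC sweeps at the new temperature, neither of which is shown here.

```python
import numpy as np

def anneal_step(configs, energies, beta_old, beta_new, rng):
    """One population-annealing temperature step: reweight each replica by
    exp(-(beta_new - beta_old) * E) and resample the population accordingly."""
    energies = np.asarray(energies, dtype=float)
    w = np.exp(-(beta_new - beta_old) * (energies - energies.min()))  # shifted for stability
    w /= w.sum()
    n = len(configs)
    idx = rng.choice(n, size=n, p=w)          # multinomial resampling (a simplification)
    return [configs[i] for i in idx], energies[idx]
```

In a full simulation this step alternates with equilibration sweeps at beta_new, and the normalization of the weights across steps also yields free-energy estimates.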
Monte Carlo Solution to Find Input Parameters in Systems Design Problems
NASA Astrophysics Data System (ADS)
Arsham, Hossein
2013-06-01
Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two different problems, what-if and goal-seeking problems, are explained and defined in an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single run simulation is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
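A Robbins-Monro style stochastic approximation of the kind discussed here can be sketched as follows; the toy linear model and gain sequence are illustrative assumptions, not the paper's single-run response-surface construction. The controllable input is nudged at each iteration by the gap between the noisy simulation output and the target.

```python
import random

def goal_seek(simulate, target, x0, iters=200, gain=1.0, seed=1):
    """Estimate the input x for which E[simulate(x)] equals target,
    using decreasing gains gain/k (Robbins-Monro conditions)."""
    random.seed(seed)
    x = x0
    for k in range(1, iters + 1):
        y = simulate(x)                      # one noisy simulation run
        x -= (gain / k) * (y - target)
    return x

# Toy model: output = 2*x + noise; seek the x that yields an output of 10.
noisy = lambda x: 2.0 * x + random.gauss(0.0, 0.5)
print(goal_seek(noisy, target=10.0, x0=0.0))   # converges near 5.0
```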
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, S. A. M.; Ansbacher, W.; Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6
2013-01-15
Purpose: Acuros external beam (Acuros XB) is a novel dose calculation algorithm implemented through the ECLIPSE treatment planning system. The algorithm finds a deterministic solution to the linear Boltzmann transport equation, the same equation commonly solved stochastically by Monte Carlo methods. This work is an evaluation of Acuros XB, by comparison with Monte Carlo, for dose calculation applications involving high-density materials. Existing non-Monte Carlo clinical dose calculation algorithms, such as the analytic anisotropic algorithm (AAA), do not accurately model dose perturbations due to increased electron scatter within high-density volumes. Methods: Acuros XB, AAA, and EGSnrc based Monte Carlo are used to calculate dose distributions from 18 MV and 6 MV photon beams delivered to a cubic water phantom containing a rectangular high density (4.0–8.0 g/cm³) volume at its center. The algorithms are also used to recalculate a clinical prostate treatment plan involving a unilateral hip prosthesis, originally evaluated using AAA. These results are compared graphically and numerically using gamma-index analysis. Radio-chromic film measurements are presented to augment Monte Carlo and Acuros XB dose perturbation data. Results: Using a 2% and 1 mm gamma-analysis, between 91.3% and 96.8% of Acuros XB dose voxels containing greater than 50% of the normalized dose were in agreement with Monte Carlo data for virtual phantoms involving 18 MV and 6 MV photons, stainless steel and titanium alloy implants and for on-axis and oblique field delivery. A similar gamma-analysis of AAA against Monte Carlo data showed between 80.8% and 87.3% agreement. Comparing Acuros XB and AAA evaluations of a clinical prostate patient plan involving a unilateral hip prosthesis, Acuros XB showed good overall agreement with Monte Carlo while AAA underestimated dose on the upstream medial surface of the prosthesis due to electron scatter from the high-density material. Film measurements support the dose perturbations demonstrated by Monte Carlo and Acuros XB data. Conclusions: Acuros XB is shown to perform as well as Monte Carlo methods and better than existing clinical algorithms for dose calculations involving high-density volumes.
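As a companion to the gamma-index figures quoted above, a schematic one-dimensional gamma computation is sketched below; clinical gamma analysis is performed on 2D or 3D dose grids with interpolation, so this is only an illustration of the criterion itself (tol=0.02 and dta=1.0 correspond to the 2%/1 mm analysis used here).

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, dx, dta=1.0, tol=0.02):
    """1D gamma analysis: for each evaluated point, take the minimum over
    reference points of the combined dose-difference/distance metric;
    a point passes if gamma <= 1.  dx and dta share the same length unit."""
    dose_eval = np.asarray(dose_eval, dtype=float)
    dose_ref = np.asarray(dose_ref, dtype=float)
    x = np.arange(dose_ref.size) * dx
    d_norm = tol * dose_ref.max()                 # global dose criterion
    gammas = np.array([np.sqrt((((x[i] - x) / dta) ** 2
                                + ((d - dose_ref) / d_norm) ** 2).min())
                       for i, d in enumerate(dose_eval)])
    return 100.0 * np.mean(gammas <= 1.0)
```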
NASA Astrophysics Data System (ADS)
Sheng, Zheng
2013-02-01
The estimation of lower atmospheric refractivity from radar sea clutter (RFC) is a complicated nonlinear optimization problem. This paper treats the RFC problem in a Bayesian framework. It uses the unbiased Markov Chain Monte Carlo (MCMC) sampling technique, which can provide accurate posterior probability distributions of the estimated refractivity parameters, with an electromagnetic split-step fast Fourier transform terrain parabolic equation propagation model inside the Bayesian inversion framework. In contrast to global optimization algorithms, the Bayesian MCMC approach yields not only approximate solutions but also the probability distributions of the solutions, that is, an uncertainty analysis of the solutions. The Bayesian MCMC algorithm is applied to both simulated and real radar sea-clutter data. Reference data are taken from the simulations and from refractivity profiles obtained with a helicopter sounding. The inversion algorithm is assessed (i) by comparing the estimated refractivity profiles with the assumed simulation profiles and the helicopter sounding data, and (ii) by examining the one-dimensional (1D) and two-dimensional (2D) posterior probability distributions of the solutions.
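A generic random-walk Metropolis-Hastings sampler of the kind underlying such Bayesian inversions is sketched below; the Gaussian toy posterior stands in for the parabolic-equation forward model, which is far too expensive to reproduce here.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_steps, step, seed=0):
    """Random-walk Metropolis-Hastings: returns a chain of samples whose
    histogram approximates the (unnormalized) posterior exp(log_post)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.normal(0.0, step, size=theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:    # accept with prob min(1, ratio)
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy posterior: 2D Gaussian centred at (1.0, -0.5), standing in for refractivity parameters.
log_post = lambda t: -0.5 * np.sum((t - np.array([1.0, -0.5])) ** 2)
print(metropolis_hastings(log_post, [0.0, 0.0], 5000, step=0.5).mean(axis=0))
```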
SU-E-T-188: Film Dosimetry Verification of Monte Carlo Generated Electron Treatment Plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enright, S; Asprinio, A; Lu, L
2014-06-01
Purpose: The purpose of this study was to compare dose distributions from film measurements to Monte Carlo generated electron treatment plans. Irradiation with electrons offers the advantages of dose uniformity in the target volume and of minimizing the dose to deeper healthy tissue. Using the Monte Carlo algorithm will improve dose accuracy in regions with heterogeneities and irregular surfaces. Methods: Dose distributions from GafChromic™ EBT3 films were compared to dose distributions from the Electron Monte Carlo algorithm in the Eclipse™ radiotherapy treatment planning system. These measurements were obtained for 6 MeV, 9 MeV and 12 MeV electrons at two depths. All phantoms studied were imported into Eclipse by CT scan. A 1 cm thick solid water template with holes for bone-like and lung-like plugs was used. Different configurations were used with the different plugs inserted into the holes. Configurations with solid-water plugs stacked on top of one another were also used to create an irregular surface. Results: The dose distributions measured from the film agreed with those from the Electron Monte Carlo treatment plan. The accuracy of the Electron Monte Carlo algorithm was also compared to that of Pencil Beam. Dose distributions from Monte Carlo had much higher pass rates than distributions from Pencil Beam when compared to the film. The pass rate for Monte Carlo was in the 80%–99% range, whereas the pass rate for Pencil Beam was as low as 10.76%. Conclusion: The dose distribution from Monte Carlo agreed with the measured dose from the film. When compared to the Pencil Beam algorithm, pass rates for Monte Carlo were much higher. Monte Carlo should be used over Pencil Beam for regions with heterogeneities and irregular surfaces.
Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R
2014-03-01
Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yue, Yingchao; Fan, Wenhui; Xiao, Tianyuan; Ma, Cheng
2013-07-01
High level architecture (HLA) is the open standard in the collaborative simulation field. Scholars have paid close attention to theoretical research on, and engineering applications of, collaborative simulation based on HLA/RTI, which extends HLA in various aspects such as functionality and efficiency. However, related study on the load balancing problem of HLA collaborative simulation is insufficient. Without load balancing, collaborative simulation under HLA/RTI may encounter performance reduction or even fatal errors. In this paper, load balancing is further divided into static and dynamic problems. A multi-objective model is established and the randomness of model parameters is taken into consideration for static load balancing, which makes the model more credible. A Monte Carlo based optimization algorithm (MCOA) is devised to achieve static load balance. For dynamic load balancing, a new type of dynamic load balancing problem is put forward with regard to variable-structured collaborative simulation under HLA/RTI. In order to minimize the influence on the running collaborative simulation, an ordinal optimization based algorithm (OOA) is devised to shorten the optimization time. Furthermore, the two algorithms are adopted in simulation experiments of different scenarios, which demonstrate their effectiveness and efficiency. An engineering experiment on collaborative simulation under HLA/RTI of high speed electricity multiple units (EMU) is also conducted to identify the credibility of the proposed models and the supportive utility of MCOA and OOA to practical engineering systems. The proposed research ensures compatibility with traditional HLA, enhances the ability to assign simulation loads onto computing units both statically and dynamically, improves the performance of the collaborative simulation system and makes full use of the hardware resources.
NASA Astrophysics Data System (ADS)
Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R.; St. James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles
2017-10-01
RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within ±3% at all points and a range within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within ±3% and distal fall-off to within 2 mm. In an anthropomorphic phantom, the gamma index (dose tolerance = 3%, distance-to-agreement = 3 mm) was greater than 90% for six out of seven planes using the RS-MC, and three out of seven for the RS-PBA. The RS-MC algorithm demonstrated improved dosimetric accuracy over the RS-PBA in the presence of homogeneous, heterogeneous and anthropomorphic phantoms. The computation performance of the RS-MC was similar to the RS-PBA algorithm. For complex disease sites like breast, head and neck, and lung cancer, the RS-MC algorithm will provide significantly more accurate treatment planning.
Hyper-Parallel Tempering Monte Carlo Method and It's Applications
NASA Astrophysics Data System (ADS)
Yan, Qiliang; de Pablo, Juan
2000-03-01
A new generalized hyper-parallel tempering Monte Carlo molecular simulation method is presented for study of complex fluids. The method is particularly useful for simulation of many-molecule complex systems, where rough energy landscapes and inherently long characteristic relaxation times can pose formidable obstacles to effective sampling of relevant regions of configuration space. The method combines several key elements from expanded ensemble formalisms, parallel-tempering, open ensemble simulations, configurational bias techniques, and histogram reweighting analysis of results. It is found to accelerate significantly the diffusion of a complex system through phase-space. In this presentation, we demonstrate the effectiveness of the new method by implementing it in grand canonical ensembles for a Lennard-Jones fluid, for the restricted primitive model of electrolyte solutions (RPM), and for polymer solutions and blends. Our results indicate that the new algorithm is capable of overcoming the large free energy barriers associated with phase transitions, thereby greatly facilitating the simulation of coexistence properties. It is also shown that the method can be orders of magnitude more efficient than previously available techniques. More importantly, the method is relatively simple and can be incorporated into existing simulation codes with minor efforts.
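The replica-exchange move at the core of any parallel-tempering scheme can be sketched as below; the hyper-parallel tempering method described here adds expanded-ensemble, open-ensemble and configurational-bias ingredients that are not shown, so this is only the basic swap acceptance rule.

```python
import math
import random

def attempt_swaps(betas, energies, rng=random.random):
    """One sweep of neighbour swaps: configurations at inverse temperatures
    beta_i and beta_j exchange with probability
    min(1, exp((beta_i - beta_j) * (E_i - E_j)))."""
    order = list(range(len(betas)))     # order[i] = index of replica at temperature slot i
    for i in range(len(betas) - 1):
        a, b = order[i], order[i + 1]
        delta = (betas[i] - betas[i + 1]) * (energies[a] - energies[b])
        if delta >= 0 or rng() < math.exp(delta):
            order[i], order[i + 1] = b, a
    return order
```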
Mattfeldt, Torsten
2011-04-01
Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
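A minimal percentile-bootstrap confidence interval of the kind described can be sketched as follows; the data values are arbitrary and the statistic is the sample mean, but any scalar statistic can be substituted.

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample the data with replacement, recompute
    the statistic, and take the alpha/2 and 1-alpha/2 empirical quantiles."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    stats = np.array([stat(rng.choice(data, size=data.size, replace=True))
                      for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

print(bootstrap_ci([2.1, 2.5, 1.9, 2.8, 2.2, 2.6, 2.0, 2.4]))   # 95% CI for the mean
```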
Cross-platform validation and analysis environment for particle physics
NASA Astrophysics Data System (ADS)
Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.
2017-11-01
A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.
PBMC: Pre-conditioned Backward Monte Carlo code for radiative transport in planetary atmospheres
NASA Astrophysics Data System (ADS)
García Muñoz, A.; Mills, F. P.
2017-08-01
PBMC (Pre-Conditioned Backward Monte Carlo) solves the vector Radiative Transport Equation (vRTE) and can be applied to planetary atmospheres irradiated from above. The code builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. In accounting for the polarization in the sampling of photon propagation directions and pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions, PBMC avoids the unstable and biased solutions of classical BMC algorithms for conservative, optically-thick, strongly-polarizing media such as Rayleigh atmospheres.
Reactive Monte Carlo sampling with an ab initio potential
NASA Astrophysics Data System (ADS)
Leiding, Jeff; Coe, Joshua D.
2016-05-01
We present the first application of reactive Monte Carlo (RxMC) in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH3 to those of ab initio molecular dynamics (AIMD). We find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the "rare-event" character of chemical reactions.
NASA Astrophysics Data System (ADS)
Plumer, M. L.; Almudallal, A. M.; Mercer, J. I.; Whitehead, J. P.; Fal, T. J.
The kinetic Monte Carlo (KMC) method developed for thermally activated magnetic reversal processes in single-layer recording media has been extended to study dual-layer exchange-coupled composite (ECC) media used in current and next generations of disc drives. The attempt frequency is derived from the Langer formalism, with the saddle point determined using a variant of the Bellman-Ford algorithm. Complications (such as stagnation) arising from coupled grains having metastable states are addressed. MH hysteresis loops are calculated over a wide range of anisotropy ratios, sweep rates, and inter-layer coupling parameters. Results are compared with standard micromagnetics at fast sweep rates and with experimental results at slow sweep rates.
Monte Carlo Study of Four-Dimensional Self-avoiding Walks of up to One Billion Steps
NASA Astrophysics Data System (ADS)
Clisby, Nathan
2018-04-01
We study self-avoiding walks on the four-dimensional hypercubic lattice via Monte Carlo simulations of walks with up to one billion steps. We study the expected logarithmic corrections to scaling, and find convincing evidence in support of the scaling form predicted by the renormalization group, with an estimate for the power of the logarithmic factor of 0.2516(14), which is consistent with the predicted value of 1/4. We also characterize the behaviour of the pivot algorithm for sampling four-dimensional self-avoiding walks, and conjecture that the probability of a pivot move being successful for an N-step walk is O((log N)^{-1/4}).
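The pivot move itself is simple to state even though the implementation referred to above (which reaches billion-step walks in four dimensions) relies on sophisticated data structures. The sketch below is a deliberately naive two-dimensional illustration with an O(N) self-avoidance check, intended only to show the structure of a single pivot attempt; all names are illustrative.

```python
import numpy as np

def pivot_step(walk, rng):
    """One naive pivot attempt for a 2D self-avoiding walk.

    walk: (N+1, 2) integer array of lattice sites with walk[0] at the origin.
    Returns the rotated walk if it remains self-avoiding, else the old walk.
    """
    rotations = [np.array([[0, -1], [1, 0]]),    # 90 degrees
                 np.array([[-1, 0], [0, -1]]),   # 180 degrees
                 np.array([[0, 1], [-1, 0]])]    # 270 degrees
    k = rng.integers(1, len(walk) - 1)           # pivot site
    R = rotations[rng.integers(len(rotations))]
    head, tail = walk[:k + 1], walk[k + 1:]
    new_tail = walk[k] + (tail - walk[k]) @ R.T  # rotate the tail about site k
    candidate = np.vstack([head, new_tail])
    if len({tuple(p) for p in candidate}) == len(candidate):  # self-avoidance
        return candidate
    return walk
```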
NASA Astrophysics Data System (ADS)
De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.
2013-02-01
We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage that they have better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need for human intervention and biasing. The high level of automation makes it an ideal tool to use on larger sets of observed data.
NASA Astrophysics Data System (ADS)
Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc
2016-02-01
The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.
Simulation of a small muon tomography station system based on RPCs
NASA Astrophysics Data System (ADS)
Chen, S.; Li, Q.; Ma, J.; Kong, H.; Ye, Y.; Gao, J.; Jiang, Y.
2014-10-01
In this work, Monte Carlo simulations were used to study the performance of a small muon tomography station based on four glass resistive plate chambers (RPCs) with a spatial resolution of approximately 1.0 mm (FWHM). We developed a simulation code to generate cosmic-ray muons with the appropriate distribution of energies and angles. The PoCA and EM algorithms were used to reconstruct the objects for comparison. We compared Z discrimination time with and without muon momentum measurement. The relation between Z discrimination time and spatial resolution was also studied. Simulation results suggest that the mean scattering angle is a better Z indicator and that upgrading to larger RPCs will improve reconstruction image quality.
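For readers unfamiliar with the PoCA reconstruction named above, the sketch below computes the point of closest approach and the scattering angle for a single muon from its incoming and outgoing straight-line tracks. This is the generic geometric kernel of PoCA, not the authors' code, and the argument names are illustrative.

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of closest approach between incoming and outgoing muon tracks.

    Each track is given by a point p and a unit direction d. Returns the
    midpoint of the shortest segment joining the two lines and the
    scattering angle between the tracks.
    """
    w0 = p_in - p_out
    a, b, c = d_in @ d_in, d_in @ d_out, d_out @ d_out
    d, e = d_in @ w0, d_out @ w0
    denom = a * c - b * b                      # ~0 for (anti)parallel tracks
    s = (b * e - c * d) / denom if denom > 1e-12 else 0.0
    t = (a * e - b * d) / denom if denom > 1e-12 else 0.0
    closest_in, closest_out = p_in + s * d_in, p_out + t * d_out
    angle = np.arccos(np.clip((d_in @ d_out) / np.sqrt(a * c), -1.0, 1.0))
    return 0.5 * (closest_in + closest_out), angle
```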
NASA Astrophysics Data System (ADS)
Zhuang, Yufei; Huang, Haibin
2014-02-01
A hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the Legendre pseudospectral method (LPM) is proposed for solving the time-optimal trajectory planning problem of underactuated spacecraft. At the beginning of the search process, an initialization generator is constructed by the PSO algorithm due to its strong global searching ability and robustness to random initial values; however, the PSO algorithm converges slowly near the global optimum. Therefore, when the change in the fitness function becomes smaller than a predefined value, the searching algorithm is switched to the LPM to accelerate the searching process. Thus, with the solutions obtained by the PSO algorithm as a set of proper initial guesses, the hybrid algorithm can find a global optimum more quickly and accurately. Results of 200 Monte Carlo simulations demonstrate that the proposed hybrid PSO-LPM algorithm has greater advantages in terms of global searching capability and convergence rate than either the PSO algorithm or the LPM alone. Moreover, the PSO-LPM algorithm is also robust to random initial values.
Brain tissue segmentation in MR images based on a hybrid of MRF and social algorithms.
Yousefi, Sahar; Azmi, Reza; Zahedi, Morteza
2012-05-01
Effective abnormality detection and diagnosis in Magnetic Resonance Images (MRIs) requires a robust segmentation strategy. Since manual segmentation is a time-consuming task which engages valuable human resources, automatic MRI segmentation has received an enormous amount of attention. Various techniques have been applied to this goal. However, Markov Random Field (MRF) based algorithms have produced reasonable results in noisy images compared to other methods. MRF seeks a label field which minimizes an energy function. The traditional minimization method, simulated annealing (SA), uses Monte Carlo simulation to reach the minimum solution at a heavy computational cost. For this reason, MRFs are rarely used in real-time processing environments. This paper proposes a novel method based on MRF and a hybrid of social algorithms, comprising an ant colony optimization (ACO) and a Gossiping algorithm, which can be used for segmenting single and multispectral MRIs in real-time environments. Combining ACO with the Gossiping algorithm helps find the better path using neighborhood information. This interaction therefore causes the algorithm to converge to an optimum solution faster. Several experiments on phantom and real images were performed. Results indicate that the proposed algorithm outperforms the traditional MRF and the hybrid MRF-ACO in speed and accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.
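As background for the comparison above, the following sketch shows the kind of traditional baseline the paper improves upon: simulated-annealing minimisation of a simple Potts-style MRF energy (squared intensity deviation from a class mean plus a penalty β for unlike 4-neighbours). It is a toy single-channel version with illustrative parameter choices, not the proposed ACO-Gossiping method.

```python
import numpy as np

def sa_potts_segment(image, n_labels=3, beta=1.5, t0=4.0, cooling=0.95,
                     sweeps=30, seed=0):
    """Simulated-annealing minimisation of a toy Potts MRF segmentation energy.

    Energy per pixel: (I - mu_label)^2 + beta * (number of unlike 4-neighbours).
    """
    rng = np.random.default_rng(seed)
    image = np.asarray(image, dtype=float)
    labels = rng.integers(n_labels, size=image.shape)
    mus = np.linspace(image.min(), image.max(), n_labels)   # fixed class means
    H, W = image.shape
    T = t0
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                old = labels[i, j]
                new = int(rng.integers(n_labels))
                if new == old:
                    continue
                nbrs = [labels[x, y]
                        for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= x < H and 0 <= y < W]
                dE = ((image[i, j] - mus[new]) ** 2
                      - (image[i, j] - mus[old]) ** 2
                      + beta * (sum(n != new for n in nbrs)
                                - sum(n != old for n in nbrs)))
                if dE < 0 or rng.random() < np.exp(-dE / T):
                    labels[i, j] = new
        T *= cooling                                          # cooling schedule
    return labels
```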
NASA Astrophysics Data System (ADS)
García Muñoz, A.; Mills, F. P.
2015-01-01
Context. The interpretation of polarised radiation emerging from a planetary atmosphere must rely on solutions to the vector radiative transport equation (VRTE). Monte Carlo integration of the VRTE is a valuable approach for its flexible treatment of complex viewing and/or illumination geometries, and it can intuitively incorporate elaborate physics. Aims: We present a novel pre-conditioned backward Monte Carlo (PBMC) algorithm for solving the VRTE and apply it to planetary atmospheres irradiated from above. As classical BMC methods, our PBMC algorithm builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. Methods: We show that the neglect of polarisation in the sampling of photon propagation directions in classical BMC algorithms leads to unstable and biased solutions for conservative, optically-thick, strongly polarising media such as Rayleigh atmospheres. The numerical difficulty is avoided by pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions. Pre-conditioning introduces a sense of history in the photon polarisation states through the simulated trajectories. Results: The PBMC algorithm is robust, and its accuracy is extensively demonstrated via comparisons with examples drawn from the literature for scattering in diverse media. Since the convergence rate for MC integration is independent of the integral's dimension, the scheme is a valuable option for estimating the disk-integrated signal of stellar radiation reflected from planets. Such a tool is relevant in the prospective investigation of exoplanetary phase curves. We lay out two frameworks for disk integration and, as an application, explore the impact of atmospheric stratification on planetary phase curves for large star-planet-observer phase angles. By construction, backward integration provides a better control than forward integration over the planet region contributing to the solution, and this presents a clear advantage when estimating the disk-integrated signal at moderate and large phase angles. A one-slab, plane-parallel version of the PBMC algorithm is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A72
Wang, Lei; Troyer, Matthias
2014-09-12
We present a new algorithm for calculating the Renyi entanglement entropy of interacting fermions using the continuous-time quantum Monte Carlo method. The algorithm only samples the interaction correction of the entanglement entropy, which by design ensures the efficient calculation of weakly interacting systems. Combined with Monte Carlo reweighting, the algorithm also performs well for systems with strong interactions. We demonstrate the potential of this method by studying the quantum entanglement signatures of the charge-density-wave transition of interacting fermions on a square lattice.
Preserving the Boltzmann ensemble in replica-exchange molecular dynamics.
Cooke, Ben; Schmidler, Scott C
2008-10-28
We consider the convergence behavior of replica-exchange molecular dynamics (REMD) [Sugita and Okamoto, Chem. Phys. Lett. 314, 141 (1999)] based on properties of the numerical integrators in the underlying isothermal molecular dynamics (MD) simulations. We show that a variety of deterministic algorithms favored by molecular dynamics practitioners for constant-temperature simulation of biomolecules fail either to be measure invariant or irreducible, and are therefore not ergodic. We then show that REMD using these algorithms also fails to be ergodic. As a result, the entire configuration space may not be explored even in an infinitely long simulation, and the simulation may not converge to the desired equilibrium Boltzmann ensemble. Moreover, our analysis shows that for initial configurations with unfavorable energy, it may be impossible for the system to reach a region surrounding the minimum energy configuration. We demonstrate these failures of REMD algorithms for three small systems: a Gaussian distribution (simple harmonic oscillator dynamics), a bimodal mixture of Gaussians distribution, and the alanine dipeptide. Examination of the resulting phase plots and equilibrium configuration densities indicates significant errors in the ensemble generated by REMD simulation. We describe a simple modification to address these failures based on a stochastic hybrid Monte Carlo correction, and prove that this is ergodic.
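The stochastic hybrid Monte Carlo correction mentioned at the end of the abstract can be sketched generically as follows: momenta are resampled from a Maxwellian, a short deterministic MD segment generates a proposal, and a Metropolis test on the total (reduced) energy accepts or rejects it. The sketch assumes a reversible, volume-preserving integrator such as velocity Verlet; it is a minimal illustration of the general idea, not the authors' specific REMD modification, and the function names are placeholders.

```python
import numpy as np

def hybrid_mc_step(x, reduced_potential, md_propagate, rng):
    """One hybrid Monte Carlo step: short MD proposal + Metropolis correction.

    reduced_potential(x) = U(x)/kT; md_propagate(x, p) must be a reversible,
    volume-preserving integrator returning the proposed (x_new, p_new).
    """
    p = rng.normal(size=np.shape(x))                      # fresh Maxwellian momenta
    h_old = reduced_potential(x) + 0.5 * np.sum(p * p)
    x_new, p_new = md_propagate(x, p)                     # deterministic MD segment
    h_new = reduced_potential(x_new) + 0.5 * np.sum(p_new * p_new)
    if rng.random() < np.exp(min(0.0, -(h_new - h_old))):
        return x_new                                      # accept proposal
    return x                                              # reject: keep old state
```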
Hazard Detection Software for Lunar Landing
NASA Technical Reports Server (NTRS)
Huertas, Andres; Johnson, Andrew E.; Werner, Robert A.; Montgomery, James F.
2011-01-01
The Autonomous Landing and Hazard Avoidance Technology (ALHAT) Project is developing a system for safe and precise manned lunar landing that involves novel sensors, but also specific algorithms. ALHAT has selected imaging LIDAR (light detection and ranging) as the sensing modality for onboard hazard detection because imaging LIDARs can rapidly generate direct measurements of the lunar surface elevation from high altitude. Then, starting with the LIDAR-based Hazard Detection and Avoidance (HDA) algorithm developed for Mars Landing, JPL has developed a mature set of HDA software for the manned lunar landing problem. Landing hazards exist everywhere on the Moon, and many of the more desirable landing sites are near the most hazardous terrain, so HDA is needed to autonomously and safely land payloads over much of the lunar surface. The HDA requirements used in the ALHAT project are to detect hazards that are 0.3 m tall or higher and slopes of 5° or greater. Steep slopes, rocks, cliffs, and gullies are all hazards for landing and, by computing the local slope and roughness in an elevation map, all of these hazards can be detected. The algorithm in this innovation is used to measure slope and roughness hazards. In addition to detecting these hazards, the HDA capability also is able to find a safe landing site free of these hazards for a lunar lander with diameter .15 m over most of the lunar surface. This software includes an implementation of the HDA algorithm, software for generating simulated lunar terrain maps for testing, hazard detection performance analysis tools, and associated documentation. The HDA software has been deployed to Langley Research Center and integrated into the POST II Monte Carlo simulation environment. The high-fidelity Monte Carlo simulations determine the required ground spacing between LIDAR samples (ground sample distances) and the noise on the LIDAR range measurement. This simulation has also been used to determine the effect of viewing on hazard detection performance. The software has also been deployed to Johnson Space Center and integrated into the ALHAT real-time Hardware-in-the-Loop testbed.
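The slope and roughness computation described above can be illustrated with a small sketch: for each window of the elevation map a plane is fitted by least squares, slope comes from the plane normal, and roughness from the residuals. This is a generic illustration of the idea, not the ALHAT flight software; the window size, cell size, and thresholds are illustrative.

```python
import numpy as np

def slope_roughness(dem, window=5, cell_size=0.1):
    """Local slope (degrees) and roughness (metres) over an elevation map.

    For each window a plane z = a*x + b*y + c is fitted by least squares;
    slope comes from the plane normal and roughness is the largest residual.
    """
    dem = np.asarray(dem, dtype=float)
    H, W = dem.shape
    ys, xs = np.mgrid[0:window, 0:window]
    A = np.column_stack([xs.ravel() * cell_size,
                         ys.ravel() * cell_size,
                         np.ones(window * window)])
    slope = np.zeros((H - window + 1, W - window + 1))
    rough = np.zeros_like(slope)
    for i in range(H - window + 1):
        for j in range(W - window + 1):
            z = dem[i:i + window, j:j + window].ravel()
            coef, *_ = np.linalg.lstsq(A, z, rcond=None)
            slope[i, j] = np.degrees(np.arctan(np.hypot(coef[0], coef[1])))
            rough[i, j] = np.max(np.abs(z - A @ coef))
    return slope, rough

# cells exceeding ALHAT-style thresholds would then be flagged as hazards, e.g.
# hazard = (slope > 5.0) | (rough > 0.3)
```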
Chen, Yunjie; Roux, Benoît
2015-08-11
Molecular dynamics (MD) trajectories based on a classical equation of motion provide a straightforward, albeit somewhat inefficient approach, to explore and sample the configurational space of a complex molecular system. While a broad range of techniques can be used to accelerate and enhance the sampling efficiency of classical simulations, only algorithms that are consistent with the Boltzmann equilibrium distribution yield a proper statistical mechanical computational framework. Here, a multiscale hybrid algorithm relying simultaneously on all-atom fine-grained (FG) and coarse-grained (CG) representations of a system is designed to improve sampling efficiency by combining the strength of nonequilibrium molecular dynamics (neMD) and Metropolis Monte Carlo (MC). This CG-guided hybrid neMD-MC algorithm comprises six steps: (1) a FG configuration of an atomic system is dynamically propagated for some period of time using equilibrium MD; (2) the resulting FG configuration is mapped onto a simplified CG model; (3) the CG model is propagated for a brief time interval to yield a new CG configuration; (4) the resulting CG configuration is used as a target to guide the evolution of the FG system; (5) the FG configuration (from step 1) is driven via a nonequilibrium MD (neMD) simulation toward the CG target; (6) the resulting FG configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-ends momentum reversal prescription is used for the neMD trajectories of the FG system to guarantee that the CG-guided hybrid neMD-MC algorithm obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The enhanced sampling achieved with the method is illustrated with a model system with hindered diffusion and explicit-solvent peptide simulations. Illustrative tests indicate that the method can yield a speedup of about 80 times for the model system and up to 21 times for polyalanine and (AAQAA)3 in water.
Semenov, Mikhail A; Terkel, Dmitri A
2003-01-01
This paper analyses the convergence of evolutionary algorithms using a technique which is based on a stochastic Lyapunov function and developed within martingale theory. This technique is used to investigate the convergence of a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function; and control parameters, responsible for the variation of fitness parameters. Although both parameters mutate randomly and independently, they converge to the "optimum" due to the direct (for fitness parameters) and indirect (for control parameters) selection. We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, similar to the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some martingale inequalities have not been proved analytically, they have been numerically validated with 0.999 confidence using Monte Carlo simulations.
Particle identification algorithms for the PANDA Endcap Disc DIRC
NASA Astrophysics Data System (ADS)
Schmidt, M.; Ali, A.; Belias, A.; Dzhygadlo, R.; Gerhardt, A.; Götzen, K.; Kalicy, G.; Krebs, M.; Lehmann, D.; Nerling, F.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Böhm, M.; Eyrich, W.; Lehmann, A.; Pfaffinger, M.; Uhlig, F.; Düren, M.; Etzelmüller, E.; Föhl, K.; Hayrapetyan, A.; Kreutzfeld, K.; Merle, O.; Rieke, J.; Wasem, T.; Achenbach, P.; Cardinali, M.; Hoek, M.; Lauth, W.; Schlimme, S.; Sfienti, C.; Thiel, M.
2017-12-01
The Endcap Disc DIRC has been developed to provide excellent particle identification for the future PANDA experiment by separating pions and kaons up to a momentum of 4 GeV/c with a separation power of 3 standard deviations in the polar angle region from 5° to 22°. This goal will be achieved using dedicated particle identification algorithms based on likelihood methods, which will be applied in offline analysis and online event filtering. This paper evaluates the resulting PID performance using Monte Carlo simulations to study basic single-track PID as well as the analysis of complex physics channels. The online reconstruction algorithm has been tested with a Virtex-4 FPGA card and optimized with respect to the resulting constraints.
RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing
NASA Astrophysics Data System (ADS)
Gui, Guan; Xu, Li; Adachi, Fumiyuki
2014-12-01
Nonlinear sparse sensing (NSS) techniques have been adopted for realizing compressive sensing in many applications such as radar imaging. Unlike the NSS, in this paper, we propose an adaptive sparse sensing (ASS) approach using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm, which depends on several given parameters, i.e., the reweighting factor, the regularization parameter, and the initial step size. First, based on the independence assumption, the Cramer-Rao lower bound (CRLB) is derived for performance comparison. In addition, a reweighting factor selection method is proposed for achieving robust estimation performance. Finally, to verify the algorithm, Monte Carlo-based computer simulations are given to show that the ASS achieves much better mean square error (MSE) performance than the NSS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, D; Jung, J; Suh, T
2014-06-01
Purpose: To confirm the feasibility of acquiring a three-dimensional single photon emission computed tomography (SPECT) image from boron neutron capture therapy (BNCT) using Monte Carlo simulation. Methods: The pixelated SPECT detector, collimator and phantom were simulated using the Monte Carlo N-Particle eXtended (MCNPX) simulation tool. A thermal neutron source (<1 eV) was used to react with the boron uptake region (BUR) in the phantom. Each geometry had a spherical pattern, and three different BURs (A, B and C regions, density: 2.08 g/cm3) were located in the middle of the brain phantom. The data from 128 projections for each sorting process were used to achieve image reconstruction. The ordered subset expectation maximization (OSEM) reconstruction algorithm was used to obtain a tomographic image with eight subsets and five iterations. Receiver operating characteristic (ROC) curve analysis was used to evaluate the geometric accuracy of the reconstructed image. Results: The OSEM image was compared with the original phantom pattern image. The area under the curve (AUC) was calculated as the gross area under each ROC curve. The three calculated AUC values were 0.738 (A region), 0.623 (B region), and 0.817 (C region). The differences between the distances separating the centers of two boron regions and the distances between the maximum count points were 0.3 cm, 1.6 cm and 1.4 cm. Conclusion: The possibility of extracting a 3D BNCT SPECT image was confirmed using the Monte Carlo simulation and the OSEM algorithm. The prospects for obtaining an actual BNCT SPECT image were estimated from the quality of the simulated image and the simulation conditions. When multiple tumor regions should be treated using BNCT, a reasonable model to determine how many useful images can be obtained from the SPECT could be provided to BNCT facilities. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, Information and Communication Technologies (ICT) and Future Planning (MSIP) (Grant No. 200900420) and the Radiation Technology Research and Development program (Grant No. 2013043498), Republic of Korea.
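The OSEM reconstruction named in the Methods can be sketched compactly for a dense system matrix: each subset applies one multiplicative MLEM-style update. The sketch below is the textbook algorithm, not the authors' SPECT-specific implementation, and the subset scheme shown (striding over projection bins) is only illustrative. With eight subsets and five iterations, as in the abstract, the loop body would run forty subset updates in total.

```python
import numpy as np

def osem(A, y, n_subsets=8, n_iter=5, eps=1e-12):
    """Ordered-subset expectation maximisation for emission tomography.

    A: (n_bins, n_voxels) system matrix, y: measured projection counts.
    Subsets are formed by striding over projection bins.
    """
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            ratio = ys / (As @ x + eps)                 # measured / forward projection
            x *= (As.T @ ratio) / (As.T @ np.ones(len(idx)) + eps)
    return x
```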
A method for optimizing the cosine response of solar UV diffusers
NASA Astrophysics Data System (ADS)
Pulli, Tomi; Kärhä, Petri; Ikonen, Erkki
2013-07-01
Instruments measuring global solar ultraviolet (UV) irradiance at the surface of the Earth need to collect radiation from the entire hemisphere. Entrance optics with angular response as close as possible to the ideal cosine response are necessary to perform these measurements accurately. Typically, the cosine response is obtained using a transmitting diffuser. We have developed an efficient method based on a Monte Carlo algorithm to simulate radiation transport in the solar UV diffuser assembly. The algorithm takes into account propagation, absorption, and scattering of the radiation inside the diffuser material. The effects of the inner sidewalls of the diffuser housing, the shadow ring, and the protective weather dome are also accounted for. The software implementation of the algorithm is highly optimized: a simulation of 10^9 photons takes approximately 10 to 15 min to complete on a typical high-end PC. The results of the simulations agree well with the measured angular responses, indicating that the algorithm can be used to guide the diffuser design process. Cost savings can be obtained when simulations are carried out before diffuser fabrication as compared to a purely trial-and-error-based diffuser optimization. The algorithm was used to optimize two types of detectors, one with a planar diffuser and the other with a spherically shaped diffuser. The integrated cosine errors (which indicate the relative measurement error caused by the nonideal angular response under isotropic sky radiance) of these two detectors were calculated to be f2 = 1.4% and 0.66%, respectively.
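The integrated cosine error under isotropic sky radiance quoted at the end can be estimated directly from a measured or simulated angular response: under isotropic radiance the signal is proportional to the response integrated over the hemisphere, and the error is the relative deviation from the ideal cosine collector. The sketch below uses a simple trapezoid quadrature; the exact normalisation conventions of the f2 figure of merit may differ slightly, so treat this as an approximation, and the example response is fabricated.

```python
import numpy as np

def isotropic_cosine_error(theta_deg, response):
    """Relative measurement error under isotropic sky radiance.

    theta_deg: zenith angles (0..90); response: signal normalised to 1 at
    normal incidence. Trapezoid quadrature over the hemisphere.
    """
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    resp = np.asarray(response, dtype=float)

    def trap(y):
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(theta)))

    measured = trap(resp * np.sin(theta))          # real collector
    ideal = trap(np.cos(theta) * np.sin(theta))    # perfect cosine collector
    return measured / ideal - 1.0

# example: a diffuser that under-responds at grazing angles
theta = np.linspace(0.0, 90.0, 91)
resp = np.cos(np.radians(theta)) * (1.0 - 0.05 * np.sin(np.radians(theta)) ** 2)
print(f"integrated cosine error: {100 * isotropic_cosine_error(theta, resp):+.2f}%")
```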
Rare event simulation in radiation transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kollman, Craig
1993-10-01
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high-dimensional state spaces and irregular geometries, so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero-variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
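A minimal numerical illustration of the likelihood-ratio reweighting described above, for a rare tail probability of a standard normal: the sampling distribution is shifted onto the rare region and every sample is multiplied by the ratio of the true to the simulated density, keeping the estimator unbiased. The shift choice here is the simplest tilting-style heuristic and is only illustrative, not the dissertation's adaptive algorithm.

```python
import numpy as np

def rare_tail_prob(threshold, n=100_000, seed=0):
    """Estimate P(Z > threshold) for standard normal Z by sampling from
    N(threshold, 1) and reweighting with the likelihood ratio."""
    rng = np.random.default_rng(seed)
    z = rng.normal(loc=threshold, scale=1.0, size=n)        # biased proposal
    # likelihood ratio phi(z) / phi(z - threshold) = exp(-threshold*z + threshold^2/2)
    w = np.exp(-threshold * z + 0.5 * threshold ** 2)
    return float(np.mean((z > threshold) * w))

# P(Z > 6) is about 1e-9, essentially unreachable by plain Monte Carlo with 1e5 samples
print(rare_tail_prob(6.0))
```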
Monte Carlo simulation of PET and SPECT imaging of 90Y
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Akihiko, E-mail: takahsr@hs.med.kyushu-u.ac.jp; Sasaki, Masayuki; Himuro, Kazuhiko
2015-04-15
Purpose: Yttrium-90 (90Y) is traditionally thought of as a pure beta emitter, and is used in targeted radionuclide therapy, with imaging performed using bremsstrahlung single-photon emission computed tomography (SPECT). However, because 90Y also emits positrons through internal pair production with a very small branching ratio, positron emission tomography (PET) imaging is also available. Because of the insufficient image quality of 90Y bremsstrahlung SPECT, PET imaging has been suggested as an alternative. In this paper, the authors present a Monte Carlo-based simulation-reconstruction framework for 90Y to comprehensively analyze the PET and SPECT imaging techniques and to quantitatively consider the disadvantages associated with them. Methods: Our PET and SPECT simulation modules were developed using Monte Carlo simulation of Electrons and Photons (MCEP), developed by Dr. S. Uehara. The PET code (MCEP-PET) generates a sinogram, and reconstructs the tomographic image using a time-of-flight ordered subset expectation maximization (TOF-OSEM) algorithm with attenuation compensation. To evaluate MCEP-PET, simulated results of 18F PET imaging were compared with experimental results. The results confirmed that MCEP-PET can simulate the experimental results very well. The SPECT code (MCEP-SPECT) models the collimator and NaI detector system, and generates the projection images and projection data. To save computational time, the authors adopt the prerecorded 90Y bremsstrahlung photon data calculated by MCEP. The projection data are also reconstructed using the OSEM algorithm. The authors simulated PET and SPECT images of a water phantom containing six hot spheres filled with different concentrations of 90Y without background activity. The amount of activity was 163 MBq, with an acquisition time of 40 min. Results: The simulated 90Y PET image accurately reproduced the experimental results. The PET image is visually superior to the SPECT image because of the low background noise. The simulation reveals that the number of detected photons in SPECT is comparable to that in PET, but a large fraction (approximately 75%) of scattered and penetrating photons contaminates the SPECT image. The lower limit of 90Y detection in the SPECT image was approximately 200 kBq/ml, while that in the PET image was approximately 100 kBq/ml. Conclusions: By comparing the background noise level and the image concentration profile of both techniques, PET image quality was determined to be superior to that of bremsstrahlung SPECT. The developed simulation codes will be very useful in future investigations of PET and bremsstrahlung SPECT imaging of 90Y.
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Although potentially more efficient computationally, the system statistics they generate suffer from significant bias unless τ is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so is more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient, as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
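For context on the base estimator discussed above, the sketch below is a plain fixed-step tau-leap integrator for a reaction network: per step, each reaction channel fires a Poisson number of times with mean a_j(x)·τ. It is deliberately the biased, non-adaptive building block that the multi-level and adaptive schemes improve upon; the example network and rate constant are illustrative.

```python
import numpy as np

def tau_leap(x0, stoich, propensity, tau, t_end, seed=0):
    """Fixed-step tau-leaping for a chemical reaction network.

    stoich: (n_reactions, n_species) state-change vectors;
    propensity(x): array of reaction propensities a_j(x).
    """
    rng = np.random.default_rng(seed)
    x, t = np.array(x0, dtype=float), 0.0
    while t < t_end:
        a = propensity(x)
        k = rng.poisson(a * tau)                 # firings per channel this step
        x = np.maximum(x + k @ stoich, 0.0)      # crude guard against negative counts
        t += tau
    return x

# example: dimerisation 2A -> B with rate constant c = 0.001
stoich = np.array([[-2, 1]])
prop = lambda x: np.array([0.001 * x[0] * (x[0] - 1) / 2])
print(tau_leap([1000, 0], stoich, prop, tau=0.1, t_end=10.0))
```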
Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods
NASA Astrophysics Data System (ADS)
Koreň, Milan; Mokroš, Martin; Bucha, Tomáš
2017-12-01
This study compares the accuracies of diameter at breast height (DBH) estimation by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in the single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method only in single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
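None of the five fitting methods compared above is spelled out in the abstract, so as a neutral illustration the sketch below shows a standard algebraic (Kåsa) least-squares circle fit, one common way to turn a stem cross-section point cloud into a DBH estimate; it is not necessarily one of the methods evaluated in the study.

```python
import numpy as np

def fit_circle_kasa(xy):
    """Algebraic (Kasa) least-squares circle fit to stem cross-section points.

    Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense and
    returns (xc, yc, diameter).
    """
    xy = np.asarray(xy, dtype=float)
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    xc, yc = -D / 2.0, -E / 2.0
    r = np.sqrt(xc ** 2 + yc ** 2 - F)
    return xc, yc, 2.0 * r
```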
Scattering properties of electromagnetic waves from metal object in the lower terahertz region
NASA Astrophysics Data System (ADS)
Chen, Gang; Dang, H. X.; Hu, T. Y.; Su, Xiang; Lv, R. C.; Li, Hao; Tan, X. M.; Cui, T. J.
2018-01-01
An efficient hybrid algorithm is proposed to analyze the electromagnetic scattering properties of metal objects in the lower terahertz (THz) band. A metal object can be viewed as a perfectly electrically conducting object with a slightly rough surface in the lower THz region. Hence the THz field scattered from a metal object can be divided into coherent and incoherent parts. The physical optics and truncated-wedge incremental-length diffraction coefficients methods are combined to compute the coherent part, while the small perturbation method is used for the incoherent part. With the Monte Carlo method, the radar cross section of the rough metal surface is computed by the multilevel fast multipole algorithm and the proposed hybrid algorithm, respectively. The numerical results show that the proposed algorithm has good accuracy and simulates the scattering properties rapidly in the lower THz region.
Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks
NASA Astrophysics Data System (ADS)
Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.
2015-12-01
A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real-world topography can be compared to recent real-world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-2013 Tolbachik flow, Kamchatka, Russia, to 80%. We also evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
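The two posterior metrics named above reduce to simple cell counts once the simulated and observed inundation areas are rasterised on a common grid, as in this hedged sketch; the grid representation and variable names are assumptions for illustration, not the MOLASSES interface.

```python
import numpy as np

def predictive_values(simulated, observed):
    """Positive and negative predictive value of a lava-flow forecast.

    simulated, observed: boolean inundation grids of identical shape.
    P(A|B)       = fraction of simulated-inundated cells truly inundated,
    P(notA|notB) = fraction of simulated-dry cells that stayed dry.
    """
    sim, obs = np.asarray(simulated, bool), np.asarray(observed, bool)
    ppv = np.logical_and(obs, sim).sum() / max(int(sim.sum()), 1)
    npv = np.logical_and(~obs, ~sim).sum() / max(int((~sim).sum()), 1)
    return ppv, npv
```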
Mille, Matthew M; Jung, Jae Won; Lee, Choonik; Kuzmin, Gleb A; Lee, Choonsik
2018-06-01
Radiation dosimetry is an essential input for epidemiological studies of radiotherapy patients aimed at quantifying the dose-response relationship of late-term morbidity and mortality. Individualised organ dose must be estimated for all tissues of interest located in-field, near-field, or out-of-field. Whereas conventional measurement approaches are limited to points in water or anthropomorphic phantoms, computational approaches using patient images or human phantoms offer greater flexibility and can provide more detailed three-dimensional dose information. In the current study, we systematically compared four different dose calculation algorithms so that dosimetrists and epidemiologists can better understand the advantages and limitations of the various approaches at their disposal. The four dose calculation algorithms considered were as follows: the (1) Analytical Anisotropic Algorithm (AAA) and (2) Acuros XB algorithm (Acuros XB), as implemented in the Eclipse treatment planning system (TPS); (3) a Monte Carlo radiation transport code, EGSnrc; and (4) an accelerated Monte Carlo code, the x-ray Voxel Monte Carlo (XVMC). The four algorithms were compared in terms of their accuracy and appropriateness in the context of dose reconstruction for epidemiological investigations. Accuracy in peripheral dose was evaluated first by benchmarking the calculated dose profiles against measurements in a homogeneous water phantom. Additional simulations in a heterogeneous cylinder phantom evaluated the performance of the algorithms in the presence of tissue heterogeneity. In general, we found that the algorithms contained within the commercial TPS (AAA and Acuros XB) were fast and accurate in-field or near-field, but not acceptable out-of-field. Therefore, the TPS is best suited for epidemiological studies involving large cohorts and where the organs of interest are located in-field or partially in-field. The EGSnrc and XVMC codes showed excellent agreement with measurements both in-field and out-of-field. The EGSnrc code was the most accurate dosimetry approach, but was too slow to be used for large-scale epidemiological cohorts. The XVMC code showed similar accuracy to EGSnrc, but was significantly faster, and thus epidemiological applications seem feasible, especially when the organs of interest reside far away from the field edge.
NASA Astrophysics Data System (ADS)
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-05-01
Currently, a tremendous amount of space debris in Earth's orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for space debris of any amount and for any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on Tesla C2075 of NVIDIA. The simulation results demonstrate that with the same computational accuracy as that of a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction of over 150 Chinese spacecraft for a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtration, constellation design, and Monte-Carlo simulation of an orbital computation.
Reconstruction of Human Monte Carlo Geometry from Segmented Images
NASA Astrophysics Data System (ADS)
Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican
2014-06-01
Human computational phantoms have been used extensively for scientific experimental analysis and experimental simulation. This article presents a method for human geometry reconstruction from a series of segmented images of a Chinese visible human dataset. The phantom geometry can describe the detailed structure of an organ and can be converted into the input files of Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN, has been established by the FDS Team; it contains about 28.8 billion voxels. For convenient processing, different organs in the images were segmented with different RGB colors and the voxels were assigned positions within the dataset. For refinement, the positions were first sampled. Second, although the large numbers of voxels inside an organ are three-dimensionally adjacent, there was no thorough merging method to reduce the number of cells needed to describe an organ. In this study, the voxels on the organ surface were taken into account in the merging, which produces fewer cells for the organs. At the same time, an index-based sorting algorithm was put forward to speed up the merging. Finally, Rad-HUMAN, which includes a total of 46 organs and tissues, was described by cuboids in the Monte Carlo geometry for the simulation. The Monte Carlo geometry was constructed directly from the segmented images and the voxels were merged exhaustively. Each organ geometry model was constructed without ambiguity or self-crossing, and its geometry information represents the accurate appearance and precise interior structure of the organ. The constructed geometry, which largely retains the original shape of the organs, can easily be written to the input files of different Monte Carlo codes such as MCNP. Its generality and high performance were experimentally verified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T; Lin, H; Xu, X
Purpose: To develop a nuclear medicine dosimetry module for the GPU-based Monte Carlo code ARCHER. Methods: We have developed a nuclear medicine dosimetry module for the fast Monte Carlo code ARCHER. The coupled electron-photon Monte Carlo transport kernel included in ARCHER is built upon the Dose Planning Method code (DPM). The developed module manages the radioactive decay simulation by consecutively tracking several types of radiation on a per-disintegration basis using the statistical sampling method. Optimization techniques such as persistent threads and prefetching are studied and implemented. The developed module is verified against the VIDA code, which is based on the Geant4 toolkit and has previously been verified against OLINDA/EXM. A voxelized geometry is used in the preliminary test: a sphere made of ICRP soft tissue is surrounded by a box filled with water. A uniform activity distribution of I-131 is assumed in the sphere. Results: The self-absorption dose factors (mGy/MBqs) of the sphere with varying diameters are calculated by ARCHER and VIDA, respectively. ARCHER's results are in agreement with VIDA's, which were obtained from a previous publication. VIDA takes hours of CPU time to finish the computation, while ARCHER takes 4.31 seconds for the 12.4-cm uniform activity sphere case. For a fairer CPU-GPU comparison, more effort will be made to eliminate the algorithmic differences. Conclusion: The coupled electron-photon Monte Carlo code ARCHER has been extended to radioactive decay simulation for nuclear medicine dosimetry. The developed code exhibits good performance in our preliminary test. The GPU-based Monte Carlo code is developed with grant support from the National Institute of Biomedical Imaging and Bioengineering through an R01 grant (R01EB015478).
Yoriyaz, Hélio; Moralles, Maurício; Siqueira, Paulo de Tarso Dalledone; Guimarães, Carla da Costa; Cintra, Felipe Belonsi; dos Santos, Adimir
2009-11-01
Radiopharmaceutical applications in nuclear medicine require a detailed dosimetry estimate of the radiation energy delivered to the human tissues. Over the past years, several publications addressed the problem of internal dose estimate in volumes of several sizes considering photon and electron sources. Most of them used Monte Carlo radiation transport codes. Despite the widespread use of these codes due to the variety of resources and potentials they offered to carry out dose calculations, several aspects like physical models, cross sections, and numerical approximations used in the simulations still remain an object of study. Accurate dose estimate depends on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes and composed as five different biological tissues. Considerable discrepancies have been found in some cases not only between the different codes but also between different cross sections and algorithms in the same code. Maximum differences found between the two codes are 5.0% and 10%, respectively, for photons and electrons. Even for simple problems as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters in the simulation.
Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor
NASA Astrophysics Data System (ADS)
Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert
2009-10-01
Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA ® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer. Program summary: Program title: Phoogle-C/Phoogle-G Catalogue identifier: AEEB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 51 264 No. of bytes in distributed program, including test data, etc.: 2 238 805 Distribution format: tar.gz Programming language: C++ Computer: Designed for Intel PCs. Phoogle-G requires an NVIDIA graphics card with support for CUDA 1.1 Operating system: Windows XP Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures RAM: 1 GB Classification: 21.1 External routines: Charles Karney random number library. Microsoft Foundation Class library. NVIDIA CUDA library [1]. Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. Generally, parallel computing can be expensive, but recent advances in consumer grade graphics cards have opened the possibility of high-performance desktop parallel-computing. Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer grade graphics card from NVIDIA. Restrictions: The graphics card implementation uses single precision floating point numbers for all calculations. Only photon transport from an isotropic point-source is supported. The graphics-card version has no user interface. The simulation parameters must be set in the source code. The desktop version has a simple user interface; however some properties can only be accessed through an ActiveX client (such as Matlab). Additional comments: The random number library used has an LGPL (http://www.gnu.org/copyleft/lesser.html) licence. Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium. References: [1] http://www.nvidia.com/object/cuda_home.html. [2] S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
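To give a feel for the kind of kernel being accelerated, the sketch below is a minimal CPU-side random walk for photons from an isotropic point source in an infinite homogeneous medium: exponential free paths at rate μt and survival with probability μs/μt at each collision. It follows the general structure of such codes rather than the Phoogle implementation, and the parameter values are illustrative. A useful sanity check is that the mean total path length should approach 1/μa.

```python
import numpy as np

def mean_photon_path(mu_s, mu_a, n_photons=20_000, seed=0):
    """Mean total path length of photons from an isotropic point source in an
    infinite homogeneous medium (each collision either scatters or absorbs)."""
    rng = np.random.default_rng(seed)
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    total = 0.0
    for _ in range(n_photons):
        path, alive = 0.0, True
        while alive:
            path += -np.log(1.0 - rng.random()) / mu_t   # exponential free path
            alive = rng.random() < albedo                # scatter (True) or absorb
        total += path
    return total / n_photons

print(mean_photon_path(mu_s=9.0, mu_a=1.0))   # should be close to 1/mu_a = 1.0
```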
NASA Astrophysics Data System (ADS)
Grujicic, M.; Ramaswami, S.; Snipes, J. S.; Avuthu, V.; Galgalikar, R.; Zhang, Z.
2015-09-01
A thermo-mechanical finite element analysis of the friction stir welding (FSW) process is carried out and the evolution of the material state (e.g., temperature, the extent of plastic deformation, etc.) monitored. Subsequently, the finite-element results are used as input to a Monte-Carlo simulation algorithm in order to predict the evolution of the grain microstructure within different weld zones, during the FSW process and the subsequent cooling of the material within the weld to room temperature. To help delineate different weld zones, (a) temperature and deformation fields during the welding process, and during the subsequent cooling, are monitored; and (b) competition between the grain growth (driven by the reduction in the total grain-boundary surface area) and dynamic-recrystallization grain refinement (driven by the replacement of highly deformed material with an effectively "dislocation-free" material) is simulated. The results obtained clearly revealed that different weld zones form as a result of different outcomes of the competition between the grain growth and grain refinement processes.
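The grain-growth half of the competition described above is commonly handled with a Potts-model Monte Carlo scheme, in which each site carries a grain ID and the energy counts unlike neighbour pairs. The sketch below is a generic 2D Metropolis version with illustrative parameters; it is not the coupled, finite-element-driven simulation used in the paper.

```python
import numpy as np

def potts_grain_growth(grid, n_sweeps=100, temperature=0.5, seed=0):
    """Metropolis Potts-model sweeps for curvature-driven grain growth.

    grid: 2D integer array of grain IDs; the energy counts unlike
    4-neighbour pairs (periodic boundaries).
    """
    rng = np.random.default_rng(seed)
    grid = np.array(grid, copy=True)
    H, W = grid.shape
    for _ in range(n_sweeps * H * W):
        i, j = int(rng.integers(H)), int(rng.integers(W))
        nbrs = [grid[(i - 1) % H, j], grid[(i + 1) % H, j],
                grid[i, (j - 1) % W], grid[i, (j + 1) % W]]
        new = nbrs[int(rng.integers(4))]        # try adopting a neighbour's ID
        old = grid[i, j]
        if new == old:
            continue
        dE = sum(n != new for n in nbrs) - sum(n != old for n in nbrs)
        if dE <= 0 or rng.random() < np.exp(-dE / temperature):
            grid[i, j] = new
    return grid
```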
Monte Carlo simulations in radiotherapy dosimetry.
Andreo, Pedro
2018-06-27
The use of the Monte Carlo (MC) method in radiotherapy dosimetry has increased almost exponentially in the last decades. Its widespread use in the field has converted this computer simulation technique into a common tool for reference and treatment planning dosimetry calculations. This work reviews the different MC calculations made on dosimetric quantities, like stopping-power ratios and perturbation correction factors required for reference ionization chamber dosimetry, as well as the fully realistic MC simulations currently available of clinical accelerators, detectors and patient treatment planning. Issues are raised that include the necessity for consistency in the data throughout the entire dosimetry chain in reference dosimetry, and how Bragg-Gray theory breaks down for small photon fields. Both aspects are less critical for MC treatment planning applications, but there are important constraints like tissue characterization and its patient-to-patient variability, which, together with the conversion between dose-to-water and dose-to-tissue, are analysed in detail. Although these constraints are common to all methods and algorithms used in different types of treatment planning systems, they make the uncertainties involved in MC treatment planning still remain "uncertain".
Sharma, Diksha; Badano, Aldo
2013-03-01
hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry based on a hybrid concept that maximizes the utilization of available CPU and graphics processing unit processors in a workstation. The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of modulation transfer function and calculate the root mean square difference and Swank factors from simulated and experimental results. The comparison suggests that hybridMANTIS better matches the experimental data as compared to MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS with speed-ups up to 5260. hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Shi, Xiaodong; Udpa, Lalita; Deng, Yiming
2018-05-01
Magnetic Barkhausen noise (MBN) is measured in low carbon steels and the relationship between carbon content and parameter extracted from MBN signal has been investigated. The parameter is extracted experimentally by fitting the original profiles with two Gaussian curves. The gap between two peaks (ΔG) of fitted Gaussian curves shows a better linear relationship with carbon contents of samples in the experiment. The result has been validated with simulation by Monte Carlo method. To ensure the sensitivity of measurement, advanced multi-objective optimization algorithm Non-dominant sorting genetic algorithm III (NSGA III) has been used to fulfill the optimization of the magnetic core of sensor.
Aspects of GPU perfomance in algorithms with random memory access
NASA Astrophysics Data System (ADS)
Kashkovsky, Alexander V.; Shershnev, Anton A.; Vashchenkov, Pavel V.
2017-10-01
The numerical code for solving the Boltzmann equation on the hybrid computational cluster using the Direct Simulation Monte Carlo (DSMC) method showed that on Tesla K40 accelerators computational performance drops dramatically with increase of percentage of occupied GPU memory. Testing revealed that memory access time increases tens of times after certain critical percentage of memory is occupied. Moreover, it seems to be the common problem of all NVidia's GPUs arising from its architecture. Few modifications of the numerical algorithm were suggested to overcome this problem. One of them, based on the splitting the memory into "virtual" blocks, resulted in 2.5 times speed up.
Lee, Tae Kyu; Sandison, George A
2003-01-21
Electron backscattering has been incorporated into the energy-dependent electron loss (EL) model and the resulting algorithm is applied to predict dose deposition in slab heterogeneous media. This algorithm utilizes a reflection coefficient from the interface that is computed on the basis of Goudsmit-Saunderson theory and an average energy for the backscattered electrons based on Everhart's theory. Predictions of dose deposition in slab heterogeneous media are compared to the Monte Carlo based dose planning method (DPM) and a numerical discrete ordinates method (DOM). The slab media studied comprised water/Pb, water/Al, water/bone, water/bone/water, and water/lung/water, and incident electron beam energies of 10 MeV and 18 MeV. The predicted dose enhancement due to backscattering is accurate to within 3% of dose maximum even for lead as the backscattering medium. Dose discrepancies at large depths beyond the interface were as high as 5% of dose maximum and we speculate that this error may be attributed to the EL model assuming a Gaussian energy distribution for the electrons at depth. The computational cost is low compared to Monte Carlo simulations making the EL model attractive as a fast dose engine for dose optimization algorithms. The predictive power of the algorithm demonstrates that the small angle scattering restriction on the EL model can be overcome while retaining dose calculation accuracy and requiring only one free variable, chi, in the algorithm to be determined in advance of calculation.
The energy-dependent electron loss model: backscattering and application to heterogeneous slab media
NASA Astrophysics Data System (ADS)
Lee, Tae Kyu; Sandison, George A.
2003-01-01
Electron backscattering has been incorporated into the energy-dependent electron loss (EL) model and the resulting algorithm is applied to predict dose deposition in slab heterogeneous media. This algorithm utilizes a reflection coefficient from the interface that is computed on the basis of Goudsmit-Saunderson theory and an average energy for the backscattered electrons based on Everhart's theory. Predictions of dose deposition in slab heterogeneous media are compared to the Monte Carlo based dose planning method (DPM) and a numerical discrete ordinates method (DOM). The slab media studied comprised water/Pb, water/Al, water/bone, water/bone/water, and water/lung/water, and incident electron beam energies of 10 MeV and 18 MeV. The predicted dose enhancement due to backscattering is accurate to within 3% of dose maximum even for lead as the backscattering medium. Dose discrepancies at large depths beyond the interface were as high as 5% of dose maximum and we speculate that this error may be attributed to the EL model assuming a Gaussian energy distribution for the electrons at depth. The computational cost is low compared to Monte Carlo simulations making the EL model attractive as a fast dose engine for dose optimization algorithms. The predictive power of the algorithm demonstrates that the small angle scattering restriction on the EL model can be overcome while retaining dose calculation accuracy and requiring only one free variable, χ, in the algorithm to be determined in advance of calculation.
Beland, Laurent Karim; Osetskiy, Yury N.; Stoller, Roger E.; ...
2015-02-07
Here, we present a comparison of the Kinetic Activation–Relaxation Technique (k-ART) and the Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC), two off-lattice, on-the-fly Kinetic Monte Carlo (KMC) techniques that were recently used to solve several materials science problems. We show that if the initial displacements are localized the dimer method and the Activation–Relaxation Technique nouveau provide similar performance. We also show that k-ART and SEAKMC, although based on different approximations, are in agreement with each other, as demonstrated by the examples of 50 vacancies in a 1950-atom Fe box and of interstitial loops in 16,000-atom boxes. Generally speaking, k-ART’s treatment ofmore » geometry and flickers is more flexible, e.g. it can handle amorphous systems, and rigorous than SEAKMC’s, while the later’s concept of active volumes permits a significant speedup of simulations for the systems under consideration and therefore allows investigations of processes requiring large systems that are not accessible if not localizing calculations.« less
Bergmann, Ryan M.; Rowland, Kelly L.; Radnović, Nikola; ...
2017-05-01
In this companion paper to "Algorithmic Choices in WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs" (doi:10.1016/j.anucene.2014.10.039), the WARP Monte Carlo neutron transport framework for graphics processing units (GPUs) is benchmarked against production-level central processing unit (CPU) Monte Carlo neutron transport codes for both performance and accuracy. We compare neutron flux spectra, multiplication factors, runtimes, speedup factors, and costs of various GPU and CPU platforms running either WARP, Serpent 2.1.24, or MCNP 6.1. WARP compares well with the results of the production-level codes, and it is shown that on the newestmore » hardware considered, GPU platforms running WARP are between 0.8 to 7.6 times as fast as CPU platforms running production codes. Also, the GPU platforms running WARP were between 15% and 50% as expensive to purchase and between 80% to 90% as expensive to operate as equivalent CPU platforms performing at an equal simulation rate.« less
A Modified Monte Carlo Method for Carrier Transport in Germanium, Free of Isotropic Rates
NASA Astrophysics Data System (ADS)
Sundqvist, Kyle
2010-03-01
We present a new method for carrier transport simulation, relevant for high-purity germanium < 100 > at a temperature of 40 mK. In this system, the scattering of electrons and holes is dominated by spontaneous phonon emission. Free carriers are always out of equilibrium with the lattice. We must also properly account for directional effects due to band structure, but there are many cautions in the literature about treating germanium in particular. These objections arise because the germanium electron system is anisotropic to an extreme degree, while standard Monte Carlo algorithms maintain a reliance on isotropic, integrated rates. We re-examine Fermi's Golden Rule to produce a Monte Carlo method free of isotropic rates. Traditional Monte Carlo codes implement particle scattering based on an isotropically averaged rate, followed by a separate selection of the particle's final state via a momentum-dependent probability. In our method, the kernel of Fermi's Golden Rule produces analytical, bivariate rates which allow for the simultaneous choice of scatter and final state selection. Energy and momentum are automatically conserved. We compare our results to experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergmann, Ryan M.; Rowland, Kelly L.; Radnović, Nikola
In this companion paper to "Algorithmic Choices in WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs" (doi:10.1016/j.anucene.2014.10.039), the WARP Monte Carlo neutron transport framework for graphics processing units (GPUs) is benchmarked against production-level central processing unit (CPU) Monte Carlo neutron transport codes for both performance and accuracy. We compare neutron flux spectra, multiplication factors, runtimes, speedup factors, and costs of various GPU and CPU platforms running either WARP, Serpent 2.1.24, or MCNP 6.1. WARP compares well with the results of the production-level codes, and it is shown that on the newestmore » hardware considered, GPU platforms running WARP are between 0.8 to 7.6 times as fast as CPU platforms running production codes. Also, the GPU platforms running WARP were between 15% and 50% as expensive to purchase and between 80% to 90% as expensive to operate as equivalent CPU platforms performing at an equal simulation rate.« less
Widesott, Lamberto; Lorentini, Stefano; Fracchiolla, Francesco; Farace, Paolo; Schwarz, Marco
2018-05-04
validation of a commercial Monte Carlo (MC) algorithm (RayStation ver6.0.024) for the treatment of brain tumours with pencil beam scanning (PBS) proton therapy, comparing it via measurements and analytical calculations in clinically realistic scenarios. Methods: For the measurements a 2D ion chamber array detector (MatriXX PT)) was placed underneath the following targets: 1) anthropomorphic head phantom (with two different thickness) and 2) a biological sample (i.e. half lamb's head). In addition, we compared the MC dose engine vs. the RayStation pencil beam (PB) algorithm clinically implemented so far, in critical conditions such as superficial targets (i.e. in need of range shifter), different air gaps and gantry angles to simulate both orthogonal and tangential beam arrangements. For every plan the PB and MC dose calculation were compared to measurements using a gamma analysis metrics (3%, 3mm). Results: regarding the head phantom the gamma passing rate (GPR) was always >96% and on average > 99% for the MC algorithm; PB algorithm had a GPR ≤90% for all the delivery configurations with single slab (apart 95 % GPR from gantry 0° and small air gap) and in case of two slabs of the head phantom the GPR was >95% only in case of small air gaps for all the three (0°, 45°,and 70°) simulated beam gantry angles. Overall the PB algorithm tends to overestimate the dose to the target (up to 25%) and underestimate the dose to the organ at risk (up to 30%). We found similar results (but a bit worse for PB algorithm) for the two targets of the lamb's head where only two beam gantry angles were simulated. Conclusions: our results suggest that in PBS proton therapy range shifter (RS) need to be used with extreme caution when planning the treatment with an analytical algorithm due to potentially great discrepancies between the planned dose and the dose delivered to the patients, also in case of brain tumours where this issue could be underestimated. Our results also suggest that a MC evaluation of the dose has to be performed every time the RS is used and, mostly, when it is used with large air gaps and beam directions tangential to the patient surface. . © 2018 Institute of Physics and Engineering in Medicine.
NASA Astrophysics Data System (ADS)
Anfimov, N.; Anosov, V.; Barth, J.; Chalyshev, V.; Chirikov-Zorin, I.; Dziewiecki, M.; Elsner, D.; Frolov, V.; Frommberger, F.; Guskov, A.; Hillert, W.; Klein, F.; Krumshteyn, Z.; Kurjata, R.; Marzec, J.; Nagaytsev, A.; Olchevski, A.; Orlov, I.; Rezinko, T.; Rybnikov, A.; Rychter, A.; Selyunin, A.; Zaremba, K.; Ziembicki, M.
2015-07-01
The array of 3 × 3 modules of the electromagnetic calorimeter ECAL0 of the COMPASS experiment at CERN has been tested with an electron beam of the ELSA (Germany) facility. The dependence of the response and the energy resolution of the calorimeter from the angle of incidence of the electron beam has been studied. A good agreement between the experimental data and the results of Monte Carlo simulation has been obtained. It will significantly expand the use of simulation to optimize event reconstruction algorithms.
On determination of charge transfer efficiency of thick, fully depleted CCDs with 55 Fe x-rays
Yates, D.; Kotov, I.; Nomerotski, A.
2017-07-01
Charge transfer efficiency (CTE) is one of the most important CCD characteristics. Our paper examines ways to optimize the algorithms used to analyze 55Fe x-ray data on the CCDs, as well as explores new types of observables for CTE determination that can be used for testing LSST CCDs. Furthermore, the observables are modeled employing simple Monte Carlo simulations to determine how the charge diffusion in thick, fully depleted silicon affects the measurement. The data is compared to the simulations for one of the observables, integral flux of the x-ray hit.
Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy
NASA Astrophysics Data System (ADS)
Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan
2018-02-01
Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar micrometre wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch first clinical trials, accurate and efficient dose calculation methods are an inevitable prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular for inhomogeneous materials the hybrid dose calculation algorithm out-performs purely convolution based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.
Stochastic simulation and analysis of biomolecular reaction networks
Frazier, John M; Chushak, Yaroslav; Foy, Brent
2009-01-01
Background In recent years, several stochastic simulation algorithms have been developed to generate Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks. However, the effects of various stochastic simulation and data analysis conditions on the observed dynamics of complex biomolecular reaction networks have not recieved much attention. In order to investigate these issues, we employed a a software package developed in out group, called Biomolecular Network Simulator (BNS), to simulate and analyze the behavior of such systems. The behavior of a hypothetical two gene in vitro transcription-translation reaction network is investigated using the Gillespie exact stochastic algorithm to illustrate some of the factors that influence the analysis and interpretation of these data. Results Specific issues affecting the analysis and interpretation of simulation data are investigated, including: (1) the effect of time interval on data presentation and time-weighted averaging of molecule numbers, (2) effect of time averaging interval on reaction rate analysis, (3) effect of number of simulations on precision of model predictions, and (4) implications of stochastic simulations on optimization procedures. Conclusion The two main factors affecting the analysis of stochastic simulations are: (1) the selection of time intervals to compute or average state variables and (2) the number of simulations generated to evaluate the system behavior. PMID:19534796
Continuous-time quantum Monte Carlo calculation of multiorbital vertex asymptotics
NASA Astrophysics Data System (ADS)
Kaufmann, Josef; Gunacker, Patrik; Held, Karsten
2017-07-01
We derive the equations for calculating the high-frequency asymptotics of the local two-particle vertex function for a multiorbital impurity model. These relate the asymptotics for a general local interaction to equal-time two-particle Green's functions, which we sample using continuous-time quantum Monte Carlo simulations with a worm algorithm. As specific examples we study the single-orbital Hubbard model and the three t2 g orbitals of SrVO3 within dynamical mean-field theory (DMFT). We demonstrate how the knowledge of the high-frequency asymptotics reduces the statistical uncertainties of the vertex and further eliminates finite-box-size effects. The proposed method benefits the calculation of nonlocal susceptibilities in DMFT and diagrammatic extensions of DMFT.
Cross-platform validation and analysis environment for particle physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.
A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for onlinemore » validation of Monte Carlo event samples through a web interface.« less
NASA Astrophysics Data System (ADS)
Trinh, N. D.; Fadil, M.; Lewitowicz, M.; Ledoux, X.; Laurent, B.; Thomas, J.-C.; Clerc, T.; Desmezières, V.; Dupuis, M.; Madeline, A.; Dessay, E.; Grinyer, G. F.; Grinyer, J.; Menard, N.; Porée, F.; Achouri, L.; Delaunay, F.; Parlog, M.
2018-07-01
Double differential neutron spectra (energy, angle) originating from a thick natCu target bombarded by a 12 MeV/nucleon 36S16+ beam were measured by the activation method and the Time-of-flight technique at the Grand Accélérateur National d'Ions Lourds (GANIL). A neutron spectrum unfolding algorithm combining the SAND-II iterative method and Monte-Carlo techniques was developed for the analysis of the activation results that cover a wide range of neutron energies. It was implemented into a graphical user interface program, called GanUnfold. The experimental neutron spectra are compared to Monte-Carlo simulations performed using the PHITS and FLUKA codes.
Monte-Carlo simulation of a stochastic differential equation
NASA Astrophysics Data System (ADS)
Arif, ULLAH; Majid, KHAN; M, KAMRAN; R, KHAN; Zhengmao, SHENG
2017-12-01
For solving higher dimensional diffusion equations with an inhomogeneous diffusion coefficient, Monte Carlo (MC) techniques are considered to be more effective than other algorithms, such as finite element method or finite difference method. The inhomogeneity of diffusion coefficient strongly limits the use of different numerical techniques. For better convergence, methods with higher orders have been kept forward to allow MC codes with large step size. The main focus of this work is to look for operators that can produce converging results for large step sizes. As a first step, our comparative analysis has been applied to a general stochastic problem. Subsequently, our formulization is applied to the problem of pitch angle scattering resulting from Coulomb collisions of charge particles in the toroidal devices.
NASA Astrophysics Data System (ADS)
de Graaf, Joost; Filion, Laura; Marechal, Matthieu; van Roij, René; Dijkstra, Marjolein
2012-12-01
In this paper, we describe the way to set up the floppy-box Monte Carlo (FBMC) method [L. Filion, M. Marechal, B. van Oorschot, D. Pelt, F. Smallenburg, and M. Dijkstra, Phys. Rev. Lett. 103, 188302 (2009), 10.1103/PhysRevLett.103.188302] to predict crystal-structure candidates for colloidal particles. The algorithm is explained in detail to ensure that it can be straightforwardly implemented on the basis of this text. The handling of hard-particle interactions in the FBMC algorithm is given special attention, as (soft) short-range and semi-long-range interactions can be treated in an analogous way. We also discuss two types of algorithms for checking for overlaps between polyhedra, the method of separating axes and a triangular-tessellation based technique. These can be combined with the FBMC method to enable crystal-structure prediction for systems composed of highly shape-anisotropic particles. Moreover, we present the results for the dense crystal structures predicted using the FBMC method for 159 (non)convex faceted particles, on which the findings in [J. de Graaf, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 107, 155501 (2011), 10.1103/PhysRevLett.107.155501] were based. Finally, we comment on the process of crystal-structure prediction itself and the choices that can be made in these simulations.
Mesoscopic Simulations of Crosslinked Polymer Networks
NASA Astrophysics Data System (ADS)
Megariotis, Grigorios; Vogiatzis, Georgios G.; Schneider, Ludwig; Müller, Marcus; Theodorou, Doros N.
2016-08-01
A new methodology and the corresponding C++ code for mesoscopic simulations of elastomers are presented. The test system, crosslinked ds-1’4-polyisoprene’ is simulated with a Brownian Dynamics/kinetic Monte Carlo algorithm as a dense liquid of soft, coarse-grained beads, each representing 5-10 Kuhn segments. From the thermodynamic point of view, the system is described by a Helmholtz free-energy containing contributions from entropic springs between successive beads along a chain, slip-springs representing entanglements between beads on different chains, and non-bonded interactions. The methodology is employed for the calculation of the stress relaxation function from simulations of several microseconds at equilibrium, as well as for the prediction of stress-strain curves of crosslinked polymer networks under deformation.
Nanothermodynamics Applied to Thermal Processes in Heterogeneous Materials
2012-08-03
models agree favorably with a wide range of measurements of local thermal and dynamic properties. Progress in understanding basic thermodynamic...Monte- Carlo (MC) simulations of the Ising model .7 The solid black lines in Fig. 4 show results using the uncorrected (Metropolis) algorithm on the...parameter g=0.5 (green, dash-dot), g=1 (black, solid ), and g=2 (blue, dash-dot-dot). Note the failure of the standard Ising model (g=0) to match
NASA Technical Reports Server (NTRS)
Stalnaker, Dale K.
1993-01-01
ACARA (Availability, Cost, and Resource Allocation) is a computer program which analyzes system availability, lifecycle cost (LCC), and resupply scheduling using Monte Carlo analysis to simulate component failure and replacement. This manual was written to: (1) explain how to prepare and enter input data for use in ACARA; (2) explain the user interface, menus, input screens, and input tables; (3) explain the algorithms used in the program; and (4) explain each table and chart in the output.
Triple collinear emissions in parton showers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Höche, Stefan; Prestel, Stefan
2017-10-01
A framework to include triple collinear splitting functions into parton showers is presented, and the implementation of flavor-changing NLO splitting kernels is discussed as a first application. The correspondence between the Monte-Carlo integration and the analytic computation of NLO DGLAP evolution kernels is made explicit for both timelike and spacelike parton evolution. Numerical simulation results are obtained with two independent implementations of the new algorithm, using the two independent event generation frameworks Pythia and Sherpa.
NASA Astrophysics Data System (ADS)
Rambalakos, Andreas
Current federal aviation regulations in the United States and around the world mandate the need for aircraft structures to meet damage tolerance requirements through out the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity that is accomplished by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed form expressions for the system capacity cumulative distribution function (CDF) are developed by expanding the current expression for the capacity CDF of a parallel system comprised by three elements to a parallel system comprised with up to six elements. These newly developed expressions will be used to check the accuracy of the implementation of a Monte Carlo simulation algorithm to determine the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme by utilizing the residual strength of the fasteners subjected to various initial load distributions and then subjected to a new unequal load distribution resulting from subsequent fastener sequential failures. The final and main objective of this thesis is to present a methodology for computing the resulting gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach. The uncertainties associated with the time to crack initiation, the probability of crack detection, the exponent in the crack propagation rate (Paris equation) and the yield strength of the elements are considered in the analytical model. The structural component is assumed to consist of a prescribed number of elements. This Monte Carlo simulation methodology is used to determine the required non-periodic inspections so that the reliability of the structural component will not fall below a prescribed minimum level. A sensitivity analysis is conducted to determine the effect of three key parameters on the specification of the non-periodic inspection intervals: namely a parameter associated with the time to crack initiation, the applied nominal stress fluctuation and the minimum acceptable reliability level.
NASA Astrophysics Data System (ADS)
Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud
2015-10-01
Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for each electron, positron and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and 18F, 99mTc, 131I and 177Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft-tissues, lung and bone. The figures of merit were γ (3%, 3 mm) and γ (5%, 5 mm) criterions corresponding to the computation comparison on 80 absorbed doses (AD) points per phantom between Monte Carlo simulations and CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. A still good but worse result was found with SCC, since at least 97% of AD-values verified the γ (5%, 5 mm) criterion, except a value of 57% for the 99mTc with the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computation complexity.
Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud
2015-10-21
Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for each electron, positron and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and (18)F, (99m)Tc, (131)I and (177)Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft-tissues, lung and bone. The figures of merit were γ (3%, 3 mm) and γ (5%, 5 mm) criterions corresponding to the computation comparison on 80 absorbed doses (AD) points per phantom between Monte Carlo simulations and CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. A still good but worse result was found with SCC, since at least 97% of AD-values verified the γ (5%, 5 mm) criterion, except a value of 57% for the (99m)Tc with the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computation complexity.
Han, Bin; Xu, X. George; Chen, George T. Y.
2011-01-01
Purpose: Monte Carlo methods are used to simulate and optimize a time-resolved proton range telescope (TRRT) in localization of intrafractional and interfractional motions of lung tumor and in quantification of proton range variations. Methods: The Monte Carlo N-Particle eXtended (MCNPX) code with a particle tracking feature was employed to evaluate the TRRT performance, especially in visualizing and quantifying proton range variations during respiration. Protons of 230 MeV were tracked one by one as they pass through position detectors, patient 4DCT phantom, and finally scintillator detectors that measured residual ranges. The energy response of the scintillator telescope was investigated. Mass density and elemental composition of tissues were defined for 4DCT data. Results: Proton water equivalent length (WEL) was deduced by a reconstruction algorithm that incorporates linear proton track and lateral spatial discrimination to improve the image quality. 4DCT data for three patients were used to visualize and measure tumor motion and WEL variations. The tumor trajectories extracted from the WEL map were found to be within ∼1 mm agreement with direct 4DCT measurement. Quantitative WEL variation studies showed that the proton radiograph is a good representation of WEL changes from entrance to distal of the target. Conclusions:MCNPX simulation results showed that TRRT can accurately track the motion of the tumor and detect the WEL variations. Image quality was optimized by choosing proton energy, testing parameters of image reconstruction algorithm, and comparing to ground truth 4DCT. The future study will demonstrate the feasibility of using the time resolved proton radiography as an imaging tool for proton treatments of lung tumors. PMID:21626923
Event-chain algorithm for the Heisenberg model: Evidence for z≃1 dynamic scaling.
Nishikawa, Yoshihiko; Michel, Manon; Krauth, Werner; Hukushima, Koji
2015-12-01
We apply the event-chain Monte Carlo algorithm to the three-dimensional ferromagnetic Heisenberg model. The algorithm is rejection-free and also realizes an irreversible Markov chain that satisfies global balance. The autocorrelation functions of the magnetic susceptibility and the energy indicate a dynamical critical exponent z≈1 at the critical temperature, while that of the magnetization does not measure the performance of the algorithm. We show that the event-chain Monte Carlo algorithm substantially reduces the dynamical critical exponent from the conventional value of z≃2.
Synchronous parallel spatially resolved stochastic cluster dynamics
Dunn, Aaron; Dingreville, Rémi; Martínez, Enrique; ...
2016-04-23
In this work, a spatially resolved stochastic cluster dynamics (SRSCD) model for radiation damage accumulation in metals is implemented using a synchronous parallel kinetic Monte Carlo algorithm. The parallel algorithm is shown to significantly increase the size of representative volumes achievable in SRSCD simulations of radiation damage accumulation. Additionally, weak scaling performance of the method is tested in two cases: (1) an idealized case of Frenkel pair diffusion and annihilation, and (2) a characteristic example problem including defect cluster formation and growth in α-Fe. For the latter case, weak scaling is tested using both Frenkel pair and displacement cascade damage.more » To improve scaling of simulations with cascade damage, an explicit cascade implantation scheme is developed for cases in which fast-moving defects are created in displacement cascades. For the first time, simulation of radiation damage accumulation in nanopolycrystals can be achieved with a three dimensional rendition of the microstructure, allowing demonstration of the effect of grain size on defect accumulation in Frenkel pair-irradiated α-Fe.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colón, Yamil J.; Gómez-Gualdrón, Diego A.; Snurr, Randall Q.
Metal-organic frameworks (MOFs) are promising materials for a range of energy and environmental applications. Here we describe in detail a computational algorithm and code to generate MOFs based on edge-transitive topological nets for subsequent evaluation via molecular simulation. This algorithm has been previously used by us to construct and evaluate 13 512 MOFs of 41 different topologies for cryo-adsorbed hydrogen storage. Grand canonical Monte Carlo simulations are used here to evaluate the 13 512 structures for the storage of gaseous fuels such as hydrogen and methane and nondistillative separation of xenon/krypton mixtures at various operating conditions. MOF performance for bothmore » gaseous fuel storage and xenon/krypton separation is influenced by topology. Simulation data suggest that gaseous fuel storage performance is topology-dependent due to MOF properties such as void fraction and surface area combining differently in different topologies, whereas xenon/krypton separation performance is topology-dependent due to how topology constrains the pore size distribution.« less
Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy.
Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe
2015-07-07
The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implant in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm(3) calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than a hour). bGPUMCD is a promising code that lets envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy
NASA Astrophysics Data System (ADS)
Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe
2015-07-01
The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implant in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm3 calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than a hour). bGPUMCD is a promising code that lets envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
Mosaicing of airborne LiDAR bathymetry strips based on Monte Carlo matching
NASA Astrophysics Data System (ADS)
Yang, Fanlin; Su, Dianpeng; Zhang, Kai; Ma, Yue; Wang, Mingwei; Yang, Anxiu
2017-09-01
This study proposes a new methodology for mosaicing airborne light detection and ranging (LiDAR) bathymetry (ALB) data based on Monte Carlo matching. Various errors occur in ALB data due to imperfect system integration and other interference factors. To account for these errors, a Monte Carlo matching algorithm based on a nonlinear least-squares adjustment model is proposed. First, the raw data of strip overlap areas were filtered according to their relative drift of depths. Second, a Monte Carlo model and nonlinear least-squares adjustment model were combined to obtain seven transformation parameters. Then, the multibeam bathymetric data were used to correct the initial strip during strip mosaicing. Finally, to evaluate the proposed method, the experimental results were compared with the results of the Iterative Closest Points (ICP) and three-dimensional Normal Distributions Transform (3D-NDT) algorithms. The results demonstrate that the algorithm proposed in this study is more robust and effective. When the quality of the raw data is poor, the Monte Carlo matching algorithm can still achieve centimeter-level accuracy for overlapping areas, which meets the accuracy of bathymetry required by IHO Standards for Hydrographic Surveys Special Publication No.44.
Broecker, Peter; Trebst, Simon
2016-12-01
In the absence of a fermion sign problem, auxiliary-field (or determinantal) quantum Monte Carlo (DQMC) approaches have long been the numerical method of choice for unbiased, large-scale simulations of interacting many-fermion systems. More recently, the conceptual scope of this approach has been expanded by introducing ingenious schemes to compute entanglement entropies within its framework. On a practical level, these approaches, however, suffer from a variety of numerical instabilities that have largely impeded their applicability. Here we report on a number of algorithmic advances to overcome many of these numerical instabilities and significantly improve the calculation of entanglement measures in the zero-temperature projective DQMC approach, ultimately allowing us to reach similar system sizes as for the computation of conventional observables. We demonstrate the applicability of this improved DQMC approach by providing an entanglement perspective on the quantum phase transition from a magnetically ordered Mott insulator to a band insulator in the bilayer square lattice Hubbard model at half filling.
On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
Turbulent combustion can not be simulated adequately by conventional moment closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher order correlations in the conventional moment closure models of the chemical source term can not be neglected, making the applications of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid dependent Monte Carlo scheme was studied, since it is a logical alternative, wherein the number of computer operations increases only linearly with the increase of number of independent variables, as compared to the exponential increase in a conventional finite difference scheme. A new algorithm was devised that satisfies a restriction in the case of pure diffusion or uniform flow problems. Although for nonuniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
COSMOABC: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Ishida, E. E. O.; Vitenti, S. D. P.; Penna-Lima, M.; Cisewski, J.; de Souza, R. S.; Trindade, A. M. M.; Cameron, E.; Busti, V. C.; COIN Collaboration
2015-11-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present COSMOABC, a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing to incorporate arbitrary distance and prior functions. As an example of practical application, we coupled COSMOABC with the NUMCOSMO library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function. COSMOABC is published under the GPLv3 license on PyPI and GitHub and documentation is available at http://goo.gl/SmB8EX.
Monte Carlo simulations for 20 MV X-ray spectrum reconstruction of a linear induction accelerator
NASA Astrophysics Data System (ADS)
Wang, Yi; Li, Qin; Jiang, Xiao-Guo
2012-09-01
To study the spectrum reconstruction of the 20 MV X-ray generated by the Dragon-I linear induction accelerator, the Monte Carlo method is applied to simulate the attenuations of the X-ray in the attenuators of different thicknesses and thus provide the transmission data. As is known, the spectrum estimation from transmission data is an ill-conditioned problem. The method based on iterative perturbations is employed to derive the X-ray spectra, where initial guesses are used to start the process. This algorithm takes into account not only the minimization of the differences between the measured and the calculated transmissions but also the smoothness feature of the spectrum function. In this work, various filter materials are put to use as the attenuator, and the condition for an accurate and robust solution of the X-ray spectrum calculation is demonstrated. The influences of the scattering photons within different intervals of emergence angle on the X-ray spectrum reconstruction are also analyzed.
An Uncertainty Quantification Framework for Remote Sensing Retrievals
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Hobbs, J.
2017-12-01
Remote sensing data sets produced by NASA and other space agencies are the result of complex algorithms that infer geophysical state from observed radiances using retrieval algorithms. The processing must keep up with the downlinked data flow, and this necessitates computational compromises that affect the accuracies of retrieved estimates. The algorithms are also limited by imperfect knowledge of physics and of ancillary inputs that are required. All of this contributes to uncertainties that are generally not rigorously quantified by stepping outside the assumptions that underlie the retrieval methodology. In this talk we discuss a practical framework for uncertainty quantification that can be applied to a variety of remote sensing retrieval algorithms. Ours is a statistical approach that uses Monte Carlo simulation to approximate the sampling distribution of the retrieved estimates. We will discuss the strengths and weaknesses of this approach, and provide a case-study example from the Orbiting Carbon Observatory 2 mission.
A simplified analytical random walk model for proton dose calculation
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
We propose an analytical random walk model for proton dose calculation in a laterally homogeneous medium. A formula for the spatial fluence distribution of primary protons is derived. The variance of the spatial distribution is in the form of a distance-squared law of the angular distribution. To improve the accuracy of dose calculation in the Bragg peak region, the energy spectrum of the protons is used. The accuracy is validated against Monte Carlo simulation in water phantoms with either air gaps or a slab of bone inserted. The algorithm accurately reflects the dose dependence on the depth of the bone and can deal with small-field dosimetry. We further applied the algorithm to patients’ cases in the highly heterogeneous head and pelvis sites and used a gamma test to show the reasonable accuracy of the algorithm in these sites. Our algorithm is fast for clinical use.
NASA Astrophysics Data System (ADS)
Schulthess, Thomas C.
2013-03-01
The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.
Hybrid stochastic simulations of intracellular reaction-diffusion systems.
Kalantzis, Georgios
2009-06-01
With the observation that stochasticity is important in biological systems, chemical kinetics have begun to receive wider interest. While the use of Monte Carlo discrete event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. On the other hand, continuous time models are computationally efficient but they fail to capture any variability in the molecular species. In this study a hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed an adaptive partitioning strategy in which processes with high frequency are simulated with deterministic rate-based equations, and those with low frequency using the exact stochastic algorithm of Gillespie. Therefore the stochastic behavior of cellular pathways is preserved while being able to apply it to large populations of molecules. We describe our method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems. First, a model of intracellular viral kinetics with two steady states and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca+2 and NMDA receptors.
Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fractionmore » of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and comparison of Paracousti-UQ results with MC simulations using simple models.« less
The Impact of Monte Carlo Dose Calculations on Intensity-Modulated Radiation Therapy
NASA Astrophysics Data System (ADS)
Siebers, J. V.; Keall, P. J.; Mohan, R.
The effect of dose calculation accuracy for IMRT was studied by comparing different dose calculation algorithms. A head and neck IMRT plan was optimized using a superposition dose calculation algorithm. Dose was re-computed for the optimized plan using both Monte Carlo and pencil beam dose calculation algorithms to generate patient and phantom dose distributions. Tumor control probabilities (TCP) and normal tissue complication probabilities (NTCP) were computed to estimate the plan outcome. For the treatment plan studied, Monte Carlo best reproduces phantom dose measurements, the TCP was slightly lower than the superposition and pencil beam results, and the NTCP values differed little.
Dynamic partitioning for hybrid simulation of the bistable HIV-1 transactivation network.
Griffith, Mark; Courtney, Tod; Peccoud, Jean; Sanders, William H
2006-11-15
The stochastic kinetics of a well-mixed chemical system, governed by the chemical Master equation, can be simulated using the exact methods of Gillespie. However, these methods do not scale well as systems become more complex and larger models are built to include reactions with widely varying rates, since the computational burden of simulation increases with the number of reaction events. Continuous models may provide an approximate solution and are computationally less costly, but they fail to capture the stochastic behavior of small populations of macromolecules. In this article we present a hybrid simulation algorithm that dynamically partitions the system into subsets of continuous and discrete reactions, approximates the continuous reactions deterministically as a system of ordinary differential equations (ODE) and uses a Monte Carlo method for generating discrete reaction events according to a time-dependent propensity. Our approach to partitioning is improved such that we dynamically partition the system of reactions, based on a threshold relative to the distribution of propensities in the discrete subset. We have implemented the hybrid algorithm in an extensible framework, utilizing two rigorous ODE solvers to approximate the continuous reactions, and use an example model to illustrate the accuracy and potential speedup of the algorithm when compared with exact stochastic simulation. Software and benchmark models used for this publication can be made available upon request from the authors.
An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm
NASA Astrophysics Data System (ADS)
Jacques, Robert; McNutt, Todd
2014-03-01
Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to a faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near-Monte-Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing it from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e. < 0.1 g/cm³) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density-scaled heterogeneity correction with a position- and direction-sensitive density filter. This method significantly improved the accuracy of the GPU-based algorithm, reaching the accuracy levels of Monte Carlo based methods with run times of a few tenths of a second per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
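The HCS filter itself is not fully specified in the abstract; the Python sketch below only illustrates the general ingredient it names, a direction-sensitive first-order recursive (IIR) filter applied to the density profile along the beam direction, using an assumed smoothing constant and a made-up water slab with a lung-like insert.

    import numpy as np

    def recursive_density_filter(rho, alpha):
        # First-order recursive (IIR) filter along the beam direction:
        #   rho_eff[i] = alpha * rho[i] + (1 - alpha) * rho_eff[i - 1]
        # Smaller alpha "smears" density changes downstream, mimicking the delayed
        # fall-off and re-buildup of dose near interfaces.
        rho_eff = np.empty_like(rho)
        rho_eff[0] = rho[0]
        for i in range(1, len(rho)):
            rho_eff[i] = alpha * rho[i] + (1.0 - alpha) * rho_eff[i - 1]
        return rho_eff

    # Water slab with a low-density (lung-like) insert between 5 and 10 cm depth.
    z = np.linspace(0.0, 20.0, 401)                        # depth [cm]
    rho = np.where((z > 5.0) & (z < 10.0), 0.25, 1.0)      # density [g/cm^3]

    rho_eff = recursive_density_filter(rho, alpha=0.15)
    print("true density at 6 cm:    ", rho[z.searchsorted(6.0)])
    print("filtered density at 6 cm:", round(rho_eff[z.searchsorted(6.0)], 3))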
A Christoffel function weighted least squares algorithm for collocation approximations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayan, Akil; Jakeman, John D.; Zhou, Tao
2016-11-28
Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
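A minimal one-dimensional Python sketch of the sampling strategy described above, assuming the Legendre (uniform-measure) polynomial family on [-1, 1]: samples are drawn from the arcsine equilibrium measure of the interval rather than the uniform density, and the least-squares problem is weighted with the (inverse) Christoffel function. The target function, degree and sample count are arbitrary choices for illustration.

    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(1)

    def f(x):                       # function to approximate (assumed for the demo)
        return np.exp(x) * np.sin(3 * x)

    N = 20                          # number of basis polynomials (degree N - 1)
    M = 200                         # number of Monte Carlo samples

    # Sample from the equilibrium measure of [-1, 1] (arcsine / Chebyshev density)
    # instead of the uniform orthogonality measure of the Legendre family.
    x = np.cos(np.pi * rng.random(M))

    # Orthonormal Legendre basis evaluated at the samples.
    V = legendre.legvander(x, N - 1) * np.sqrt(2 * np.arange(N) + 1)

    # Christoffel-function weights: w(x) = N / sum_n p_n(x)^2.
    w = N / np.sum(V**2, axis=1)

    # Weighted least-squares solve.
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(V * sw[:, None], f(x) * sw, rcond=None)

    # Check the approximation error on a dense grid.
    xg = np.linspace(-1, 1, 1000)
    Vg = legendre.legvander(xg, N - 1) * np.sqrt(2 * np.arange(N) + 1)
    print("max abs error:", np.abs(Vg @ coef - f(xg)).max())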
Equation-free multiscale computation: algorithms and applications.
Kevrekidis, Ioannis G; Samaey, Giovanni
2009-01-01
In traditional physicochemical modeling, one derives evolution equations at the (macroscopic, coarse) scale of interest; these are used to perform a variety of tasks (simulation, bifurcation analysis, optimization) using an arsenal of analytical and numerical techniques. For many complex systems, however, although one observes evolution at a macroscopic scale of interest, accurate models are only given at a more detailed (fine-scale, microscopic) level of description (e.g., lattice Boltzmann, kinetic Monte Carlo, molecular dynamics). Here, we review a framework for computer-aided multiscale analysis, which enables macroscopic computational tasks (over extended spatiotemporal scales) using only appropriately initialized microscopic simulation on short time and length scales. The methodology bypasses the derivation of macroscopic evolution equations when these equations conceptually exist but are not available in closed form-hence the term equation-free. We selectively discuss basic algorithms and underlying principles and illustrate the approach through representative applications. We also discuss potential difficulties and outline areas for future research.
Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE
Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2013-01-01
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
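The following Python sketch illustrates the Monte-Carlo SURE idea on a simplified, real-valued denoising problem rather than the complex-valued k-space formulation of the paper: the divergence of a black-box reconstruction is estimated with a single random probe, and the resulting risk estimate is used to pick a regularization parameter. The signal, the quadratic-smoothing denoiser and the noise level are assumptions made for the demo.

    import numpy as np

    rng = np.random.default_rng(2)

    n, sigma = 256, 0.3
    t = np.linspace(0, 1, n)
    x_true = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 7 * t))
    y = x_true + sigma * rng.normal(size=n)

    # Black-box reconstruction: quadratic-smoothing (Tikhonov) denoiser.
    D = np.diff(np.eye(n), 2, axis=0)            # second-difference operator
    def reconstruct(y, lam):
        return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

    def mc_sure(y, lam, eps=1e-3):
        # Monte-Carlo SURE: estimate the divergence of the reconstruction with a
        # single random probe b, div f ~ b^T (f(y + eps*b) - f(y)) / eps.
        fy = reconstruct(y, lam)
        b = rng.normal(size=y.size)
        div = b @ (reconstruct(y + eps * b, lam) - fy) / eps
        return np.mean((fy - y) ** 2) - sigma**2 + 2 * sigma**2 * div / y.size

    lams = np.logspace(-1, 3, 30)
    sure = [mc_sure(y, lam) for lam in lams]
    mse = [np.mean((reconstruct(y, lam) - x_true) ** 2) for lam in lams]
    print("lambda minimizing SURE:", lams[int(np.argmin(sure))])
    print("lambda minimizing MSE :", lams[int(np.argmin(mse))])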
NASA Astrophysics Data System (ADS)
Curotto, E.
2015-12-01
Structural optimizations, classical NVT ensemble, and variational Monte Carlo simulations of ion Stockmayer clusters parameterized to approximate the Li+(CH3NO2)n (n = 1-20) systems are performed. The Metropolis algorithm enhanced by the parallel tempering strategy is used to measure internal energies and heat capacities, and a parallel version of the genetic algorithm is employed to obtain the most important minima. The first solvation sheath is octahedral and this feature remains the dominant theme in the structure of clusters with n ≥ 6. The first "magic number" is identified using the adiabatic solvent dissociation energy, and it marks the completion of the second solvation layer for the lithium ion-nitromethane clusters. It corresponds to the n = 18 system, a solvated ion with the first sheath having octahedral symmetry, weakly bound to an eight-membered and a four-membered ring crowning a vertex of the octahedron. Variational Monte Carlo estimates of the adiabatic solvent dissociation energy reveal that quantum effects further enhance the stability of the n = 18 system relative to its neighbors.
Inverse Beta Decay Reconstruction in the Double Chooz Monte Carlo
NASA Astrophysics Data System (ADS)
Norrick, Anne
2010-02-01
The Double Chooz Experiment will search for neutrino oscillations using the ``Inverse Beta-Decay'' (IBD) interactions of electron antineutrinos from a nuclear reactor in Chooz, France. The experiment needs to isolate IBD events by detecting and reconstructing the positions and deposited energies of the outgoing positron and neutron. Methods for isolating this process will be described. In addition, results of simulation studies of two different reconstruction algorithms will be presented and their performances compared.
Enhanced calculation of eigen-stress field and elastic energy in atomistic interdiffusion of alloys
NASA Astrophysics Data System (ADS)
Cecilia, José M.; Hernández-Díaz, A. M.; Castrillo, Pedro; Jiménez-Alonso, J. F.
2017-02-01
The structural evolution of alloys is affected by the elastic energy associated with eigen-stress fields. However, efficient calculation of the elastic energy in evolving geometries remains a great challenge for promising atomistic simulation techniques such as Kinetic Monte Carlo (KMC) methods. In this paper, we report two complementary algorithms to calculate the eigen-stress field by linear superposition (the Linear Superposition Algorithm, LSA) and the elastic energy modification in atomistic interdiffusion of alloys (the Atom Exchange Elastic Energy Evaluation (AE4) Algorithm). LSA is shown to be appropriate for fast incremental stress calculation in highly nanostructured materials, whereas AE4 provides the required input for KMC and, additionally, can be used to evaluate the accuracy of the eigen-stress field calculated by LSA. Consequently, they are suitable to be used on-the-fly with KMC. Both algorithms are massively parallel by definition and thus well suited for parallelization on modern Graphics Processing Units (GPUs). Our computational studies confirm that we can obtain significant improvements compared to conventional Finite Element Methods, and the utilization of GPUs opens up new possibilities for the development of these methods in atomistic simulation of materials.
Choice of crystal surface finishing for a dual-ended readout depth-of-interaction (DOI) detector.
Fan, Peng; Ma, Tianyu; Wei, Qingyang; Yao, Rutao; Liu, Yaqiang; Wang, Shi
2016-02-07
The objective of this study was to choose the crystal surface finishing for a dual-ended readout (DER) DOI detector. Through Monte Carlo simulations and experimental studies, we evaluated four crystal surface finishing options as combinations of crystal surface polishing (diffuse or specular) and reflector (diffuse or specular) options on a DER detector. We also tested one linear and one logarithmic DOI calculation algorithm. The figures of merit used were DOI resolution, DOI positioning error, and energy resolution. Both the simulation and experimental results show that (1) choosing a diffuse type in either surface polishing or reflector would improve DOI resolution but degrade energy resolution; (2) crystal surface finishing with a diffuse polishing combined with a specular reflector appears to be a favorable candidate with a good balance of DOI and energy resolution; and (3) the linear and logarithmic DOI calculation algorithms show overall comparable DOI error, with the linear algorithm better for photon interactions near the ends of the crystal and the logarithmic algorithm better near the center. These results provide useful guidance for DER DOI detector design in choosing the crystal surface finishing and DOI calculation methods.
New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation
NASA Astrophysics Data System (ADS)
Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W.
2015-02-01
In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient’s 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry.
Mode Shape Estimation Algorithms Under Ambient Conditions: A Comparative Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dosiek, Luke; Zhou, Ning; Pierre, John W.
This paper provides a comparative review of five existing ambient electromechanical mode shape estimation algorithms, i.e., the Transfer Function (TF), Spectral, Frequency Domain Decomposition (FDD), Channel Matching, and Subspace Methods. It is also shown that the TF Method is a general approach to estimating mode shape and that the Spectral, FDD, and Channel Matching Methods are actually special cases of it. Additionally, some of the variations of the Subspace Method are reviewed and the Numerical algorithm for Subspace State Space System IDentification (N4SID) is implemented. The five algorithms are then compared using data simulated from a 17-machine model of the Western Electricity Coordinating Council (WECC) under ambient conditions with both low and high damping, as well as during the case where ambient data is disrupted by an oscillatory ringdown. The performance of the algorithms is compared using the statistics from Monte Carlo simulations and results from measured WECC data, and a discussion of the practical issues surrounding their implementation, including cases where power system probing is an option, is provided. The paper concludes with some recommendations as to the appropriate use of the various techniques. Index Terms: Electromechanical mode shape, small-signal stability, phasor measurement units (PMU), system identification, N4SID, subspace.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4
2015-01-15
Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations that are a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S{sub MC}. S{sub MC} is fit to a function, S{sub F}, and if the fit of S{sub F} is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S{sub F}, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter corrected reconstruction to the original uncorrected and constant-scatter-corrected reconstructions, as well as a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson’s correlation, r, proved to be a suitable GOF metric with strong correlation with the actual error of the scatter fit, S{sub F}. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
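As an illustration of the fitting step only (not of the concurrent simulation framework), the Python sketch below fits a noisy, Monte-Carlo-like scatter estimate on an assumed detector grid to a frequency-limited sum of sines and cosines via a low-pass filtered FFT, and uses Pearson's correlation as the goodness-of-fit metric; the scatter shape, grid size and frequency cutoff are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)

    # "True" smooth scatter distribution on a 64 x 64 detector (assumed shape).
    u, v = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
    s_true = np.exp(-(u**2 + v**2)) * (1.0 + 0.2 * np.cos(2 * np.pi * u))

    # Noisy Monte Carlo estimate obtained from a small number of photon histories.
    s_mc = rng.poisson(200 * s_true) / 200.0

    def lowpass_fit(s, keep=4):
        # Fit to a frequency-limited sum of sines/cosines by zeroing all but the
        # lowest `keep` spatial frequencies of the 2-D FFT (illustrative cutoff).
        F = np.fft.fft2(s)
        mask = np.zeros_like(F)
        mask[:keep, :keep] = mask[:keep, -keep:] = 1
        mask[-keep:, :keep] = mask[-keep:, -keep:] = 1
        return np.real(np.fft.ifft2(F * mask))

    s_fit = lowpass_fit(s_mc)
    r = np.corrcoef(s_fit.ravel(), s_mc.ravel())[0, 1]   # Pearson goodness of fit
    err = np.abs(s_fit - s_true).max() / s_true.max()
    print(f"Pearson r = {r:.3f}, max relative error vs. truth = {err:.3f}")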
A non-stochastic iterative computational method to model light propagation in turbid media
NASA Astrophysics Data System (ADS)
McIntyre, Thomas J.; Zemp, Roger J.
2015-03-01
Monte Carlo models are widely used to model light transport in turbid media, however their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems where Jacobian matrix errors can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, which is called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were done and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.
NASA Astrophysics Data System (ADS)
Mayvan, Ali D.; Aghaeinia, Hassan; Kazemi, Mohammad
2017-12-01
This paper focuses on robust transceiver design for throughput enhancement on the interference channel (IC), under imperfect channel state information (CSI). In this paper, two algorithms are proposed to improve the throughput of the multi-input multi-output (MIMO) IC. Each transmitter and receiver has, respectively, M and N antennas and IC operates in a time division duplex mode. In the first proposed algorithm, each transceiver adjusts its filter to maximize the expected value of signal-to-interference-plus-noise ratio (SINR). On the other hand, the second algorithm tries to minimize the variances of the SINRs to hedge against the variability due to CSI error. Taylor expansion is exploited to approximate the effect of CSI imperfection on mean and variance. The proposed robust algorithms utilize the reciprocity of wireless networks to optimize the estimated statistical properties in two different working modes. Monte Carlo simulations are employed to investigate sum rate performance of the proposed algorithms and the advantage of incorporating variation minimization into the transceiver design.
Kwon, Sungchul; Kim, Yup
2013-01-01
We investigate epidemic spreading in annealed directed scale-free networks with in-degree (k) distribution P_in(k) ~ k^(-γ_in) and out-degree (ℓ) distribution P_out(ℓ) ~ ℓ^(-γ_out). The correlation
Comparison of different tree sap flow up-scaling procedures using Monte-Carlo simulations
NASA Astrophysics Data System (ADS)
Tatarinov, Fyodor; Preisler, Yakir; Roahtyn, Shani; Yakir, Dan
2015-04-01
An important task in determining forest ecosystem water balance is the estimation of stand transpiration, which allows evapotranspiration to be separated into transpiration and soil evaporation. This can be based on up-scaling measurements of sap flow in representative trees (SF), which can be done by different mathematical algorithms. The aim of the present study was to evaluate the error associated with different up-scaling algorithms under different conditions. Other types of errors (such as measurement error, within-tree SF variability, choice of sample plot, etc.) were not considered here. A set of simulation experiments using the Monte Carlo technique was carried out and three up-scaling procedures were tested: (1) multiplying the mean stand sap flux density per unit sapwood cross-section area (SFD) by the total sapwood area (Klein et al., 2014); (2) deriving a linear dependence of tree sap flow on tree DBH and calculating SFstand from the SF predicted for DBH classes and the stand DBH distribution (Cermak et al., 2004); and (3) the same as method 2 but using a non-linear dependence. Simulations were performed under different SFD(DBH) slopes (bs: positive, negative, zero), different DBH and SFD standard deviations (Δd and Δs, respectively) and different DBH class sizes. It was assumed that all trees in a unit area are measured, and the total SF of all trees in the experimental plot was taken as the reference SFstand value. Under negative bs all models tend to overestimate SFstand and the error increases exponentially with decreasing bs. Under bs > 0 all models tend to underestimate SFstand, but the error is much smaller than for bs
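A minimal Python sketch of this kind of Monte-Carlo experiment, for up-scaling procedure 1 only; the stand size, allometry, noise levels and number of sampled trees are assumptions chosen for illustration rather than the values used in the study.

    import numpy as np

    rng = np.random.default_rng(4)

    def mean_relative_error(b_s, n_trees=200, n_sample=8, n_mc=2000):
        # Relative error of up-scaling method 1 (mean sap flux density of the
        # sampled trees times total sapwood area) against the reference stand sap
        # flow summed over all trees, averaged over Monte Carlo replicates.
        errors = []
        for _ in range(n_mc):
            dbh = rng.normal(25.0, 5.0, n_trees).clip(5.0)        # DBH [cm]
            sapwood = 0.002 * dbh**2                              # sapwood area [m^2]
            sfd = (30.0 + b_s * (dbh - 25.0)                      # SFD [kg m^-2 h^-1]
                   + rng.normal(0.0, 5.0, n_trees)).clip(0.1)
            sf_true = np.sum(sfd * sapwood)                       # reference SFstand
            idx = rng.choice(n_trees, n_sample, replace=False)    # measured trees
            sf_est = sfd[idx].mean() * sapwood.sum()              # method 1 estimate
            errors.append((sf_est - sf_true) / sf_true)
        return np.mean(errors)

    for b_s in (-1.0, 0.0, 1.0):
        print(f"SFD(DBH) slope bs = {b_s:+.1f}: mean relative error = "
              f"{mean_relative_error(b_s):+.3f}")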
Sechopoulos, Ioannis; Ali, Elsayed S M; Badal, Andreu; Badano, Aldo; Boone, John M; Kyprianou, Iacovos S; Mainegra-Hing, Ernesto; McMillan, Kyle L; McNitt-Gray, Michael F; Rogers, D W O; Samei, Ehsan; Turner, Adam C
2015-10-01
The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to its flexibility and ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but, typically, such validation involves either comparison of simulation results to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated due to nuances of experimental conditions and uncertainty, while the latter is challenging due to typical graphical presentation and lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations in general involves a steep learning curve. It is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases. This work provides an investigator the necessary information to benchmark his/her Monte Carlo simulation software against the reference cases included here before performing his/her own novel research. In addition, an investigator entering the field of Monte Carlo simulations can use these descriptions and results as a self-teaching tool to ensure that he/she is able to perform a specific simulation correctly. Finally, educators can assign these cases as learning projects as part of course objectives or training programs.
CatSim: a new computer assisted tomography simulation environment
NASA Astrophysics Data System (ADS)
De Man, Bruno; Basu, Samit; Chandra, Naveen; Dunham, Bruce; Edic, Peter; Iatrou, Maria; McOlash, Scott; Sainath, Paavana; Shaughnessy, Charlie; Tower, Brendon; Williams, Eugene
2007-03-01
We present a new simulation environment for X-ray computed tomography, called CatSim. CatSim provides a research platform for GE researchers and collaborators to explore new reconstruction algorithms, CT architectures, and X-ray source or detector technologies. The main requirements for this simulator are accurate physics modeling, low computation times, and geometrical flexibility. CatSim allows simulating complex analytic phantoms, such as the FORBILD phantoms, including boxes, ellipsoids, elliptical cylinders, cones, and cut planes. CatSim incorporates polychromaticity, realistic quantum and electronic noise models, finite focal spot size and shape, finite detector cell size, detector cross-talk, detector lag or afterglow, bowtie filtration, finite detector efficiency, non-linear partial volume, scatter (variance-reduced Monte Carlo), and absorbed dose. We present an overview of CatSim along with a number of validation experiments.
Reactive Monte Carlo sampling with an ab initio potential
Leiding, Jeff; Coe, Joshua D.
2016-05-04
Here, we present the first application of reactive Monte Carlo in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We also discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH3 to those of ab initio molecular dynamics (AIMD). Furthermore, we find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the “rare-event” character of chemical reactions.
Designing image segmentation studies: Statistical power, sample size and reference standard quality.
Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C
2017-12-01
Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference of less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
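The paper derives an analytical power formula; the Python sketch below shows only the complementary Monte-Carlo route to the same quantity, simulating paired per-subject accuracies for two algorithms and counting significant paired t-tests. The accuracy mean, spread and inter-algorithm correlation are assumed values, not figures from the study.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    def simulated_power(n_subjects, delta, sd=0.03, rho=0.6, alpha=0.05, n_mc=5000):
        # Monte-Carlo power estimate for a paired comparison of two segmentation
        # algorithms whose per-subject accuracies differ by `delta` on average.
        cov = sd**2 * np.array([[1.0, rho], [rho, 1.0]])
        hits = 0
        for _ in range(n_mc):
            acc = rng.multivariate_normal([0.85, 0.85 + delta], cov, size=n_subjects)
            hits += stats.ttest_rel(acc[:, 0], acc[:, 1]).pvalue < alpha
        return hits / n_mc

    for n in (10, 20, 40):
        print(f"n = {n:3d} subjects -> power ~ {simulated_power(n, delta=0.02):.2f}")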
NASA Technical Reports Server (NTRS)
Pendleton, Geoffrey N.; Paciesas, William S.; Mallozzi, Robert S.; Koshut, Tom M.; Fishman, Gerald J.; Meegan, Charles A.; Wilson, Robert B.; Horack, John M.; Lestrade, John Patrick
1995-01-01
The detector response matrices for the Burst And Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory (CGRO) are described, including their creation and operation in data analysis. These response matrices are a detailed abstract representation of the gamma-ray detectors' operating characteristics that are needed for data analysis. They are constructed from an extensive set of calibration data coupled with a complex geometry electromagnetic cascade Monte Carlo simulation code. The calibration tests and simulation algorithm optimization are described. The characteristics of the BATSE detectors in the spacecraft environment are also described.
Modeling of skin cancer dermatoscopy images
NASA Astrophysics Data System (ADS)
Iralieva, Malica B.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.
2018-04-01
An early identified cancer is more likely to respond effectively to treatment and is less expensive to treat. Dermatoscopy is one of the general diagnostic techniques for early skin cancer detection that allows in vivo evaluation of colors and microstructures of skin lesions. Digital phantoms with known properties are required during the development of new instruments to compare a sample's features with data from the instrument. An algorithm for modeling skin cancer images is proposed in this paper. The steps of the algorithm include setting the shape, generating the texture, adding the texture, and setting the normal skin background. A Gaussian represents the shape, texture generation based on a fractal noise algorithm is responsible for the spatial chromophore distributions, and the colormap applied to the values corresponds to the spectral properties. Finally, a normal skin image simulated by a mixed Monte Carlo method using a special online tool is added as the background. Varying the Asymmetry, Border, Color and Diameter settings is shown to match the ABCD clinical recognition algorithm. The asymmetry is specified by setting different standard deviation values of the Gaussian in different parts of the image. The noise amplitude is increased to set the irregular-border score. The standard deviation is changed to determine the size of the lesion. Colors are set by changing the colormap. An algorithm for simulating different structural elements is required to match other recognition algorithms.
NASA Astrophysics Data System (ADS)
López-Coto, R.; Mazin, D.; Paoletti, R.; Blanch Bigas, O.; Cortina, J.
2016-04-01
Imaging atmospheric Cherenkov telescopes (IACTs) such as the Major Atmospheric Gamma-ray Imaging Cherenkov (MAGIC) telescopes endeavor to reach the lowest possible energy threshold. In doing so, the trigger system is a key element. Reducing the trigger threshold is hampered by the rapid increase of accidental triggers generated by ambient light (the so-called Night Sky Background, NSB). In this paper we present a topological trigger, dubbed Topo-trigger, which rejects events on the basis of their relative orientation in the telescope cameras. We have simulated and tested the trigger selection algorithm in the MAGIC telescopes. The algorithm was tested using Monte Carlo simulations and shows a rejection of 85% of the accidental stereo triggers while preserving 99% of the gamma rays. A full implementation of this trigger system would achieve an increase in collection area between 10 and 20% at the energy threshold. The analysis energy threshold of the instrument is expected to decrease by ~8%. The selection algorithm was tested on real MAGIC data taken with the current trigger configuration and no γ-like events were found to be lost.
Viterbi equalization for long-distance, high-speed underwater laser communication
NASA Astrophysics Data System (ADS)
Hu, Siqi; Mi, Le; Zhou, Tianhua; Chen, Weibiao
2017-07-01
In long-distance, high-speed underwater laser communication, because of the strong absorption and scattering processes, the laser pulse is stretched with the increase in communication distance and the decrease in water clarity. The maximum communication bandwidth is limited by laser-pulse stretching. Improving the communication rate increases the intersymbol interference (ISI). To reduce the effect of ISI, the Viterbi equalization (VE) algorithm is used to estimate the maximum-likelihood receiving sequence. The Monte Carlo method is used to simulate the stretching of the received laser pulse and the maximum communication rate at a wavelength of 532 nm in Jerlov IB and Jerlov II water channels with communication distances of 80, 100, and 130 m, respectively. The high-data rate communication performance for the VE and hard-decision algorithms is compared. The simulation results show that the VE algorithm can be used to reduce the ISI by selecting the minimum error path. The trade-off between the high-data rate communication performance and minor bit-error rate performance loss makes VE a promising option for applications in long-distance, high-speed underwater laser communication systems.
Stochastic Inversion of 2D Magnetotelluric Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong
2010-07-01
The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp-boundary parametrization using a Bayesian framework. Within the algorithm, we consider the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of 2D conductivity structure. The unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov Chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, Supercomputer, Multi-platform, Workstation; Software requirements: C and Fortran; Operating systems/versions: Linux/Unix or Windows
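A generic Metropolis MCMC sketch in Python on a toy two-parameter inverse problem, standing in for (and far simpler than) the finite-element MT forward model and geostatistical prior used in the code; the forward model, noise level, proposal widths and prior bound are all assumptions.

    import numpy as np

    rng = np.random.default_rng(6)

    # Toy forward model standing in for the finite-element MT solver: a smooth
    # response at a few frequencies given two unknowns (a log-resistivity and an
    # interface depth).  Entirely illustrative.
    freqs = np.array([0.01, 0.1, 1.0, 10.0])
    def forward(m):
        log_rho, depth = m
        return log_rho + np.exp(-depth * np.sqrt(freqs))

    m_true = np.array([2.0, 1.5])
    data = forward(m_true) + 0.05 * rng.normal(size=freqs.size)

    def log_post(m, sigma=0.05):
        if not (0.0 < m[1] < 10.0):      # simple prior bound on the depth
            return -np.inf
        r = data - forward(m)
        return -0.5 * np.sum(r**2) / sigma**2

    # Metropolis sampler exploring the joint posterior of the unknowns.
    m = np.array([1.0, 3.0])
    lp = log_post(m)
    chain = []
    for _ in range(20000):
        prop = m + rng.normal(scale=[0.05, 0.1])
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            m, lp = prop, lp_prop
        chain.append(m.copy())
    chain = np.array(chain[5000:])       # discard burn-in
    print("posterior mean:", chain.mean(axis=0))
    print("posterior std :", chain.std(axis=0))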
Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios
2017-02-01
Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.
SPARTAN: A High-Fidelity Simulation for Automated Rendezvous and Docking Applications
NASA Technical Reports Server (NTRS)
Turbe, Michael A.; McDuffie, James H.; DeKock, Brandon K.; Betts, Kevin M.; Carrington, Connie K.
2007-01-01
bd Systems (a subsidiary of SAIC) has developed the Simulation Package for Autonomous Rendezvous Test and ANalysis (SPARTAN), a high-fidelity on-orbit simulation featuring multiple six-degree-of-freedom (6DOF) vehicles. SPARTAN has been developed in a modular fashion in Matlab/Simulink to test next-generation automated rendezvous and docking guidance, navigation, and control algorithms for NASA's new Vision for Space Exploration. SPARTAN includes autonomous state-based mission manager algorithms responsible for sequencing the vehicle through various flight phases based on on-board sensor inputs and closed-loop guidance algorithms, including Lambert transfers, Clohessy-Wiltshire maneuvers, and glideslope approaches. The guidance commands are implemented using an integrated translation and attitude control system to provide 6DOF control of each vehicle in the simulation. SPARTAN also includes high-fidelity representations of a variety of absolute and relative navigation sensors that may be used for NASA missions, including radio frequency, lidar, and video-based rendezvous sensors. Proprietary navigation sensor fusion algorithms have been developed that allow the integration of these sensor measurements through an extended Kalman filter framework to create a single optimal estimate of the relative state of the vehicles. SPARTAN provides capability for Monte Carlo dispersion analysis, allowing for rigorous evaluation of the performance of the complete proposed AR&D system, including software, sensors, and mechanisms. SPARTAN also supports hardware-in-the-loop testing through conversion of the algorithms to C code using Real-Time Workshop in order to be hosted in a mission computer engineering development unit running an embedded real-time operating system. SPARTAN also contains both a runtime TCP/IP socket interface and post-processing compatibility with bdStudio, a visualization tool developed by bd Systems, allowing for intuitive evaluation of simulation results. A description of the SPARTAN architecture and capabilities is provided, along with details on the models and algorithms utilized and results from representative missions.
A Monte Carlo Approach for Adaptive Testing with Content Constraints
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander
2008-01-01
This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…
Finite element model updating using the shadow hybrid Monte Carlo technique
NASA Astrophysics Data System (ADS)
Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.
2015-02-01
Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques to deal with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) algorithm offers a very important MCMC approach to dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC, designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and are compared to the application of the HMC algorithm to the same structures.
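For orientation, the Python sketch below implements plain Hybrid Monte Carlo (leapfrog molecular-dynamics proposals followed by a Metropolis accept/reject step on the total Hamiltonian) on a toy two-parameter posterior; the shadow-Hamiltonian modification of SHMC and the structural FEM posterior are not reproduced here, and the target distribution, trajectory length and time step are assumptions.

    import numpy as np

    rng = np.random.default_rng(7)

    # Toy log-posterior over two "updating parameters" (a correlated Gaussian);
    # the real FEM-updating posterior would replace these two functions.
    Cinv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
    def neg_log_post(q):
        return 0.5 * q @ Cinv @ q
    def grad(q):
        return Cinv @ q

    def hmc_step(q, n_leap=20, dt=0.15):
        # One HMC move: refresh momenta, run a leapfrog (MD) trajectory, then
        # accept or reject the end point with a Metropolis test on H = U + K.
        p = rng.normal(size=q.size)
        H0 = neg_log_post(q) + 0.5 * p @ p
        q_new, p_new = q.copy(), p.copy()
        p_new -= 0.5 * dt * grad(q_new)
        for _ in range(n_leap - 1):
            q_new += dt * p_new
            p_new -= dt * grad(q_new)
        q_new += dt * p_new
        p_new -= 0.5 * dt * grad(q_new)
        H1 = neg_log_post(q_new) + 0.5 * p_new @ p_new
        return (q_new, True) if np.log(rng.random()) < H0 - H1 else (q, False)

    q, accepted, samples = np.zeros(2), 0, []
    for _ in range(5000):
        q, ok = hmc_step(q)
        accepted += ok
        samples.append(q.copy())
    samples = np.array(samples)
    print("acceptance rate:", accepted / 5000)
    print("sample covariance:\n", np.cov(samples.T))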
Ng, C M
2013-10-01
The development of a population PK/PD model, an essential component for model-based drug development, is both time- and labor-intensive. A graphical-processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and identical algorithm that is designed for the single CPU (MCPEMCPU) were developed using MATLAB in a single computer equipped with dual Xeon 6-Core E5690 CPU and a NVIDIA Tesla C2070 GPU parallel computing card that contained 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data in assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimation and model computation times. Speedup factor was used to assess the relative benefit of parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation time than the MCPEMCPU and can offer more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of parallelized MCPEM algorithm developed in this study holds a great promise in serving as the core for the next-generation of modeling software for population PK/PD analysis.
Mankodi, T K; Bhandarkar, U V; Puranik, B P
2017-08-28
A new ab initio based chemical model for a Direct Simulation Monte Carlo (DSMC) study suitable for simulating rarefied flows with a high degree of non-equilibrium is presented. To this end, Collision Induced Dissociation (CID) cross sections for N2 + N2 → N2 + 2N are calculated and published using a global complete active space self-consistent field-complete active space second order perturbation theory N4 potential energy surface and a quasi-classical trajectory algorithm for high energy collisions (up to 30 eV). CID cross sections are calculated for only a selected set of ro-vibrational combinations of the two nitrogen molecules, and a fitting scheme based on spectroscopic weights is presented to interpolate the CID cross section for all possible ro-vibrational combinations. The new chemical model is validated by calculating equilibrium reaction rate coefficients that compare well with existing shock tube and computational results. High-enthalpy hypersonic nitrogen flows around a cylinder in the transition flow regime are simulated using DSMC to compare the predictions of the current ab initio based chemical model with the prevailing phenomenological model (the total collision energy model). The differences in the predictions are discussed.
NASA Astrophysics Data System (ADS)
García, M. F.; Restrepo-Parra, E.; Riaño-Rojas, J. C.
2015-05-01
This work develops a model that mimics the growth of diatomic, polycrystalline thin films by artificially splitting the growth into deposition and relaxation processes comprising two stages: (1) a grain-based stochastic method (with grain orientations chosen randomly) is considered, and the deposition is simulated by means of the Kinetic Monte Carlo method employing a non-standard version known as Constant Time Stepping; the adsorption of adatoms is accepted or rejected depending on the neighborhood conditions, and the desorption process is not included in the simulation; and (2) the Monte Carlo method combined with the Metropolis algorithm is used to simulate the diffusion. The model was developed by accounting for parameters that determine the morphology of the film, such as the growth temperature, the interacting atomic species, the binding energy and the material crystal structure. The modeled samples exhibited an FCC structure with grains oriented in the <111>, <200> and <220> family planes. The grain size and film roughness were analyzed. By construction, the grain size decreased, and the roughness increased, as the growth temperature increased. Although, during the growth of real materials, deposition and relaxation occur simultaneously, this method may nevertheless be valid for building realistic polycrystalline samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Diksha; Badano, Aldo
2013-03-15
Purpose: hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry based on a hybrid concept that maximizes the utilization of available CPU and graphics processing unit processors in a workstation. Methods: The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of modulation transfer function and calculate the root mean square difference and Swank factors from simulated and experimental results. Results: The comparison suggests that hybridMANTIS better matches the experimental data as compared to MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS, with speed-ups up to 5260. Conclusions: hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.
Rotational Brownian Dynamics simulations of clathrin cage formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilie, Ioana M.; Briels, Wim J.; MESA+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede
2014-08-14
The self-assembly of nearly rigid proteins into ordered aggregates is well suited for modeling by the patchy particle approach. Patchy particles are traditionally simulated using Monte Carlo methods, to study the phase diagram, while Brownian Dynamics simulations would reveal insights into the assembly dynamics. However, Brownian Dynamics of rotating anisotropic particles gives rise to a number of complications not encountered in translational Brownian Dynamics. We thoroughly test the Rotational Brownian Dynamics scheme proposed by Naess and Elsgaeter [Macromol. Theory Simul. 13, 419 (2004); Naess and Elsgaeter Macromol. Theory Simul. 14, 300 (2005)], confirming its validity. We then apply the algorithm to simulate a patchy particle model of clathrin, a three-legged protein involved in vesicle production from lipid membranes during endocytosis. Using this algorithm we recover time scales for cage assembly comparable to those from experiments. We also briefly discuss the undulatory dynamics of the polyhedral cage.
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
2016-10-21
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
SPIDERMAN: Fast code to simulate secondary transits and phase curves
NASA Astrophysics Data System (ADS)
Louden, Tom; Kreidberg, Laura
2017-11-01
SPIDERMAN calculates exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. The code uses a geometrical algorithm to solve exactly for the area of sections of the disc of the planet that are occulted by the star. Approximately 1000 models can be generated per second in typical use, which makes Markov Chain Monte Carlo analyses practicable. The code is modular and allows comparison of the effect of multiple different brightness distributions for a dataset.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matuttis, Hans-Georg; Wang, Xiaoxing
Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them to both classical ordinary differential equations (ODEs) and quantum systems allows one to judge their effectiveness and gives new insights for many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.
Computational Fluid Dynamics: Algorithms and Supercomputers
1988-03-01
[Abstract not recoverable from the source scan; the remaining fragments cite Pulliam, T., and Steger, J., "Implicit Finite Difference Simulations of Three Dimensional Compressible Flow," AIAA Journal, Vol. 18, No. 2, and a garbled benchmark table covering particle-in-cell, weather forecast, seismic migration, Monte Carlo, and lattice gauge codes.]
Triple collinear emissions in parton showers
Hoche, Stefan; Prestel, Stefan
2017-10-17
A framework to include triple collinear splitting functions into parton showers is presented, and the implementation of flavor-changing next-to-leading-order (NLO) splitting kernels is discussed as a first application. The correspondence between the Monte Carlo integration and the analytic computation of NLO DGLAP evolution kernels is made explicit for both timelike and spacelike parton evolution. Finally, numerical simulation results are obtained with two independent implementations of the new algorithm, using the two independent event generation frameworks PYTHIA and SHERPA.
Cost-effectiveness of HIV and syphilis antenatal screening: a modeling study
Bristow, Claire C.; Larson, Elysia; Anderson, Laura J.; Klausner, Jeffrey D.
2016-01-01
Objectives The World Health Organization called for the elimination of maternal-to-child transmission (MTCT) of HIV and syphilis, a harmonized approach for the improvement of health outcomes for mothers and children. Testing early in pregnancy, treating seropositive pregnant women, and preventing syphilis re-infection can prevent MTCT of HIV and syphilis. We assessed the health and economic outcomes of a dual testing strategy in a simulated cohort of 100,000 antenatal care patients in Malawi. Methods We compared four screening algorithms: (1) HIV rapid test only, (2) dual HIV and syphilis rapid tests, (3) single rapid tests for HIV and syphilis, and (4) HIV rapid and syphilis laboratory tests. We calculated the expected number of adverse pregnancy outcomes, the expected costs, and the expected newborn disability adjusted life years (DALYs) for each screening algorithm. The estimated costs and DALYs for each screening algorithm were assessed from a societal perspective using Markov progression models. Additionally, we conducted a Monte Carlo multi-way sensitivity analysis, allowing for ranges of inputs. Results Our cohort decision model predicted the lowest number of adverse pregnancy outcomes in the dual HIV and syphilis rapid test strategy. Additionally, from the societal perspective, the costs of prevention and care using a dual HIV and syphilis rapid testing strategy was both the least costly ($226.92 per pregnancy) and resulted in the fewest DALYs (116,639) per 100,000 pregnancies. In the Monte Carlo simulation the dual HIV and syphilis algorithm was always cost saving and almost always reduced disability adjusted life years (DALYs) compared to HIV testing alone. Conclusion The results of the cost-effectiveness analysis showed that a dual HIV and syphilis test was cost saving compared to all other screening strategies. Adding dual rapid testing to the existing prevention of mother-to-child HIV transmission programs in Malawi and similar countries is likely to be advantageous. PMID:26920867
GPU accelerated population annealing algorithm
NASA Astrophysics Data System (ADS)
Barash, Lev Yu.; Weigel, Martin; Borovský, Michal; Janke, Wolfhard; Shchur, Lev N.
2017-11-01
Population annealing is a promising recent approach for Monte Carlo simulations in statistical physics, in particular for the simulation of systems with complex free-energy landscapes. It is a hybrid method, combining importance sampling through Markov chains with elements of sequential Monte Carlo in the form of population control. While it appears to provide algorithmic capabilities for the simulation of such systems that are roughly comparable to those of more established approaches such as parallel tempering, it is intrinsically much more suitable for massively parallel computing. Here, we tap into this structural advantage and present a highly optimized implementation of the population annealing algorithm on GPUs that promises speed-ups of several orders of magnitude as compared to a serial implementation on CPUs. While the sample code is for simulations of the 2D ferromagnetic Ising model, it should be easily adapted for simulations of other spin models, including disordered systems. Our code includes implementations of some advanced algorithmic features that have only recently been suggested, namely the automatic adaptation of temperature steps and a multi-histogram analysis of the data at different temperatures. Program Files doi:http://dx.doi.org/10.17632/sgzt4b7b3m.1 Licensing provisions: Creative Commons Attribution license (CC BY 4.0) Programming language: C, CUDA External routines/libraries: NVIDIA CUDA Toolkit 6.5 or newer Nature of problem: The program calculates the internal energy, specific heat, several magnetization moments, entropy and free energy of the 2D Ising model on square lattices of edge length L with periodic boundary conditions as a function of inverse temperature β. Solution method: The code uses population annealing, a hybrid method combining Markov chain updates with population control. The code is implemented for NVIDIA GPUs using the CUDA language and employs advanced techniques such as multi-spin coding, adaptive temperature steps and multi-histogram reweighting. Additional comments: Code repository at https://github.com/LevBarash/PAising. The system size and size of the population of replicas are limited depending on the memory of the GPU device used. For the default parameter values used in the sample programs, L = 64, θ = 100, β0 = 0, βf = 1, Δβ = 0.005, R = 20 000, a typical run time on an NVIDIA Tesla K80 GPU is 151 seconds for the single spin coded (SSC) and 17 seconds for the multi-spin coded (MSC) program (see Section 2 for a description of these parameters).
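A much-reduced, serial Python sketch of the population annealing loop (resampling with Boltzmann weights followed by Metropolis sweeps) for a small 2D Ising lattice. It uses multinomial resampling and far smaller parameters than the GPU defaults quoted above, so it illustrates the structure of the algorithm only, not its performance.

    import numpy as np

    rng = np.random.default_rng(8)
    L, R, theta, dbeta = 8, 100, 2, 0.05      # tiny compared with the GPU defaults

    def energy(s):
        return -np.sum(s * np.roll(s, 1, 0)) - np.sum(s * np.roll(s, 1, 1))

    def sweep(s, beta):
        # One Metropolis sweep of single-spin flips over the L x L lattice.
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            dE = 2 * s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                                + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] *= -1

    pop = [rng.choice([-1, 1], size=(L, L)) for _ in range(R)]   # start at beta = 0
    beta = 0.0
    while beta < 0.5:
        E = np.array([energy(s) for s in pop])
        # Population control: resample replicas with weights exp(-dbeta * E)
        # (multinomial resampling is an illustrative simplification).
        w = np.exp(-dbeta * (E - E.min()))
        counts = rng.multinomial(R, w / w.sum())
        pop = [pop[k].copy() for k, c in enumerate(counts) for _ in range(c)]
        beta += dbeta
        for s in pop:                          # theta equilibration sweeps each
            for _ in range(theta):
                sweep(s, beta)
        m = np.mean([abs(s.mean()) for s in pop])
        print(f"beta = {beta:.2f}  <|m|> = {m:.3f}")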
NASA Astrophysics Data System (ADS)
Cruz Inclán, Carlos M.; González Lazo, Eduardo; Rodríguez Rodríguez, Arturo; Guzmán Martínez, Fernando; Abreu Alfonso, Yamiel; Piñera Hernández, Ibrahin; Leyva Fabelo, Antonio
2017-09-01
The present work deals with the numerical simulation of gamma and electron radiation damage processes under high-brightness, high-fluence irradiation, with regard to two new radiation-induced atom displacement processes. These combine Monte Carlo based numerical simulation of atom displacements produced by gamma and electron interactions and transport in a solid matrix with atom displacement threshold energies calculated by molecular dynamics methodologies. The two new radiation damage processes considered here, in the framework of high-brightness, high-fluence irradiation conditions, are: 1) radiation-induced atom displacements due to a single primary knock-on atom excitation in a defective target crystal matrix, whose defect concentrations (vacancies, interstitials and Frenkel pairs) increase as a result of severe and progressive radiation damage, and 2) atom displacements related to multiple primary knock-on atom excitations of the same or different atomic species in a perfect target crystal matrix, caused by subsequent electron elastic atomic scattering in the same atomic neighborhood within a crystal lattice relaxation time. A review of numerical simulation attempts at these two new radiation damage processes is presented, starting from previously developed algorithms and codes for Monte Carlo simulation of atom displacements induced by electrons and gammas in
Random Numbers and Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling the important configurations preferentially. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
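The role of importance sampling mentioned above can be made concrete with a small example that is not taken from the chapter: estimating the integral of e^{-x} cos²x over [0, ∞), whose exact value is 0.6, either by uniform sampling on a truncated interval or by drawing the sample points from the exponential density so that the dominant factor of the integrand is sampled preferentially.

```python
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: np.exp(-x) * np.cos(x) ** 2    # integrand on [0, infinity); exact integral = 0.6

def plain_mc(n, cutoff=20.0):
    """Uniform sampling on [0, cutoff]: most points land where the integrand is tiny."""
    x = rng.uniform(0.0, cutoff, n)
    return cutoff * np.mean(f(x))

def importance_mc(n):
    """Sample x ~ Exp(1): the factor e^{-x} is absorbed into the sampling density."""
    x = rng.exponential(1.0, n)
    return np.mean(np.cos(x) ** 2)

for n in (10**3, 10**5):
    print(n, round(plain_mc(n), 4), round(importance_mc(n), 4))
```

For a fixed number of samples the importance-sampled estimate fluctuates far less around 0.6, which is the efficiency gain the chapter refers to.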
Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio
2018-03-01
To provide a multi-stage model to calculate uncertainty in radiochromic film dosimetry with Monte-Carlo techniques. This new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 are exposed in two different Varian linacs. They are read with an EPSON V800 flatbed scanner. The Monte-Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis such as the standard deviation and bias are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. Also, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show a Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented. With the aid of this model and the use of the Monte-Carlo techniques, the uncertainty of dose estimates for single-channel and multichannel algorithms is estimated. The application of the model together with Monte-Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
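A minimal sketch of the Monte-Carlo uncertainty propagation idea is given below. It assumes a hypothetical single-channel calibration of the form D = a·netOD + b·netOD^n with invented parameter values and scanner noise levels (none of these numbers come from the paper); scanner noise and calibration uncertainties are sampled repeatedly, and the standard deviation and bias are read off the resulting numerical representation of the dose probability density.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical single-channel calibration D(netOD) = a*netOD + b*netOD**n_exp,
# with invented parameter values and uncertainties (not from the paper).
a, b, n_exp = 8.0, 30.0, 2.5
sigma_a, sigma_b = 0.2, 1.5            # calibration-fit uncertainties
sigma_pv = 150.0                       # scanner pixel-value noise (16-bit scale)
pv_unexposed, pv_exposed = 45000.0, 30000.0

def dose(netod, a_s, b_s):
    return a_s * netod + b_s * netod ** n_exp

def monte_carlo_dose_pdf(n_samples=100_000):
    """Propagate scanner noise and calibration uncertainty to the dose estimate."""
    pv0 = rng.normal(pv_unexposed, sigma_pv, n_samples)
    pv = rng.normal(pv_exposed, sigma_pv, n_samples)
    netod = np.log10(pv0 / pv)                   # net optical density, one value per sample
    a_s = rng.normal(a, sigma_a, n_samples)
    b_s = rng.normal(b, sigma_b, n_samples)
    return dose(netod, a_s, b_s)                 # numerical representation of the dose PDF

d = monte_carlo_dose_pdf()
d_nominal = dose(np.log10(pv_unexposed / pv_exposed), a, b)
print(f"nominal dose       : {d_nominal:.3f} Gy")
print(f"mean dose (bias)   : {d.mean():.3f} Gy ({d.mean() - d_nominal:+.3f} Gy)")
print(f"standard deviation : {d.std():.3f} Gy")
```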
Experiences with Markov Chain Monte Carlo Convergence Assessment in Two Psychometric Examples
ERIC Educational Resources Information Center
Sinharay, Sandip
2004-01-01
There is an increasing use of Markov chain Monte Carlo (MCMC) algorithms for fitting statistical models in psychometrics, especially in situations where the traditional estimation techniques are very difficult to apply. One of the disadvantages of using an MCMC algorithm is that it is not straightforward to determine the convergence of the…
Metis: A Pure Metropolis Markov Chain Monte Carlo Bayesian Inference Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bates, Cameron Russell; Mckigney, Edward Allen
The use of Bayesian inference in data analysis has become the standard for large scientific experiments [1, 2]. The Monte Carlo Codes Group (XCP-3) at Los Alamos has developed a simple set of algorithms currently implemented in C++ and Python to easily perform flat-prior Markov Chain Monte Carlo Bayesian inference with pure Metropolis sampling. These implementations are designed to be user friendly and extensible for customization based on specific application requirements. This document describes the algorithmic choices made and presents two use cases.
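The algorithmic choice described above, pure Metropolis sampling with a flat prior, can be sketched in a few lines of Python. This is an illustration of the sampling scheme only, not of the Metis API; the function and variable names are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def metropolis_flat_prior(log_likelihood, x0, bounds, n_steps=20_000, step=0.5):
    """Pure Metropolis sampling of a posterior with a flat (uniform) prior on `bounds`.

    With a flat prior the acceptance ratio reduces to the likelihood ratio; proposals
    that leave the prior support are rejected outright.
    """
    lo, hi = np.asarray(bounds, float).T
    x = np.asarray(x0, float)
    logl = log_likelihood(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = x + rng.normal(0.0, step, size=x.shape)   # symmetric random-walk proposal
        if np.all(prop >= lo) and np.all(prop <= hi):
            logl_prop = log_likelihood(prop)
            if np.log(rng.random()) < logl_prop - logl:  # Metropolis acceptance test
                x, logl = prop, logl_prop
        chain[i] = x
    return chain

# Toy use case: infer the mean of Gaussian data with known unit variance.
data = rng.normal(3.0, 1.0, size=50)
loglike = lambda theta: -0.5 * np.sum((data - theta[0]) ** 2)
samples = metropolis_flat_prior(loglike, x0=[0.0], bounds=[(-10.0, 10.0)])
print("posterior mean estimate:", samples[5_000:, 0].mean())
```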
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisenbach, Markus; Li, Ying Wai
We report a new multicanonical Monte Carlo (MC) algorithm to obtain the density of states (DOS) for physical systems with continuous state variables in statistical mechanics. Our algorithm is able to obtain an analytical form for the DOS expressed in a chosen basis set, instead of a numerical array of finite resolution as in previous variants of this class of MC methods such as the multicanonical (MUCA) sampling and Wang-Landau (WL) sampling. This is enabled by storing the visited states directly in a data set and avoiding the explicit collection of a histogram. This practice also has the advantage of avoiding undesirable artificial errors caused by the discretization and binning of continuous state variables. Our results show that this scheme is capable of obtaining converged results with a much reduced number of Monte Carlo steps, leading to a significant speedup over existing algorithms.