USDA-ARS's Scientific Manuscript database
Computer Monte Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of the gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...
Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
NASA Astrophysics Data System (ADS)
Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.
2008-02-01
Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head and neck, and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by a factor of up to 62 (46 on average for 10 typical clinical cases) compared to full Monte Carlo simulations. Dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes in SRS/SRT, which can be remedied by accurate Monte Carlo dose calculations.
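The Russian roulette step mentioned above for scattered and bremsstrahlung photons is simple to sketch. The snippet below is a generic illustration, not the MCSIM implementation; the threshold and survival probability are arbitrary illustrative choices.

```python
import random

def russian_roulette(weight, threshold=0.1, survival=0.5, rng=random):
    """Unbiased termination of low-weight particles: below the threshold a
    particle survives with probability `survival`, and survivors get their
    weight divided by `survival`, so the expected weight is unchanged."""
    if weight >= threshold:
        return weight                 # heavy enough: keep tracking as-is
    if rng.random() < survival:
        return weight / survival      # survivor carries the extra weight
    return 0.0                        # terminated: stop tracking this photon

# Unbiasedness check: the mean post-roulette weight matches the input weight.
rng = random.Random(0)
mean = sum(russian_roulette(0.05, rng=rng) for _ in range(200000)) / 200000
print(round(mean, 3))  # → 0.05
```

The check at the end verifies the defining property of the technique: roulette changes the variance and the particle count, never the expected weight.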
Monte Carlo Simulation of Microscopic Stock Market Models
NASA Astrophysics Data System (ADS)
Stauffer, Dietrich
Computer simulations with random numbers, that is, Monte Carlo methods, have been widely applied in recent years to model the fluctuations of stock markets or currency exchange rates. Here we concentrate on the percolation model of Cont and Bouchaud, to simulate, not to predict, the market behavior.
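The flavor of such a simulation can be conveyed in a few lines. The sketch below is a heavily simplified, hypothetical variant of the Cont-Bouchaud idea: trader clusters act as single units, and the return is the net demand divided by a market depth. The cluster-size distribution, activity rate, and depth are stand-ins; the original model obtains clusters from percolation on a lattice.

```python
import random

def cont_bouchaud_step(clusters, activity=0.05, depth=1000.0, rng=random):
    """One market step: each trader cluster buys or sells as a single unit
    with probability `activity` each (else stays inactive); the return is
    the net demand divided by the market depth."""
    demand = 0
    for size in clusters:
        u = rng.random()
        if u < activity:
            demand += size        # cluster buys as one unit
        elif u < 2 * activity:
            demand -= size        # cluster sells as one unit
    return demand / depth

# Stand-in cluster sizes: the full model takes these from percolation
# clusters on a lattice; here a heavy-tailed draw plays that role.
rng = random.Random(42)
clusters = [int(rng.paretovariate(1.5)) + 1 for _ in range(500)]
returns = [cont_bouchaud_step(clusters, rng=rng) for _ in range(2000)]
print(min(returns) < 0 < max(returns))  # → True: fluctuations in both directions
```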
Monte Carlo simulations in X-ray imaging
NASA Astrophysics Data System (ADS)
Giersch, Jürgen; Durst, Jürgen
2008-06-01
Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the fields of nuclear medicine to define virtual setups studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with some examples done by the Monte Carlo simulation ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel computing Monte Carlo simulation for X-ray imaging.
Quantum Monte Carlo Methods for First Principles Simulation of Liquid Water
ERIC Educational Resources Information Center
Gergely, John Robert
2009-01-01
Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully "ab initio"…
Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.
Beentjes, Casper H L; Baker, Ruth E
2018-05-25
Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from the typical slow O(N^(-1/2)) convergence rate as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely tau-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
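The core quasi-Monte Carlo move, swapping a pseudorandom input stream for a low-discrepancy one, can be illustrated in the standard-quadrature setting the paper mentions. This sketch uses a hand-rolled van der Corput sequence and a toy integrand; it is not the paper's tau-leaping experiment.

```python
import random

def halton(i, base):
    """i-th element of the van der Corput sequence in the given base
    (the 1-D building block of Halton low-discrepancy sequences)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def quad_error(points):
    """|estimate - truth| when integrating f(x) = x^2 over [0,1] (truth 1/3)."""
    est = sum(x * x for x in points) / len(points)
    return abs(est - 1.0 / 3.0)

n = 4096
rng = random.Random(0)
err_mc = quad_error([rng.random() for _ in range(n)])        # pseudorandom
err_qmc = quad_error([halton(i + 1, 2) for i in range(n)])   # low-discrepancy
print(err_mc, err_qmc)  # the low-discrepancy error is near 1e-4 here
```

For this smooth integrand the low-discrepancy error decays close to O(1/N), versus the O(N^(-1/2)) statistical error of plain Monte Carlo.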
NASA Astrophysics Data System (ADS)
Schiavon, Nick; de Palmas, Anna; Bulla, Claudio; Piga, Giampaolo; Brunetti, Antonio
2016-09-01
A spectrometric protocol combining Energy Dispersive X-Ray Fluorescence Spectrometry with Monte Carlo simulations of experimental spectra using the XRMC code package has been applied for the first time to characterize the elemental composition of a series of famous Iron Age small-scale archaeological bronze replicas of ships (known as the "Navicelle") from the Nuragic civilization in Sardinia, Italy. The proposed protocol is a useful, nondestructive and fast analytical tool for Cultural Heritage samples. In the Monte Carlo simulations, each sample was modeled as a multilayered object composed of two or three layers depending on the sample: when all are present, the three layers are the original bronze substrate, the surface corrosion patina and an outermost protective layer (Paraloid) applied during past restorations. The Monte Carlo simulations were able to account for the presence of the patina/corrosion layer as well as the Paraloid protective layer. They also accounted for the roughness effect commonly found at the surface of corroded metal archaeological artifacts. In this respect, the Monte Carlo simulation approach adopted here is, to the best of our knowledge, unique, and enabled determination of the bronze alloy composition together with the thickness of the surface layers without the need to first remove the surface patinas, a process potentially threatening the preservation of precious archaeological/artistic artifacts for future generations.
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis
NASA Technical Reports Server (NTRS)
Hanson, J. M.; Beard, B. B.
2010-01-01
This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
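One standard result related to "how many runs are necessary for verification of requirements" is the zero-failure binomial bound. The sketch below uses that textbook formula as an illustration; it is not necessarily the TP's exact derivation, which also covers consumer risk and rare events.

```python
import math

def runs_for_zero_failures(p_success, confidence):
    """Smallest N such that observing N consecutive successful runs with no
    failures demonstrates P(success) >= p_success at the given confidence,
    from the binomial tail condition p_success**N <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(p_success))

# e.g. verifying a 99.73% success requirement at 90% confidence:
print(runs_for_zero_failures(0.9973, 0.90))  # → 852
```

The run count grows quickly with both the required success probability and the confidence level, which is why rare-event and importance-sampling techniques (also covered in the TP) matter.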
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied in the first part of this study to investigate how uncertainty in both input variables and response measurements propagates into model predictions for nasal spray product performance design of experiment (DOE) models, under the initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard that assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that error estimates based on Monte Carlo simulation yield smaller model coefficient standard deviations than those from regression methods, suggesting that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution for understanding the propagation of uncertainty in complex DOE models, so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
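A minimal sketch of propagating input-variable uncertainty through a response-surface model by Monte Carlo. The quadratic model, its coefficients, and the input distributions below are all hypothetical, purely for illustration of the mechanics.

```python
import random, statistics

def doe_model(x1, x2):
    """Hypothetical quadratic DOE response surface (coefficients are made
    up for illustration, not taken from the study)."""
    return 10.0 + 2.0 * x1 - 1.5 * x2 + 0.8 * x1 * x2

def propagate(n=50000, seed=1):
    """Draw inputs from their (assumed Gaussian) uncertainty distributions,
    push each draw through the model, and summarize the response spread."""
    rng = random.Random(seed)
    ys = [doe_model(rng.gauss(1.0, 0.1), rng.gauss(2.0, 0.2)) for _ in range(n)]
    return statistics.mean(ys), statistics.stdev(ys)

mean, sd = propagate()
print(round(mean, 2), round(sd, 2))  # nominal response is 10.6, plus its spread
```

The sampled standard deviation is the Monte Carlo analogue of the propagated coefficient uncertainty discussed in the abstract: no linearization is assumed, so interaction terms are handled exactly.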
Sechopoulos, Ioannis; Rogers, D W O; Bazalova-Carter, Magdalena; Bolch, Wesley E; Heath, Emily C; McNitt-Gray, Michael F; Sempau, Josep; Williamson, Jeffrey F
2018-01-01
Studies involving Monte Carlo simulations are common in both diagnostic and therapy medical physics research, as well as other fields of basic and applied science. As with all experimental studies, the conditions and parameters used for Monte Carlo simulations impact their scope, validity, limitations, and generalizability. Unfortunately, many published peer-reviewed articles involving Monte Carlo simulations do not provide the level of detail needed for the reader to be able to properly assess the quality of the simulations. The American Association of Physicists in Medicine Task Group #268 developed guidelines to improve reporting of Monte Carlo studies in medical physics research. By following these guidelines, manuscripts submitted for peer-review will include a level of relevant detail that will increase the transparency, the ability to reproduce results, and the overall scientific value of these studies. The guidelines include a checklist of the items that should be included in the Methods, Results, and Discussion sections of manuscripts submitted for peer-review. These guidelines do not attempt to replace the journal reviewer, but rather to be a tool during the writing and review process. Given the varied nature of Monte Carlo studies, it is up to the authors and the reviewers to use this checklist appropriately, being conscious of how the different items apply to each particular scenario. It is envisioned that this list will be useful both for authors and for reviewers, to help ensure the adequate description of Monte Carlo studies in the medical physics literature. © 2017 American Association of Physicists in Medicine.
Monte Carlo Simulations of Radiative and Neutrino Transport under Astrophysical Conditions
NASA Astrophysics Data System (ADS)
Krivosheyev, Yu. M.; Bisnovatyi-Kogan, G. S.
2018-05-01
Monte Carlo simulations are utilized to model radiative and neutrino transfer in astrophysics. An algorithm that can be used to study radiative transport in astrophysical plasma based on simulations of photon trajectories in a medium is described. Formation of the hard X-ray spectrum of the Galactic microquasar SS 433 is considered in detail as an example. Specific requirements for applying such simulations to neutrino transport in a dense medium and algorithmic differences compared to its application to photon transport are discussed.
Monte Carlo simulation for kinetic chemotaxis model: An application to the traveling population wave
NASA Astrophysics Data System (ADS)
Yasuda, Shugo
2017-02-01
A Monte Carlo simulation of chemotactic bacteria is developed on the basis of the kinetic model and is applied to a one-dimensional traveling population wave in a microchannel. In this simulation, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method to calculate the macroscopic transport of the chemical cues in the environment. The simulation method can successfully reproduce the traveling population wave of bacteria that was observed experimentally and reveal the microscopic dynamics of bacteria coupled with the macroscopic transport of the chemical cues and bacterial population density. The results obtained by the Monte Carlo method are also compared with the asymptotic solution derived from the kinetic chemotaxis equation in the continuum limit, where the Knudsen number, defined as the ratio of the mean free path of a bacterium to the characteristic length of the system, vanishes. The validity of the Monte Carlo method in the asymptotic behaviors for small Knudsen numbers is numerically verified.
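A stripped-down illustration of the run-and-tumble mechanism: the sketch below is a 1-D toy in which a hard-coded tumble-rate bias stands in for the chemoattractant field. It is not the paper's kinetic-model simulation coupled to a finite volume solver; all parameter values are arbitrary.

```python
import random

def run_and_tumble(steps=2000, speed=1.0, dt=0.01, base_rate=10.0, seed=0):
    """1-D run-and-tumble walker: constant-speed runs interrupted by random
    tumbles; a chemoattractant gradient is mimicked by lowering the tumble
    rate when the walker moves up-gradient (+x direction)."""
    rng = random.Random(seed)
    x, direction = 0.0, 1
    for _ in range(steps):
        rate = base_rate * (0.5 if direction > 0 else 1.5)  # biased tumbling
        if rng.random() < rate * dt:          # tumble event this time step
            direction = rng.choice((-1, 1))   # pick a fresh direction
        x += direction * speed * dt           # run at constant speed
    return x

# Averaged over many walkers, the biased tumbling yields net drift up-gradient,
# the single-cell mechanism behind the macroscopic population wave.
drift = sum(run_and_tumble(seed=s) for s in range(200)) / 200
print(drift > 0)  # → True
```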
Recommender engine for continuous-time quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
Moradmand Jalali, Hamed; Bashiri, Hadis; Rasa, Hossein
2015-05-01
In the present study, the mechanism of free radical production by light-reflective agents in sunscreens (TiO2, ZnO and ZrO2) was obtained by applying kinetic Monte Carlo simulation. The values of the rate constants for each step of the suggested mechanism have been obtained by simulation. The effect of the initial concentration of mineral oxides and uric acid on the rate of uric acid photo-oxidation by irradiation of some sun care agents has been studied. The kinetic Monte Carlo simulation results agree qualitatively with the existing experimental data for the production of free radicals by sun care agents. Copyright © 2015 Elsevier B.V. All rights reserved.
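Kinetic Monte Carlo in this setting amounts to Gillespie-style stochastic simulation of a reaction mechanism. The sketch below runs the algorithm on a single hypothetical first-order step A -> B, not the paper's multi-step photo-oxidation mechanism; the rate constant and molecule counts are arbitrary.

```python
import random, math

def gillespie_decay(n_a=1000, k=0.5, t_end=10.0, seed=3):
    """Gillespie SSA for a single first-order step A -> B with rate k: draw
    an exponential waiting time from the total propensity, fire the
    reaction, and repeat until t_end is reached or no A remains."""
    rng = random.Random(seed)
    t = 0.0
    while n_a > 0:
        propensity = k * n_a
        t += -math.log(1.0 - rng.random()) / propensity  # waiting time
        if t > t_end:
            break
        n_a -= 1                                         # one A molecule reacts
    return n_a

remaining = gillespie_decay()
print(remaining)  # stochastic counterpart of n_a * exp(-k * t_end) ≈ 6.7
```

In a full mechanism, each elementary step contributes its own propensity and the fired reaction is chosen proportionally to them; the single-step case above keeps only the waiting-time logic.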
E Pluribus Analysis: Applying a Superforecasting Methodology to the Detection of Homegrown Violence
2018-03-01
actor violence and a set of predefined decision-making protocols. This research included running four simulations using the Monte Carlo technique, which...
Anthology of the Development of Radiation Transport Tools as Applied to Single Event Effects
NASA Astrophysics Data System (ADS)
Reed, R. A.; Weller, R. A.; Akkerman, A.; Barak, J.; Culpepper, W.; Duzellier, S.; Foster, C.; Gaillardin, M.; Hubert, G.; Jordan, T.; Jun, I.; Koontz, S.; Lei, F.; McNulty, P.; Mendenhall, M. H.; Murat, M.; Nieminen, P.; O'Neill, P.; Raine, M.; Reddell, B.; Saigné, F.; Santin, G.; Sihver, L.; Tang, H. H. K.; Truscott, P. R.; Wrobel, F.
2013-06-01
This anthology contains contributions from eleven different groups, each developing and/or applying Monte Carlo-based radiation transport tools to simulate a variety of effects that result from energy transferred to a semiconductor material by a single particle event. The topics span from basic mechanisms for single-particle induced failures to applied tasks like developing websites to predict on-orbit single event failure rates using Monte Carlo radiation transport tools.
NASA Astrophysics Data System (ADS)
Guan, Fada
The Monte Carlo method has been successfully applied to simulating particle transport problems. Most Monte Carlo simulation tools are static: they can only perform simulations of problems with fixed physics and geometry settings. Proton therapy, however, is a dynamic treatment technique in clinical application. In this research, we developed a method to perform dynamic Monte Carlo simulation of proton therapy using the Geant4 simulation toolkit. A passive-scattering treatment nozzle equipped with a rotating range modulation wheel was modeled in this research. One important application of Monte Carlo simulation is to predict the spatial dose distribution in the target geometry. For simplification, a mathematical model of a human body is usually used as the target, but then only the average dose over a whole organ or tissue can be obtained, rather than the accurate spatial dose distribution. In this research, we developed a method using MATLAB to convert the medical images of a patient from CT scanning into a patient voxel geometry. Hence, if the patient voxel geometry is used as the target in the Monte Carlo simulation, the accurate spatial dose distribution in the target can be obtained. A data analysis tool, ROOT, was used to score the simulation results during a Geant4 simulation and to analyze the data and plot the results afterwards. Finally, we successfully obtained the accurate spatial dose distribution in part of a human body for a patient with prostate cancer treated using proton therapy.
Monte Carlo Simulation of Nonlinear Radiation Induced Plasmas. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wang, B. S.
1972-01-01
A Monte Carlo simulation model for radiation-induced plasmas with nonlinear properties due to recombination was developed, employing a piecewise linearized predictor-corrector iterative technique. Several important variance reduction techniques were developed and incorporated into the model, including an antithetic variates technique. This approach is especially efficient for plasma systems with inhomogeneous media, multiple dimensions, and irregular boundaries. The Monte Carlo code developed has been applied to the determination of the electron energy distribution function and related parameters for a noble gas plasma created by alpha-particle irradiation. The characteristics of the radiation-induced plasma involved are given.
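The antithetic variates technique named above is easy to demonstrate on a toy integrand: each uniform draw u is paired with 1-u, and for a monotone integrand the pair's negative correlation cuts the variance. This generic sketch illustrates the technique only; it is unrelated to the plasma code itself.

```python
import random, statistics, math

def plain_mc(f, n, rng):
    """Standard Monte Carlo samples of f(U), U ~ Uniform(0,1)."""
    return [f(rng.random()) for _ in range(n)]

def antithetic_mc(f, n, rng):
    """Antithetic variates: average f over the pair (u, 1-u); for monotone
    f the two values are negatively correlated, reducing the variance of
    each paired sample without biasing the mean."""
    out = []
    for _ in range(n // 2):
        u = rng.random()
        out.append(0.5 * (f(u) + f(1.0 - u)))
    return out

f = math.exp                         # integrate e^x on [0,1]; truth is e - 1
rng = random.Random(0)
v_plain = statistics.variance(plain_mc(f, 20000, rng))
v_anti = statistics.variance(antithetic_mc(f, 20000, rng))
print(v_anti < v_plain)  # → True: the antithetic estimator has far less spread
```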
Considerations of MCNP Monte Carlo code to be used as a radiotherapy treatment planning tool.
Juste, B; Miro, R; Gallardo, S; Verdu, G; Santos, A
2005-01-01
The present work has simulated the photon and electron transport in a Theratron 780® (MDS Nordion) 60Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle). This project mainly explains the different methodologies carried out to speed up calculations in order to apply this code efficiently in radiotherapy treatment planning.
NASA Astrophysics Data System (ADS)
Crevillén-García, D.; Power, H.
2017-08-01
In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
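The multilevel idea can be sketched on a standard toy problem. The snippet below implements the telescoping MLMC estimator for Euler-Maruyama discretizations of geometric Brownian motion, with coupled fine/coarse paths sharing the same Brownian increments; it is illustrative only and not the paper's porous-media travel-time problem.

```python
import random, math

def euler_gbm_pair(level, rng, s0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """One coupled fine/coarse sample of S_T under Euler-Maruyama for
    geometric Brownian motion: level l uses 2**l steps, and the coarse
    path (2**(l-1) steps) reuses the summed fine Brownian increments."""
    n_f = 2 ** level
    dt = T / n_f
    s_f = s_c = s0
    dw_c = 0.0
    for i in range(n_f):
        dw = rng.gauss(0.0, math.sqrt(dt))
        s_f += s_f * (mu * dt + sigma * dw)
        dw_c += dw
        if level > 0 and i % 2 == 1:      # every 2 fine steps = 1 coarse step
            s_c += s_c * (mu * 2 * dt + sigma * dw_c)
            dw_c = 0.0
    return s_f, s_c                       # s_c is unused at level 0

def mlmc_estimate(max_level=4, n_per_level=20000, seed=0):
    """Telescoping MLMC sum: E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)]."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(max_level + 1):
        acc = 0.0
        for _ in range(n_per_level):
            fine, coarse = euler_gbm_pair(level, rng)
            acc += fine - (coarse if level > 0 else 0.0)
        total += acc / n_per_level
    return total

est = mlmc_estimate()
print(round(est, 2))  # → 1.05 (E[S_T] = exp(mu*T) ≈ 1.0513)
```

The cost saving comes from the coupling: the level differences have small variance, so most samples can be taken on the cheap coarse levels.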
Stochastic Investigation of Natural Frequency for Functionally Graded Plates
NASA Astrophysics Data System (ADS)
Karsh, P. K.; Mukhopadhyay, T.; Dey, S.
2018-03-01
This paper presents the stochastic natural frequency analysis of functionally graded plates by applying an artificial neural network (ANN) approach. Latin hypercube sampling is utilised to train the ANN model. The proposed algorithm for stochastic natural frequency analysis of FGM plates is validated and verified against the original finite element method and Monte Carlo simulation (MCS). The combined stochastic variation of input parameters such as elastic modulus, shear modulus, Poisson ratio, and mass density is considered. A power law is applied to distribute the material properties across the thickness. The present ANN model reduces the sample size and is found to be computationally efficient compared to conventional Monte Carlo simulation.
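Latin hypercube sampling, used above to train the ANN, is straightforward to implement: each input dimension is split into N equal strata and every stratum is sampled exactly once. A minimal generic sketch (not tied to the paper's FGM parameter ranges):

```python
import random

def latin_hypercube(n_samples, n_dims, rng):
    """Latin hypercube sample on the unit hypercube [0,1]^d: each dimension
    is split into n_samples equal strata and each stratum is hit once."""
    sample = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)              # random pairing across dimensions
        for i in range(n_samples):
            sample[i][d] = (strata[i] + rng.random()) / n_samples
    return sample

rng = random.Random(0)
pts = latin_hypercube(10, 4, rng)
# Stratification check: dimension 0 has exactly one point per tenth of [0,1].
cells = sorted(int(p[0] * 10) for p in pts)
print(cells)  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The guaranteed one-point-per-stratum coverage is what lets a small training set span the input space better than plain random sampling, which is the sample-size saving the abstract refers to.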
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
SIMCA T 1.0: A SAS Computer Program for Simulating Computer Adaptive Testing
ERIC Educational Resources Information Center
Raiche, Gilles; Blais, Jean-Guy
2006-01-01
Monte Carlo methodologies are frequently applied to study the sampling distribution of the estimated proficiency level in adaptive testing. These methods eliminate real situational constraints. However, these Monte Carlo methodologies are not currently supported by the available software programs, and when these programs are available, their…
Path integral Monte Carlo and the electron gas
NASA Astrophysics Data System (ADS)
Brown, Ethan W.
Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite-temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite-temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas at a large swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation.
As a first step, we utilize the variational principle inherent in the path integral Monte Carlo method to optimize the nodal surface. By using an ansatz resembling a free particle density matrix, we make a unique connection between a nodal effective mass and the traditional effective mass of many-body quantum theory. We then propose and test several alternate nodal ansatzes and apply them to single atomic systems. Finally, we propose a method to tackle the sign problem head on, by leveraging the relatively simple structure of permutation space. Using this method, we find we can perform exact simulations of the electron gas and 3He that were previously impossible.
COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. R. MARTIN; F. B. BROWN
2001-03-01
Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an "exact" method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported on. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported on as well.
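The moment estimates described here reduce to simple statistics of particle positions. The sketch below shows 1-D centroid and second-moment estimation and a finite-difference macrodispersivity, A_L ≈ Δvar/(2Δx̄), on made-up snapshot data; the paper's 3-D estimators and standard-error approximations are more involved.

```python
import statistics

def cloud_moments(positions):
    """First and second spatial moments of a 1-D tracer cloud: the centroid
    and the variance of particle positions about the centroid."""
    m1 = statistics.fmean(positions)
    m2 = statistics.fmean((x - m1) ** 2 for x in positions)
    return m1, m2

def macrodispersivity(var_early, var_late, x_early, x_late):
    """Finite-difference longitudinal macrodispersivity: half the growth of
    the second moment per unit mean displacement of the cloud."""
    return 0.5 * (var_late - var_early) / (x_late - x_early)

# Two hypothetical particle-position snapshots of an advecting 1-D cloud:
early = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
late = [4.0, 5.0, 6.0, 5.0, 4.5, 5.5]
x0, v0 = cloud_moments(early)
x1, v1 = cloud_moments(late)
print(round(macrodispersivity(v0, v1, x0, x1), 4))  # → 0.0516
```

In the Monte Carlo setting, these statistics are computed per realization of the conductivity field and then averaged, which is where the standard-error estimates of the paper come in.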
Boda, Dezső; Gillespie, Dirk
2012-03-13
We propose a procedure to compute the steady-state transport of charged particles based on the Nernst-Planck (NP) equation of electrodiffusion. To close the NP equation and to establish a relation between the concentration and electrochemical potential profiles, we introduce the Local Equilibrium Monte Carlo (LEMC) method. In this method, Grand Canonical Monte Carlo simulations are performed using the electrochemical potential specified for the distinct volume elements. An iteration procedure that self-consistently solves the NP and flux continuity equations with LEMC is shown to converge quickly. This NP+LEMC technique can be used in systems with diffusion of charged or uncharged particles in complex three-dimensional geometries, including systems with low concentrations and small applied voltages that are difficult for other particle simulation techniques.
Monte Carlo simulation of β-γ coincidence system using plastic scintillators in 4π geometry
NASA Astrophysics Data System (ADS)
Dias, M. S.; Piuvezam-Filho, H.; Baccarelli, A. M.; Takeda, M. N.; Koskinas, M. F.
2007-09-01
A modified version of a Monte Carlo code called Esquema, developed at the Nuclear Metrology Laboratory in IPEN, São Paulo, Brazil, has been applied to simulating a 4πβ(PS)-γ coincidence system designed for primary radionuclide standardisation. This system consists of a plastic scintillator in 4π geometry, for alpha or electron detection, coupled to a NaI(Tl) counter for gamma-ray detection. The response curves for monoenergetic electrons and photons were calculated previously with the Penelope code and applied as input data to Esquema. The latter code simulates all the disintegration processes, from the precursor nucleus to the ground state of the daughter radionuclide. As a result, the curve of observed disintegration rate as a function of the beta efficiency parameter can be simulated. A least-squares fit between the experimental activity values and the Monte Carlo calculation provided the actual radioactive source activity, without the need for conventional extrapolation procedures. Application of this methodology to 60Co and 133Ba radioactive sources is presented and showed results in good agreement with a conventional 4πβ(PC)-γ proportional counter coincidence system.
Teaching Reinforcement of Stochastic Behavior Using Monte Carlo Simulation.
ERIC Educational Resources Information Center
Fox, William P.; And Others
1996-01-01
Explains a proposed block of instruction that would give students in industrial engineering, operations research, systems engineering, and applied mathematics the basic understanding required to begin more advanced courses in simulation theory or applications. (DDR)
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally, while taking into account the realistic anatomy of an individual patient. Traditionally, the practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation speeds on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation.
Beyond reducing the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and the accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types and one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is often problematic because published reports of previous work frequently lack the information needed to replicate the simulation in detail.
To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are given in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for the efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high-performance computing environments.
Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
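Of the variance reduction techniques mentioned, Russian roulette and particle splitting are the simplest to sketch: low-weight particles are probabilistically killed, high-weight particles are split into lighter copies, and in both cases the expected total weight is preserved so the estimator stays unbiased. The weight thresholds below are illustrative choices, not values from the session.

```python
import random

def roulette_and_split(weight, w_low=0.25, w_high=2.0, rng=random.random):
    """Sketch of standard Russian roulette / splitting (thresholds are
    illustrative).  Returns the list of surviving particle weights; the
    expected total weight equals the input weight, so no bias is introduced."""
    if weight < w_low:                    # roulette: kill, or survive boosted
        survival = weight / w_low
        return [w_low] if rng() < survival else []
    if weight > w_high:                   # split into n lighter copies
        n = int(weight / w_high) + 1
        return [weight / n] * n
    return [weight]                       # weight in the comfort zone: keep
```

For example, a particle of weight 0.1 survives roulette with probability 0.4 at weight 0.25 (expected weight 0.1), while a particle of weight 5.0 is split into three copies of weight 5/3 each.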
Gutzwiller Monte Carlo approach for a critical dissipative spin model
NASA Astrophysics Data System (ADS)
Casteels, Wim; Wilson, Ryan M.; Wouters, Michiel
2018-06-01
We use the Gutzwiller Monte Carlo approach to simulate the dissipative XYZ model in the vicinity of a dissipative phase transition. This approach captures classical spatial correlations together with the full on-site quantum behavior while neglecting nonlocal quantum effects. By considering finite two-dimensional lattices of various sizes, we identify a ferromagnetic and two paramagnetic phases, in agreement with earlier studies. The greatly reduced numerical complexity of the Gutzwiller Monte Carlo approach facilitates efficient simulation of relatively large lattice sizes. The inclusion of spatial correlations makes it possible to capture parts of the phase diagram that are completely missed by the widely applied Gutzwiller decoupling of the density matrix.
Energy broadening in electron beams: A comparison of existing theories and Monte Carlo simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jansen, G.H.; Groves, T.R.; Stickel, W.
1985-01-01
Different theories of the Boersch effect are applied to a simple beam geometry with one crossover in drift space. The results are compared with each other, with Monte Carlo simulations, and with experiment. The most complete and accurate theory is that of van Leeuwen and Jansen. This theory predicts energy spreads within 10% of the Monte Carlo results for operating conditions typical of systems with thermionic emission sources. No comprehensive theory of energy broadening in electron guns has yet been presented. Nevertheless, the theory of van Leeuwen and Jansen was found to predict the experimental values in trend and within a factor of 2.
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining such probabilities using brute-force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
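A minimal sketch of the subset-simulation idea, on a scalar Gaussian toy problem rather than the paper's biochemical setting: the rare-event probability P(X > 3.5) is built up as a product of conditional probabilities, each intermediate level populated by a random-walk Metropolis chain restricted to exceed the previous level. All tuning values (sample size, level probability p0, proposal width) are illustrative.

```python
import numpy as np

# Toy subset simulation for p = P(X > 3.5), X ~ N(0,1).
rng = np.random.default_rng(1)
n, p0, target = 2000, 0.1, 3.5
x = rng.standard_normal(n)
p_est = 1.0
while True:
    x_sorted = np.sort(x)[::-1]
    level = x_sorted[int(n * p0)]        # threshold reached by the top p0 fraction
    if level >= target:
        p_est *= np.mean(x > target)     # final level: direct estimate
        break
    p_est *= p0                          # accumulate one conditional factor
    seeds = x_sorted[: int(n * p0)]
    chains = []
    for s in seeds:                      # modified Metropolis, conditioned on x > level
        cur = s
        for _ in range(int(1 / p0)):
            cand = cur + rng.normal(0.0, 1.0)
            if cand > level and rng.random() < np.exp(0.5 * (cur**2 - cand**2)):
                cur = cand
            chains.append(cur)
    x = np.array(chains)
print(p_est)   # on the order of 1e-4 (exact: 1 - Phi(3.5) ≈ 2.33e-4)
```

Brute-force Monte Carlo would need millions of samples to see this event at all; here each level only has to resolve a probability of about 0.1.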
Propagating probability distributions of stand variables using sequential Monte Carlo methods
Jeffrey H. Gove
2009-01-01
A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
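The predictor-corrector cycle of the SIR particle filter can be sketched on a scalar linear-Gaussian state model (all model coefficients and noise levels below are hypothetical, not the stand-growth model of the paper): predict by propagating particles through the dynamics, correct by likelihood weighting, then resample.

```python
import numpy as np

# Bare-bones SIR particle filter on a toy AR(1) state with Gaussian noise.
rng = np.random.default_rng(0)
T, n = 50, 1000
true_x, xs, obs = 0.0, [], []
for _ in range(T):                         # simulate truth and noisy observations
    true_x = 0.95 * true_x + rng.normal(0, 0.5)
    xs.append(true_x)
    obs.append(true_x + rng.normal(0, 1.0))

particles = rng.normal(0, 1, n)
est = []
for y in obs:
    particles = 0.95 * particles + rng.normal(0, 0.5, n)  # predict
    w = np.exp(-0.5 * (y - particles) ** 2)               # correct: likelihood
    w /= w.sum()
    est.append(np.sum(w * particles))                     # posterior mean
    idx = rng.choice(n, size=n, p=w)                      # resample (SIR step)
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(est) - np.array(xs)) ** 2))
print(rmse)   # well below the raw observation noise of 1.0
```

Resampling after every update is the simplest choice; production filters usually resample only when the effective sample size drops, to limit particle impoverishment.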
Monte-Carlo background simulations of present and future detectors in x-ray astronomy
NASA Astrophysics Data System (ADS)
Tenzer, C.; Kendziorra, E.; Santangelo, A.
2008-07-01
Reaching a low-level and well understood internal instrumental background is crucial for the scientific performance of an X-ray detector and, therefore, a main objective of the instrument designers. Monte-Carlo simulations of the physics processes and interactions taking place in a space-based X-ray detector as a result of its orbital environment can be applied to explain the measured background of existing missions. They are thus an excellent tool to predict and optimize the background of future observatories. Weak points of a design and the main sources of the background can be identified and methods to reduce them can be implemented and studied within the simulations. Using the Geant4 Monte-Carlo toolkit, we have created a simulation environment for space-based detectors and we present results of such background simulations for XMM-Newton's EPIC pn-CCD camera. The environment is also currently used to estimate and optimize the background of the future instruments Simbol-X and eRosita.
NASA Astrophysics Data System (ADS)
Feldt, Jonas; Miranda, Sebastião; Pratas, Frederico; Roma, Nuno; Tomás, Pedro; Mata, Ricardo A.
2017-12-01
In this work, we present an optimized perturbative quantum mechanics/molecular mechanics (QM/MM) method for use in Metropolis Monte Carlo simulations. The model adopted is particularly tailored for the simulation of molecular systems in solution but can be readily extended to other applications, such as catalysis in enzymatic environments. The electrostatic coupling between the QM and MM systems is simplified by applying perturbation theory to estimate the energy changes caused by a movement in the MM system. This approximation, together with the effective use of GPU acceleration, leads to a negligible added computational cost for the sampling of the environment. Benchmark calculations are carried out to evaluate the impact of the approximations applied and the overall computational performance.
NASA Astrophysics Data System (ADS)
Urbic, T.; Holovko, M. F.
2011-10-01
The associative version of the Henderson-Abraham-Barker theory is applied to study the Mercedes-Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations satisfactorily reproduce the simulation data. At lower temperatures, the soft mean spherical approximation gives good agreement at low and high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between the water and the hydrophobic surface was also demonstrated and studied.
Monte Carlo simulations of neutron-scattering instruments using McStas
NASA Astrophysics Data System (ADS)
Nielsen, K.; Lefmann, K.
2000-06-01
Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in instrument design has outgrown purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes an extension language that makes it easy to adapt it to the particular requirements of individual instruments, and thus provides a powerful and flexible tool for constructing such simulations. McStas has been successfully applied in such areas as neutron guide design, flux optimization, non-Gaussian resolution functions of triple-axis spectrometers, and time-focusing in time-of-flight instruments.
Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?
Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend
2011-10-11
In the waste recycling Monte Carlo (WRMC) algorithm, (1) multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well posed for implementation in parallel on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.
Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Degroh, Kim K.; Sechkar, Edward A.
1992-01-01
Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) will assist in understanding the mechanisms involved, and will lead to improved reliability in predicting in-space durability of materials based on ground laboratory testing. A computational simulation of atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of assumed mechanistic behavior of atomic oxygen and results of both ground laboratory and LDEF data, a predictive Monte Carlo model was developed which simulates the oxidation processes that occur on polymers with applied protective coatings that have defects. The use of high atomic oxygen fluence-directed ram LDEF results has enabled mechanistic implications to be made by adjusting Monte Carlo modeling assumptions to match observed results based on scanning electron microscopy. Modeling assumptions, implications, and predictions are presented, along with comparison of observed ground laboratory and LDEF results.
A kinetic Monte Carlo approach to diffusion-controlled thermal desorption spectroscopy
NASA Astrophysics Data System (ADS)
Schablitzki, T.; Rogal, J.; Drautz, R.
2017-06-01
Atomistic simulations of thermal desorption spectra for effusion from bulk materials to characterize binding or trapping sites are a challenging task as large system sizes as well as extended time scales are required. Here, we introduce an approach where we combine kinetic Monte Carlo with an analytic approximation of the superbasins within the framework of absorbing Markov chains. We apply our approach to the effusion of hydrogen from BCC iron, where the diffusion within bulk grains is coarse grained using absorbing Markov chains, which provide an exact solution of the dynamics within a superbasin. Our analytic approximation to the superbasin is transferable with respect to grain size and elliptical shapes and can be applied in simulations with constant temperature as well as constant heating rate. The resulting thermal desorption spectra are in close agreement with direct kinetic Monte Carlo simulations, but the calculations are computationally much more efficient. Our approach is thus applicable to much larger system sizes and provides a first step towards an atomistic understanding of the influence of structural features on the position and shape of peaks in thermal desorption spectra. This article is part of the themed issue 'The challenges of hydrogen and metals'.
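The absorbing-Markov-chain machinery underlying the superbasin coarse-graining can be sketched with the textbook fundamental matrix: collect the transition probabilities among transient states into Q (absorbing states dropped), then N = (I - Q)^-1 gives expected visit counts and N·1 the expected number of steps before absorption. The toy walk below stands in for diffusion across a grain; it is not the hydrogen-in-BCC-iron model of the paper.

```python
import numpy as np

# Toy 1D random walk on transient sites {1, 2, 3} with absorbing walls
# at sites 0 and 4 (equal hop probabilities).  Q holds only the
# transient-to-transient transition probabilities.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visit counts
steps = N @ np.ones(3)             # expected steps to absorption per start site
print(steps)                       # expected steps from sites 1,2,3: 3, 4, 3
```

This matches the classical result that a symmetric walk on {0,...,4} started at site i is absorbed after i(4-i) steps on average, which is the kind of exact in-basin solution the kinetic Monte Carlo coarse-graining exploits.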
Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources
NASA Astrophysics Data System (ADS)
Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi
2017-01-01
Probabilistic Power Flow (PPF) is a very useful tool for power system steady-state analysis. However, the correlation among different random power injections (such as wind power) makes PPF difficult to calculate. Monte Carlo simulation (MCS) and analytical methods are the two most common approaches to solving PPF. MCS has high accuracy but is very time-consuming. An analytical method such as the cumulants method (CM) has high computational efficiency, but calculating the cumulants is inconvenient when the wind power output does not follow any typical distribution, especially when correlated wind sources are considered. In this paper, an improved Monte Carlo simulation method (IMCS) is proposed. A joint empirical distribution is applied to model the outputs of different wind sources. This method combines the advantages of both MCS and analytical methods: it not only has high computational efficiency but also provides solutions with sufficient accuracy, making it well suited for on-line analysis.
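A common way to draw correlated wind samples for MCS-based PPF is a Gaussian copula, shown below as a stand-in for the paper's joint empirical distribution: correlate standard normals with a Cholesky factor, then push them through each site's marginal. The correlation value and the Weibull marginal (shape k = 2, scale 8 m/s) are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])                       # assumed site-to-site correlation
L = np.linalg.cholesky(corr)
z = rng.standard_normal((50000, 2)) @ L.T           # correlated N(0,1) draws
u = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))  # copula uniforms in (0,1)
wind = 8.0 * (-np.log1p(-u)) ** 0.5                 # inverse Weibull CDF, k = 2
print(np.corrcoef(z.T)[0, 1])                       # ~0.80 by construction
```

An empirical-copula variant, closer to the paper's method, replaces the analytic Weibull inverse with the empirical quantile function of the measured wind data at each site.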
NASA Astrophysics Data System (ADS)
Bouachraoui, Rachid; El Hachimi, Abdel Ghafour; Ziat, Younes; Bahmad, Lahoucine; Tahiri, Najim
2018-06-01
The electronic and magnetic properties of hexagonal iron(II) sulfide (hexagonal FeS) have been investigated by combining density functional theory (DFT) and Monte Carlo simulations (MCS). This compound consists of a magnetic hexagonal lattice occupied by Fe2+ ions with spin state S = 2. Using an ab initio method, we calculated the exchange coupling JFe-Fe between two magnetic Fe atoms in different directions. Phase transitions, magnetic stability and magnetization were also investigated in the framework of Monte Carlo simulations. Within this method, a phase transition is observed at the Néel temperature TN = 450 K, in good agreement with data reported in the literature. Varying the applied parameters showed how they affect the critical temperature of the system. Moreover, we studied the density of states and found that hexagonal FeS is a promising material for spintronic applications.
Kim, Hyun Suk; Choi, Hong Yeop; Lee, Gyemin; Ye, Sung-Joon; Smith, Martin B; Kim, Geehyun
2018-03-01
The aim of this work is to develop a gamma-ray/neutron dual-particle imager, based on rotational modulation collimators (RMCs) and pulse shape discrimination (PSD)-capable scintillators, for possible applications for radioactivity monitoring as well as nuclear security and safeguards. A Monte Carlo simulation study was performed to design an RMC system for the dual-particle imaging, and modulation patterns were obtained for gamma-ray and neutron sources in various configurations. We applied an image reconstruction algorithm utilizing the maximum-likelihood expectation-maximization method based on the analytical modeling of source-detector configurations, to the Monte Carlo simulation results. Both gamma-ray and neutron source distributions were reconstructed and evaluated in terms of signal-to-noise ratio, showing the viability of developing an RMC-based gamma-ray/neutron dual-particle imager using PSD-capable scintillators.
Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G
2006-01-01
The present work simulated the photon and electron transport in a Theratron 780 (MDS Nordion) 60Co radiotherapy unit using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. To improve computational efficiency with a view to practical radiotherapy treatment planning, this work focuses mainly on the analysis of the dose results and on the computing time required by the different tallies applied in the model to speed up the calculations.
Four receptor-oriented source apportionment models were evaluated by applying them to simulated personal exposure data for select volatile organic compounds (VOCs) that were generated by Monte Carlo sampling from known source contributions and profiles. The exposure sources mo...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zillich, Robert E., E-mail: robert.zillich@jku.at
2015-11-15
We construct an accurate imaginary-time propagator for path integral Monte Carlo simulations of heterogeneous systems consisting of a mixture of atoms and molecules. We combine the pair density approximation, which is highly accurate but feasible only for the isotropic interactions between atoms, with the Takahashi-Imada approximation for general interactions. We present finite-temperature simulation results for the energy and structure of molecule-helium clusters X(4He)20 (X = HCCH and LiH), which show a marked improvement over the Trotter approximation, which has a 2nd-order time-step bias. We show that the 4th-order corrections of the Takahashi-Imada approximation can also be applied perturbatively to a 2nd-order simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or a particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work has been done on obtaining reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies into the solution. An ad hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
Analytic continuation of quantum Monte Carlo data by stochastic analytical inference.
Fuchs, Sebastian; Pruschke, Thomas; Jarrell, Mark
2010-05-01
We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as a by-product the probability distribution of the regularization parameter. Our algorithm thus avoids the usual ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation.
Massively parallel multicanonical simulations
NASA Astrophysics Data System (ADS)
Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard
2018-03-01
Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulating systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with on the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.
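The weight iteration at the heart of the multicanonical method can be sketched on a toy energy ladder (serial here; the paper parallelizes exactly these weight updates across many walkers). The Markov chain is replaced by direct sampling from the visit distribution, and the density of states is an assumed toy ladder, so this only illustrates the update rule W(E) ← W(E)/H(E) that flattens the energy histogram.

```python
import numpy as np

rng = np.random.default_rng(3)
g = 2.0 ** np.arange(11)          # assumed toy density of states, 11 energy bins
W = np.ones(11)                   # multicanonical weights W(E)
for _ in range(8):                # weight iterations
    p = g * W
    p /= p.sum()                  # visit probabilities for an ideal sampler
    H = rng.multinomial(100_000, p).astype(float) + 1e-12
    W /= H                        # suppress over-visited energies
    W /= W.max()                  # renormalize to avoid under/overflow
visits = g * W / (g * W).sum()    # stationary visit distribution with final W
print(visits.min() / visits.max())   # close to 1: the energy histogram is flat
```

With W ∝ 1/g(E) the simulation performs a random walk in energy, which is what lets multicanonical runs cross the suppressed mixed-phase region at a discontinuous transition.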
MCMEG: Simulations of both PDD and TPR for 6 MV LINAC photon beam using different MC codes
NASA Astrophysics Data System (ADS)
Fonseca, T. C. F.; Mendes, B. M.; Lacerda, M. A. S.; Silva, L. A. C.; Paixão, L.; Bastos, F. M.; Ramirez, J. V.; Junior, J. P. R.
2017-11-01
The Monte Carlo Modelling Expert Group (MCMEG) is an expert network specializing in Monte Carlo radiation transport and in modelling and simulation applied to radiation protection and dosimetry research. For its first inter-comparison task the group launched an exercise to model and simulate a 6 MV LINAC photon beam using the Monte Carlo codes available within their laboratories and to validate the simulated results by comparing them with experimental measurements carried out at the National Cancer Institute (INCA) in Rio de Janeiro, Brazil. The experimental measurements were performed using an ionization chamber with calibration traceable to a Secondary Standard Dosimetry Laboratory (SSDL). The detector was immersed in a water phantom at different depths and was irradiated with a radiation field size of 10×10 cm². This exposure setup was used to determine the dosimetric parameters Percentage Depth Dose (PDD) and Tissue Phantom Ratio (TPR). The validation process compares the MC-calculated results with the experimentally measured PDD20,10 and TPR20,10. Simulations were performed reproducing the experimental TPR20,10 quality index, and they provide a satisfactory description of both the PDD curve and the transverse profiles at the two measured depths. This paper reports in detail the modelling process using the MCNPx, MCNP6, EGSnrc and Penelope Monte Carlo codes, the source and tally descriptions, the validation processes and the results.
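For reference, the two beam-quality indices compared here are linked by a widely used empirical fit (adopted in the IAEA TRS-398 code of practice). The sketch below applies it to illustrative depth-dose readings, not to the INCA measurements.

```python
# Empirical relation between the two beam-quality indices (IAEA TRS-398):
#   TPR20,10 ≈ 1.2661 * PDD20,10 - 0.0595,
# where PDD20,10 = PDD(20 cm) / PDD(10 cm) for a 10x10 cm2 field.
# The depth-dose readings below are illustrative values.
def tpr_20_10(pdd_at_20, pdd_at_10):
    return 1.2661 * (pdd_at_20 / pdd_at_10) - 0.0595

print(round(tpr_20_10(38.9, 66.9), 3))   # 0.677, typical of a 6 MV beam
```

This conversion is handy when a planning-system beam model reports only PDD data but the dosimetry protocol is specified in terms of TPR20,10.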
Can we approach the gas-liquid critical point using slab simulations of two coexisting phases?
Goujon, Florent; Ghoufi, Aziz; Malfreyt, Patrice; Tildesley, Dominic J
2016-09-28
In this paper, we demonstrate that it is possible to approach the gas-liquid critical point of the Lennard-Jones fluid by performing simulations in a slab geometry using a cut-off potential. In the slab simulation geometry, it is essential to apply an accurate tail correction to the potential energy during the course of the simulation in order to study the properties of states close to the critical point. Using the Janeček slab-based method developed for two-phase Monte Carlo simulations [J. Janeček, J. Chem. Phys. 131, 6264 (2006)], the coexisting densities and surface tension in the critical region are reported as a function of the cutoff distance in the intermolecular potential. The results obtained using slab simulations are compared with those obtained using grand canonical Monte Carlo simulations of isotropic systems combined with finite-size scaling techniques. There is good agreement between these two approaches. The two-phase simulations can be used to approach the critical point for temperatures up to 0.97 T*_c (T* = 1.26). The critical-point exponents describing the dependence of the density, surface tension, and interfacial thickness on the temperature are calculated near the critical point.
Song, Sangha; Elgezua, Inko; Kobayashi, Yo; Fujie, Masakatsu G
2013-01-01
In biomedical optics, Monte Carlo (MC) simulation is commonly used to model light diffusion in tissue. However, most previous studies did not consider a radial-beam LED as the light source. We therefore characterized a radial-beam LED and applied these characteristics to an MC simulation as the light source. In this paper, we consider three characteristics of a radial-beam LED. The first is the initial launch area of photons. The second is the incident angle of a photon at the initial photon-launching area. The third is the refraction effect according to the contact area between the LED and a turbid medium. To verify the MC simulation, we compared simulation and experimental results. The average correlation coefficient between simulation and experimental results is 0.9954. This study demonstrates an effective method for simulating light diffusion in tissue with the characteristics of a radial-beam LED based on MC simulation.
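The three source characteristics listed above can be sketched as a photon-launch routine: a position on a finite disc, a polar angle inside the emission cone, and refraction at the LED-medium contact via Snell's law. All numerical values (disc radius, cone half-angle, refractive indices) and the uniform angular sampling are illustrative assumptions, not the paper's measured LED profile.

```python
import math
import random

def launch_photon(disc_radius, half_angle_deg, n_led, n_medium, rng):
    """Sample one photon: position on the launch disc, polar angle within
    the emission cone, then refraction into the medium via Snell's law."""
    # (1) uniform position on the circular launch area
    r = disc_radius * math.sqrt(rng.random())
    phi = 2.0 * math.pi * rng.random()
    x, y = r * math.cos(phi), r * math.sin(phi)
    # (2) incident polar angle inside the LED's emission cone
    theta_i = math.radians(half_angle_deg) * rng.random()
    # (3) refraction at the LED-medium contact (Snell's law)
    s = (n_led / n_medium) * math.sin(theta_i)
    if s >= 1.0:
        return None                  # total internal reflection: relaunch
    return x, y, math.asin(s)

rng = random.Random(3)
photons = [launch_photon(1.0, 60.0, 1.5, 1.4, rng) for _ in range(1000)]
```

Each returned tuple (launch position and refracted polar angle) would seed the usual MC photon-transport loop in the turbid medium.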
NASA Technical Reports Server (NTRS)
Stern, Boris E.; Svensson, Roland; Begelman, Mitchell C.; Sikora, Marek
1995-01-01
High-energy radiation processes in compact cosmic objects are often expected to have a strongly non-linear behavior. Such behavior is shown, for example, by electron-positron pair cascades and the time evolution of relativistic proton distributions in dense radiation fields. Three independent techniques have been developed to simulate these non-linear problems: the kinetic equation approach; the phase-space density (PSD) Monte Carlo method; and the large-particle (LP) Monte Carlo method. In this paper, we present the latest version of the LP method and compare it with the other methods. The efficiency of the method in treating geometrically complex problems is illustrated by showing results of simulations of 1D, 2D and 3D systems. The method is shown to be powerful enough to treat non-spherical geometries, including such effects as bulk motion of the background plasma, reflection of radiation from cold matter, and anisotropic distributions of radiating particles. It can therefore be applied to simulate high-energy processes in such astrophysical systems as accretion discs with coronae, relativistic jets, pulsar magnetospheres and gamma-ray bursts.
NASA Astrophysics Data System (ADS)
Lee, Taewoong; Lee, Hyounggun; Lee, Wonho
2015-10-01
This study evaluated the use of Compton imaging technology to monitor prompt gamma rays emitted by 10B in boron neutron capture therapy (BNCT) applied to a computerized human phantom. The Monte Carlo method, including particle-tracking techniques, was used for the simulation. The distribution of prompt gamma rays emitted by the phantom during irradiation with neutron beams is closely associated with the distribution of the boron in the phantom. The maximum likelihood expectation maximization (MLEM) method was applied to the information obtained from the detected prompt gamma rays to reconstruct the distribution of the tumor, including the boron uptake regions (BURs). The reconstructed Compton images of the prompt gamma rays were combined with the cross-sectional images of the human phantom. Quantitative analysis of the intensity curves showed that all combined images matched the predetermined conditions of the simulation. The tumors including the BURs were distinguishable if they were more than 2 cm apart.
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
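The likelihood-ratio idea above can be shown on a toy Gillespie simulation: the derivative of an expectation with respect to a rate constant is estimated by multiplying the observable by the score, the derivative of the trajectory log-likelihood. The kinetics below (a single A -> B reaction) are an illustrative stand-in, not the paper's Pt(111) surface chemistry.

```python
import math
import random

def ssa_with_score(k, n0, t_final, rng):
    """One stochastic trajectory of A -> B; returns (molecules of B,
    score d(log-likelihood)/dk for this trajectory)."""
    t, n_a, events, occupancy = 0.0, n0, 0, 0.0
    while n_a > 0:
        propensity = k * n_a
        tau = -math.log(rng.random()) / propensity
        if t + tau > t_final:
            occupancy += n_a * (t_final - t)   # censored final interval
            break
        t += tau
        occupancy += n_a * tau                 # integral of n_A dt
        n_a -= 1                               # one A -> B firing
        events += 1
    score = events / k - occupancy             # d log L / dk
    return n0 - n_a, score

def lr_sensitivity(k, n0, t_final, samples, seed=0):
    """Mean observable and its likelihood-ratio derivative estimate."""
    rng = random.Random(seed)
    obs = grad = 0.0
    for _ in range(samples):
        n_b, score = ssa_with_score(k, n0, t_final, rng)
        obs += n_b
        grad += n_b * score
    return obs / samples, grad / samples

# Analytic reference: E[n_B] = n0*(1 - exp(-k*T)), d/dk = n0*T*exp(-k*T)
mean_nb, d_mean_dk = lr_sensitivity(k=0.5, n0=50, t_final=2.0, samples=4000)
```

The same score accumulation is what the rate-rescaling machinery in the paper makes practical for stiff networks, since the estimator requires only bookkeeping along the ordinary KMC trajectory.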
Li, Junli; Li, Chunyan; Qiu, Rui; Yan, Congchong; Xie, Wenzhang; Wu, Zhen; Zeng, Zhi; Tung, Chuanjong
2015-09-01
The method of Monte Carlo simulation is a powerful tool for investigating the details of radiation-induced biological damage at the molecular level. In this paper, a Monte Carlo code called NASIC (Nanodosimetry Monte Carlo Simulation Code) was developed. It includes a physical module, a pre-chemical module, a chemical module, a geometric module and a DNA damage module. The physical module can simulate physical tracks of low-energy electrons in liquid water event by event. More than one set of inelastic cross sections was calculated by applying the dielectric function method of Emfietzoglou's optical-data treatments, with different optical data sets and dispersion models. In the pre-chemical module, the ionised and excited water molecules undergo dissociation processes. In the chemical module, the produced radiolytic chemical species diffuse and react. In the geometric module, an atomic model of 46 chromatin fibres in a spherical nucleus of a human lymphocyte was established. In the DNA damage module, the direct damage induced by the energy depositions of the electrons and the indirect damage induced by the radiolytic chemical species were calculated. The parameters were adjusted so that the simulation results agreed with the experimental results. In this paper, the influence of the inelastic cross sections and of the vibrational excitation reaction on the parameters and on the DNA strand-break yields was studied. Further work on NASIC is underway.
Patra, Chandra N
2014-11-14
A systematic investigation of spherical electric double layers with electrolytes having size as well as charge asymmetry is carried out using density functional theory and Monte Carlo simulations. The system is considered within the primitive model, where the macroion is a structureless hard spherical colloid, the small ions are charged hard spheres of different sizes, and the solvent is represented as a dielectric continuum. The present theory approximates the hard-sphere part of the one-particle correlation function using a weighted density approach, whereas a perturbation expansion around the uniform fluid is applied to evaluate the ionic contribution. The theory is in quantitative agreement with Monte Carlo simulation for the density and the mean electrostatic potential profiles over a wide range of electrolyte concentrations, surface charge densities, valences of small ions, and macroion sizes. The theory provides distinctive evidence of charge and size correlations within the electrode-electrolyte interface in spherical geometry.
Kinetic Monte Carlo Method for Rule-based Modeling of Biochemical Networks
Yang, Jin; Monine, Michael I.; Faeder, James R.; Hlavacek, William S.
2009-01-01
We present a kinetic Monte Carlo method for simulating chemical transformations specified by reaction rules, which can be viewed as generators of chemical reactions, or equivalently, definitions of reaction classes. A rule identifies the molecular components involved in a transformation, how these components change, conditions that affect whether a transformation occurs, and a rate law. The computational cost of the method, unlike conventional simulation approaches, is independent of the number of possible reactions, which need not be specified in advance or explicitly generated in a simulation. To demonstrate the method, we apply it to study the kinetics of multivalent ligand-receptor interactions. We expect the method will be useful for studying cellular signaling systems and other physical systems involving aggregation phenomena. PMID:18851068
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Verification
NASA Technical Reports Server (NTRS)
Hanson, John M.; Beard, Bernard B.
2010-01-01
This paper is focused on applying Monte Carlo simulation to probabilistic launch vehicle design and requirements verification. The approaches developed in this paper can be applied to other complex design efforts as well. Typically the verification must show that requirement "x" is met for at least "y" % of cases, with, say, 10% consumer risk or 90% confidence. Two particular aspects of making these runs for requirements verification will be explored in this paper. First, there are several types of uncertainties that should be handled in different ways, depending on when they become known (or not). The paper describes how to handle different types of uncertainties and how to develop vehicle models that can be used to examine their characteristics. This includes items that are not known exactly during the design phase but that will be known for each assembled vehicle (can be used to determine the payload capability and overall behavior of that vehicle), other items that become known before or on flight day (can be used for flight day trajectory design and go/no go decision), and items that remain unknown on flight day. Second, this paper explains a method (order statistics) for determining whether certain probabilistic requirements are met or not and enables the user to determine how many Monte Carlo samples are required. Order statistics is not new, but may not be known in general to the GN&C community. The methods also apply to determining the design values of parameters of interest in driving the vehicle design. The paper briefly discusses when it is desirable to fit a distribution to the experimental Monte Carlo results rather than using order statistics.
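The order-statistics sizing argument described above reduces to a short calculation: if all N Monte Carlo samples satisfy the requirement, the claim "met for at least a fraction p of cases" holds at confidence c when p**N <= 1 - c, and a binomial tail handles the case where a few failures are tolerated. The numbers below are the classic textbook values, not the paper's vehicle-specific results.

```python
import math

def samples_zero_failures(p, confidence):
    """Smallest N such that N successes out of N demonstrate success
    probability >= p at the given confidence: p**N <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(p))

def samples_allowing_failures(p, confidence, failures):
    """Smallest N such that at most `failures` failed samples still
    demonstrate success probability >= p at the given confidence
    (binomial tail test)."""
    n = failures + 1
    while True:
        tail = sum(math.comb(n, i) * p ** (n - i) * (1.0 - p) ** i
                   for i in range(failures + 1))
        if tail <= 1.0 - confidence:
            return n
        n += 1

n_90_90 = samples_zero_failures(0.90, 0.90)   # the familiar "22 runs" rule
```

Allowing a single failure at the same 90%/90% level raises the required sample count to 38, which is why failure tolerance is a key design decision when budgeting Monte Carlo runs.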
A study of two statistical methods as applied to shuttle solid rocket booster expenditures
NASA Technical Reports Server (NTRS)
Perlmutter, M.; Huang, Y.; Graves, M.
1974-01-01
The state probability technique and the Monte Carlo technique are applied to finding shuttle solid rocket booster expenditure statistics. For a given attrition rate per launch, the probable number of boosters needed for a given mission of 440 launches is calculated. Several cases are considered, including the elimination of the booster after a maximum of 20 consecutive launches. Also considered is the case where the booster is composed of replaceable components with independent attrition rates. A simple cost analysis is carried out to indicate the number of boosters to build initially, depending on booster costs. Two statistical methods were applied in the analysis: (1) state probability method which consists of defining an appropriate state space for the outcome of the random trials, and (2) model simulation method or the Monte Carlo technique. It was found that the model simulation method was easier to formulate while the state probability method required less computing time and was more accurate.
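The Monte Carlo side of the comparison above is easy to sketch: simulate the mission launch by launch, replacing a booster when it is lost to attrition or retired at the consecutive-use limit. The attrition rate and trial count below are illustrative assumptions, not the paper's figures.

```python
import random

def boosters_needed(q, launches, max_reuses, rng):
    """Count boosters consumed over a mission; a booster is replaced when
    it is lost (probability q per launch) or retired after max_reuses."""
    boosters, uses = 1, 0
    for _ in range(launches):
        uses += 1
        if rng.random() < q or uses >= max_reuses:
            boosters += 1            # lost or retired: bring up a new booster
            uses = 0
    return boosters

def mean_boosters(q, launches=440, max_reuses=20, trials=2000, seed=0):
    rng = random.Random(seed)
    total = sum(boosters_needed(q, launches, max_reuses, rng)
                for _ in range(trials))
    return total / trials

avg = mean_boosters(q=0.05)          # illustrative 5% attrition per launch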
Using hybrid implicit Monte Carlo diffusion to simulate gray radiation hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Gentile, Nick
This work describes how to couple a hybrid Implicit Monte Carlo Diffusion (HIMCD) method with a Lagrangian hydrodynamics code to evaluate the coupled radiation hydrodynamics equations. This HIMCD method dynamically applies Implicit Monte Carlo Diffusion (IMD) [1] to regions of a problem that are opaque and diffusive while applying standard Implicit Monte Carlo (IMC) [2] to regions where the diffusion approximation is invalid. We show that this method significantly improves the computational efficiency as compared to a standard IMC/Hydrodynamics solver, when optically thick diffusive material is present, while maintaining accuracy. Two test cases are used to demonstrate the accuracy and performance of HIMCD as compared to IMC and IMD. The first is the Lowrie semi-analytic diffusive shock [3]. The second is a simple test case where the source radiation streams through optically thin material and heats a thick diffusive region of material, causing it to rapidly expand. We found that HIMCD proves to be accurate, robust, and computationally efficient for these test problems.
Computer simulation studies of the growth of strained layers by molecular-beam epitaxy
NASA Astrophysics Data System (ADS)
Faux, D. A.; Gaynor, G.; Carson, C. L.; Hall, C. K.; Bernholc, J.
1990-08-01
Two new types of discrete-space Monte Carlo computer simulation are presented for the modeling of the early stages of strained-layer growth by molecular-beam epitaxy. The simulations are more economical on computer resources than continuous-space Monte Carlo or molecular dynamics. Each model is applied to the study of growth onto a substrate in two dimensions with use of Lennard-Jones interatomic potentials. Up to seven layers are deposited for a variety of lattice mismatches, temperatures, and growth rates. Both simulations give similar results. At small lattice mismatches (<~4%) the growth is in registry with the substrate, while at high mismatches (>~6%) the growth is incommensurate with the substrate. At intermediate mismatches, a transition from registered to incommensurate growth is observed which commences at the top of the crystal and propagates down to the first layer. Faster growth rates are seen to inhibit this transition. The growth mode is van der Merwe (layer-by-layer) at 2% lattice mismatch, but at larger mismatches Volmer-Weber (island) growth is preferred. The Monte Carlo simulations are assessed in the light of these results, and the ease with which they can be extended to three dimensions and to more sophisticated potentials is discussed.
State-to-state models of vibrational relaxation in Direct Simulation Monte Carlo (DSMC)
NASA Astrophysics Data System (ADS)
Oblapenko, G. P.; Kashkovsky, A. V.; Bondar, Ye A.
2017-02-01
In the present work, the application of state-to-state models of vibrational energy exchange to the Direct Simulation Monte Carlo (DSMC) method is considered. A state-to-state model for VT transitions of vibrational energy in nitrogen and oxygen, based on the application of the inverse Laplace transform to results of quasiclassical trajectory calculations (QCT) of vibrational energy transitions, along with the Forced Harmonic Oscillator (FHO) state-to-state model, is implemented in a DSMC code and applied to flows around blunt bodies. Comparisons are made with the widely used Larsen-Borgnakke model and the influence of multi-quantum VT transitions is assessed.
NASA Astrophysics Data System (ADS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2017-07-01
Standard computational methods used to account for the Pauli exclusion principle in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
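The core Pauli-blocking mechanism referred to above is a rejection step: a proposed final state is accepted with probability 1 - f(E'), where f is the occupation of the final state. The sketch below shows only this standard rejection idea on a toy ensemble; the paper's improved e-e algorithm, the self-consistent occupation estimate, and the graphene band structure are not reproduced, and the Fermi level and temperature are illustrative assumptions.

```python
import math
import random

def pauli_blocked_step(energies, propose, occupation, rng):
    """Attempt one scattering event for a random particle; return True
    if accepted, False if Pauli-blocked by the final-state occupation."""
    i = rng.randrange(len(energies))
    e_new = propose(energies[i], rng)
    if rng.random() < 1.0 - min(1.0, occupation(e_new)):
        energies[i] = e_new          # scattering accepted
        return True
    return False                     # blocked: particle keeps its state

def fermi(e, mu=10.0, kt=0.5):
    """Fermi-Dirac occupation with an assumed Fermi level (arbitrary units)."""
    return 1.0 / (1.0 + math.exp((e - mu) / kt))

rng = random.Random(0)
ensemble = [0.5 * i for i in range(100)]     # energies 0 .. 49.5
accepted = sum(pauli_blocked_step(ensemble,
                                  lambda e, r: e + r.random() - 0.5,
                                  fermi, rng)
               for _ in range(1000))
```

States deep below the Fermi level are almost always blocked, which is exactly the mechanism that keeps the simulated distribution function below unity in the degenerate regime.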
Bahreyni Toossi, Mohammad Taghi; Momennezhad, Mehdi; Hashemi, Seyed Mohammad
2012-01-01
Aim: Exact knowledge of dosimetric parameters is an essential prerequisite for effective treatment in radiotherapy. In order to fulfill this consideration, different techniques have been used, one of which is Monte Carlo simulation. Materials and methods: This study used the MCNP-4C code to simulate electron beams from the Neptun 10 PC medical linear accelerator. Output factors for 6, 8 and 10 MeV electrons applied to eleven different conventional fields were both measured and calculated. Results: The measurements were carried out with a Wellhofler-Scanditronix dose scanning system. Our findings revealed that the output factors acquired by MCNP-4C simulation and the corresponding values obtained by direct measurement are in very good agreement. Conclusion: In general, the very good consistency of simulated and measured results is good proof that the goal of this work has been accomplished. PMID:24377010
Numazawa, Satoshi; Smith, Roger
2011-10-01
Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.
Dosimetric verification of IMRT treatment planning using Monte Carlo simulations for prostate cancer
NASA Astrophysics Data System (ADS)
Yang, J.; Li, J.; Chen, L.; Price, R.; McNeeley, S.; Qin, L.; Wang, L.; Xiong, W.; Ma, C.-M.
2005-03-01
The purpose of this work is to investigate the accuracy of dose calculation of a commercial treatment planning system (Corvus, Normos Corp., Sewickley, PA). In this study, 30 prostate intensity-modulated radiotherapy (IMRT) treatment plans from the commercial treatment planning system were recalculated using the Monte Carlo method. Dose-volume histograms and isodose distributions were compared. Other quantities such as minimum dose to the target (Dmin), the dose received by 98% of the target volume (D98), dose at the isocentre (Diso), mean target dose (Dmean) and the maximum critical structure dose (Dmax) were also evaluated based on our clinical criteria. For coplanar plans, the dose differences between Monte Carlo and the commercial treatment planning system with and without heterogeneity correction were not significant. The differences in the isocentre dose between the commercial treatment planning system and Monte Carlo simulations were less than 3% for all coplanar cases. The differences on D98 were less than 2% on average. The differences in the mean dose to the target between the commercial system and Monte Carlo results were within 3%. The differences in the maximum bladder dose were within 3% for most cases. The maximum dose differences for the rectum were less than 4% for all the cases. For non-coplanar plans, the difference in the minimum target dose between the treatment planning system and Monte Carlo calculations was up to 9% if the heterogeneity correction was not applied in Corvus. This was caused by the excessive attenuation of the non-coplanar beams by the femurs. When the heterogeneity correction was applied in Corvus, the differences were reduced significantly. These results suggest that heterogeneity correction should be used in dose calculation for prostate cancer with non-coplanar beam arrangements.
Computation of Neutral Gas Flow from a Hall Thruster into a Vacuum Chamber
2002-10-18
To quantify these effects, the direct simulation Monte Carlo method is applied to model a cold flow of xenon gas expanding from a Hall thruster into a vacuum chamber. The simulations are performed for the P5 Hall thruster operating in a large vacuum tank at the University of Michigan. Comparison
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez, Liliana; Slutzky, Claudia; Ferron, Julio
2005-06-15
The explosive features of highly cohesive materials growing over soft metal substrates can be suppressed, and layer-by-layer growth achieved, by controlling a slight alloying at the interface. By means of Monte Carlo and molecular dynamics simulations applied to the Co/Cu(111) system, we have demonstrated the stability reached by Co growing over an alloyed CoCu surface, and the feasibility of using hyperthermal ions to tailor this alloying.
Evaluation of effective dose with chest digital tomosynthesis system using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Kim, Dohyeon; Jo, Byungdu; Lee, Youngjin; Park, Su-Jin; Lee, Dong-Hoon; Kim, Hee-Joung
2015-03-01
Chest digital tomosynthesis (CDT) systems have recently been introduced and studied. This system offers the potential to be a substantial improvement over conventional chest radiography for lung nodule detection and reduces the radiation dose by using limited angles. The PC-based Monte Carlo program (PCXMC) simulation toolkit (STUK, Helsinki, Finland) is widely used to evaluate radiation dose in CDT systems. However, this toolkit has two significant limitations: it cannot describe a model for every individual patient, and it does not describe an accurate X-ray beam spectrum. In contrast, the Geant4 Application for Tomographic Emission (GATE) simulation can describe phantoms of various sizes for individual patients and a proper X-ray spectrum. However, few studies have been conducted to evaluate effective dose in CDT systems with the Monte Carlo simulation toolkit GATE. The purpose of this study was to evaluate the effective dose in a virtual infant chest phantom for the posterior-anterior (PA) view in a CDT system using GATE simulation. We obtained the effective dose at different tube angles by applying the dose actor function in GATE, which is commonly used for medical radiation dosimetry. The results indicated that GATE simulation was useful for estimating the distribution of absorbed dose. Consequently, we obtained an acceptable distribution of effective dose at each projection. These results indicate that GATE simulation can be an alternative method for calculating effective dose in CDT applications.
NASA Astrophysics Data System (ADS)
Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.
1997-02-01
The method of buckling evaluation, realized in the Monte Carlo code MCS, is described. This method was applied to the calculational analysis of the well-known light-water experiments TRX-1 and TRX-2. The analysis of this comparison shows that there is no coincidence between Monte Carlo calculations obtained in different ways: the MCS calculations with given experimental bucklings; the MCS calculations with bucklings evaluated on the basis of full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where this difference corresponded to a 0.5 percent increase of the Keff value.
Schaefer, C; Jansen, A P J
2013-02-07
We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations that is applicable to any surface catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature.
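The mass-balance correction described above amounts to a constrained least-squares projection: the noisy KMC rate estimates are moved the minimum distance needed to satisfy the balance constraints C r = 0, via r' = r - C^T (C C^T)^(-1) C r. The sketch below uses pure-Python linear algebra for small systems and an illustrative two-reaction network (steady state of one surface intermediate), not the ZGB or NO-oxidation mechanisms of the paper.

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def correct_rates(C, r):
    """Least-squares projection of rates r onto the subspace C r = 0."""
    m, n = len(C), len(r)
    CCt = [[sum(C[i][k] * C[j][k] for k in range(n)) for j in range(m)]
           for i in range(m)]
    Cr = [sum(C[i][k] * r[k] for k in range(n)) for i in range(m)]
    lam = solve(CCt, Cr)
    return [r[k] - sum(C[i][k] * lam[i] for i in range(m)) for k in range(n)]

# Steady state of a surface intermediate A*: production rate r1 must equal
# consumption rate r2, so the constraint row is [1, -1].
C = [[1.0, -1.0]]
r_corrected = correct_rates(C, [0.52, 0.47])   # noisy KMC estimates
```

Here the correction splits the imbalance evenly, giving both corrected rates the value 0.495; with a full stoichiometry matrix the same projection removes the unphysical accumulation of mass in the reactor model.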
Zhang, Weihong; Howell, Steven C; Wright, David W; Heindel, Andrew; Qiu, Xiangyun; Chen, Jianhan; Curtis, Joseph E
2017-05-01
We describe a general method to use Monte Carlo simulation followed by torsion-angle molecular dynamics simulations to create ensembles of structures to model a wide variety of soft-matter biological systems. Our particular emphasis is focused on modeling low-resolution small-angle scattering and reflectivity structural data. We provide examples of this method applied to HIV-1 Gag protein and derived fragment proteins, TraI protein, linear B-DNA, a nucleosome core particle, and a glycosylated monoclonal antibody. This procedure will enable a large community of researchers to model low-resolution experimental data with greater accuracy by using robust physics based simulation and sampling methods which are a significant improvement over traditional methods used to interpret such data.
NASA Astrophysics Data System (ADS)
Mattei, S.; Nishida, K.; Onai, M.; Lettry, J.; Tran, M. Q.; Hatayama, A.
2017-12-01
We present a fully-implicit electromagnetic Particle-In-Cell Monte Carlo collision code, called NINJA, written for the simulation of inductively coupled plasmas. NINJA employs a kinetic enslaved Jacobian-Free Newton Krylov method to solve self-consistently the interaction between the electromagnetic field generated by the radio-frequency coil and the plasma response. The simulated plasma includes a kinetic description of charged and neutral species as well as the collision processes between them. The algorithm allows simulations with cell sizes much larger than the Debye length and time steps in excess of the Courant-Friedrichs-Lewy condition whilst preserving the conservation of the total energy. The code is applied to the simulation of the plasma discharge of the Linac4 H- ion source at CERN. Simulation results of plasma density, temperature and EEDF are discussed and compared with optical emission spectroscopy measurements. A systematic study of the energy conservation as a function of the numerical parameters is presented.
NASA Astrophysics Data System (ADS)
Almudallal, Ahmad M.; Mercer, J. I.; Whitehead, J. P.; Plumer, M. L.; van Ek, J.
2018-05-01
A hybrid Landau Lifshitz Gilbert/kinetic Monte Carlo algorithm is used to simulate experimental magnetic hysteresis loops for dual layer exchange coupled composite media. The calculation of the rate coefficients and difficulties arising from low energy barriers, a fundamental problem of the kinetic Monte Carlo method, are discussed and the methodology used to treat them in the present work is described. The results from simulations are compared with experimental vibrating sample magnetometer measurements on dual layer CoPtCrB/CoPtCrSiO media and a quantitative relationship between the thickness of the exchange control layer separating the layers and the effective exchange constant between the layers is obtained. Estimates of the energy barriers separating magnetically reversed states of the individual grains in zero applied field as well as the saturation field at sweep rates relevant to the bit write speeds in magnetic recording are also presented. The significance of this comparison between simulations and experiment and the estimates of the material parameters obtained from it are discussed in relation to optimizing the performance of magnetic storage media.
NASA Technical Reports Server (NTRS)
Gayda, J.; Srolovitz, D. J.
1989-01-01
This paper presents a specialized microstructural lattice model, MCFET (Monte Carlo finite element technique), which simulates microstructural evolution in materials in which strain energy has an important role in determining morphology. The model is capable of accounting for externally applied stress, surface tension, misfit, elastic inhomogeneity, elastic anisotropy, and arbitrary temperatures. The MCFET analysis was found to compare well with the results of analytical calculations of the equilibrium morphologies of isolated particles in an infinite matrix.
Integrated Multiscale Modeling of Molecular Computing Devices. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tim Schulze
2012-11-01
The general theme of this research has been to expand the capabilities of a simulation technique, Kinetic Monte Carlo (KMC) and apply it to study self-assembled nano-structures on epitaxial thin films. KMC simulates thin film growth and evolution by replacing the detailed dynamics of the system's evolution, which might otherwise be studied using molecular dynamics, with an appropriate stochastic process.
NASA Astrophysics Data System (ADS)
Tylko, Grzegorz; Dubchak, Sergyi; Banach, Zuzanna; Turnau, Katarzyna
2010-04-01
Monte Carlo simulations of gelatin matrices with known elemental concentrations confirmed the suitability of protein standards for quantifying elements of cellulose material in x-ray microanalysis. However, gelatin standards and cellulose plant cell walls differ in structure, which influences x-ray generation and emission in both specimens. The goal of the project was to establish the influence of gelatin structure on x-ray generation and its usefulness for calculating elemental concentrations in plant cell walls of different widths. Roots of Medicago truncatula as well as gelatin standards with known elemental composition were prepared according to freeze-drying protocols. The Thermanox polymer was chosen to establish background formation for flat and compact organic materials. All analyses were performed with the scanning electron microscope operated at 10 keV and a probe current of 350 pA. The Monte Carlo code Casino was applied to calculate the intensities of the generated and the emitted x-rays from biological matrices of different widths. No topography effects of the gelatin structure were visible when the raster mode of electron impact was applied to the specimen. Monte Carlo simulations of gelatin of different widths revealed that a significant decrease of the generated x-ray intensity appears at a specimen width of around 3.5 μm. However, an increase of the emitted low-energy x-ray intensities (Na, Mg) was noted at the 3.5 μm size, with constant emission of higher-energy x-rays (Cl, K) down to 2.5 μm width. This determines the minimal size of plant specimen useful for comparison to a bulk gelatin standard when quantitative analysis is performed for biologically important elements.
Hunt, J G; Watchman, C J; Bolch, W E
2007-01-01
Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D microCT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo--VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternate and uniform sampling of their cumulative density functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques.
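The chord-based technique described above rests on inverse-transform sampling of tabulated chord-length cumulative distributions. A minimal sketch, assuming a hypothetical exponential-like chord-length distribution for marrow cavities (the real CDFs come from microCT measurements of trabeculae and cavities):

```python
import numpy as np

def sample_chords(lengths, cdf, n, rng):
    """Inverse-transform sampling of chord lengths from a tabulated CDF.

    lengths : grid of chord lengths (micrometres), ascending
    cdf     : cumulative distribution on that grid, rising from 0 to 1
    """
    u = rng.random(n)
    return np.interp(u, cdf, lengths)  # invert the CDF by interpolation

rng = np.random.default_rng(0)
# Hypothetical truncated-exponential chord-length distribution
grid = np.linspace(0.0, 2000.0, 2001)   # micrometres
mean_chord = 400.0
cdf = 1.0 - np.exp(-grid / mean_chord)
cdf /= cdf[-1]                          # normalise the truncated CDF

chords = sample_chords(grid, cdf, 100_000, rng)
print(round(chords.mean(), 1))
```

Alternating samples from the trabecular and cavity distributions then yields the path segments over which alpha energy deposition is scored.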
Ogawara, R; Ishikawa, M
2016-07-01
The anode pulse of a photomultiplier tube (PMT) coupled with a scintillator is used for pulse shape discrimination (PSD) analysis. We have developed a novel emulation technique for the PMT anode pulse based on optical photon transport and a PMT response function. The photon transport was calculated using the Geant4 Monte Carlo code, and the response function was obtained with a BC408 organic scintillator. The obtained percentage RMS values of the difference between the measured and simulated pulses, using GSO:Ce (0.4, 1.0, 1.5 mol%), LaBr3:Ce, and BGO scintillators with suitable scintillation properties, were 2.41%, 2.58%, 2.16%, 2.01%, and 3.32%, respectively. The proposed technique demonstrates high reproducibility of the measured pulse and can be applied to simulation studies of various radiation measurements.
Monte-Carlo simulation of defect-cluster nucleation in metals during irradiation
NASA Astrophysics Data System (ADS)
Nakasuji, Toshiki; Morishita, Kazunori; Ruan, Xiaoyong
2017-02-01
A multiscale modeling approach was applied to investigate the nucleation process of CRPs (copper-rich precipitates, i.e., copper-vacancy clusters) in α-Fe containing 1 at.% Cu during irradiation. Monte-Carlo simulations were performed to investigate the nucleation process, with rate-theory equation analysis to evaluate the concentration of displacement defects, along with the molecular dynamics technique to determine CRP thermal stabilities in advance. Our MC simulations showed that there is a long incubation period at first, followed by rapid growth of CRPs. The incubation period depends on irradiation conditions such as the damage rate and temperature. The composition of CRPs during nucleation varies with time: the copper content of CRPs is relatively high at first and then decreases as the precipitate size increases. A widely-accepted model of the CRP nucleation process is finally proposed.
Sechopoulos, Ioannis; Ali, Elsayed S M; Badal, Andreu; Badano, Aldo; Boone, John M; Kyprianou, Iacovos S; Mainegra-Hing, Ernesto; McMillan, Kyle L; McNitt-Gray, Michael F; Rogers, D W O; Samei, Ehsan; Turner, Adam C
2015-10-01
The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to their flexibility and their ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but, typically, such validation involves either comparison of simulation results to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated due to nuances of experimental conditions and uncertainty, while the latter is challenging due to typical graphical presentation and lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations in general involves a steep learning curve. It is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases.
This work provides an investigator the necessary information to benchmark his/her Monte Carlo simulation software against the reference cases included here before performing his/her own novel research. In addition, an investigator entering the field of Monte Carlo simulations can use these descriptions and results as a self-teaching tool to ensure that he/she is able to perform a specific simulation correctly. Finally, educators can assign these cases as learning projects as part of course objectives or training programs.
Toltz, Allison; Hoesl, Michaela; Schuemann, Jan; Seuntjens, Jan; Lu, Hsiao-Ming; Paganetti, Harald
2017-11-01
Our group previously introduced an in vivo proton range verification methodology in which a silicon diode array system is used to correlate the dose rate profile per range modulation wheel cycle of the detector signal to the water-equivalent path length (WEPL) for passively scattered proton beam delivery. The implementation of this system requires a set of calibration data to establish a beam-specific response to WEPL fit for the selected 'scout' beam (a 1 cm overshoot of the predicted detector depth with a dose of 4 cGy) in water-equivalent plastic. This necessitates a separate set of measurements for every 'scout' beam that may be appropriate to the clinical case. The current study demonstrates the use of Monte Carlo simulations for calibration of the time-resolved diode dosimetry technique. Measurements for three 'scout' beams were compared against simulated detector response with Monte Carlo methods using the Tool for Particle Simulation (TOPAS). The 'scout' beams were then applied in the simulation environment to simulated water-equivalent plastic, a CT of water-equivalent plastic, and a patient CT data set to assess uncertainty. Simulated detector response in water-equivalent plastic was validated against measurements for 'scout' spread out Bragg peaks of range 10 cm, 15 cm, and 21 cm (168 MeV, 177 MeV, and 210 MeV) to within 3.4 mm for all beams, and to within 1 mm in the region where the detector is expected to lie. Feasibility has been shown for performing the calibration of the detector response for three 'scout' beams through simulation for the time-resolved diode dosimetry technique in passive scattered proton delivery. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.
2001-06-01
We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets. We have also compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off of the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
A general solution strategy of modified power method for higher mode solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung, E-mail: deokjung@unist.ac.kr
2016-01-15
A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous energy Monte Carlo simulation. The new approach adopts four features: 1) the eigen decomposition of the transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher mode weights, and 4) a stabilization technique for statistical fluctuations using multi-cycle accumulations. The numerical tests of neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate the fission source convergence with stable convergence behavior while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as 1) the replacement of the cumbersome solution step of high-order polynomial equations required by Booth's original method with a simple matrix eigen decomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behavior in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous energy Monte Carlo simulation and higher eigenmodes up to 4th order are reported for the first time in this paper. Highlights: •The modified power method is applied to continuous energy Monte Carlo simulation. •A transfer matrix is introduced to generalize the modified power method. •All-mode-based population control is applied to obtain the higher eigenmodes. •Statistical fluctuations can be greatly reduced using accumulated tally results. •Fission source convergence is accelerated with higher mode solutions.
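A deterministic analogue helps fix the idea: blocked (subspace) power iteration keeps several iterates orthogonal so they do not all collapse onto the fundamental mode (the role weight cancellation plays in the Monte Carlo version), and the eigendecomposition of a small projected "transfer matrix" delivers the higher eigenvalues directly, in place of Booth's polynomial solve. This sketch is standard subspace iteration on a test operator, not the authors' Monte Carlo implementation:

```python
import numpy as np

def power_higher_modes(A, k, iters=500, seed=0):
    """Blocked power iteration for the k dominant eigenvalues of A."""
    rng = np.random.default_rng(seed)
    Q = np.linalg.qr(rng.standard_normal((A.shape[0], k)))[0]
    for _ in range(iters):
        # Re-orthogonalise each sweep so the k iterates stay independent
        Q = np.linalg.qr(A @ Q)[0]
    # Small projected transfer matrix; its eigendecomposition gives the
    # higher eigenvalues without solving high-order polynomial equations.
    T = Q.T @ A @ Q
    return np.sort(np.linalg.eigvals(T).real)[::-1]

# Symmetric test operator with a known spectrum
rng = np.random.default_rng(1)
U = np.linalg.qr(rng.standard_normal((6, 6)))[0]
A = U @ np.diag([3.0, 2.5, 1.0, 0.5, 0.2, 0.1]) @ U.T
modes = power_higher_modes(A, 3)
print(np.round(modes, 3))  # converges to the top three: 3.0, 2.5, 1.0
```

In the Monte Carlo setting the columns of Q correspond to signed fission-source distributions carried by weighted particles, which is why weight cancellation and per-mode population control are needed.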
Exploring cluster Monte Carlo updates with Boltzmann machines
NASA Astrophysics Data System (ADS)
Wang, Lei
2017-11-01
Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
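For reference, the kind of update the learned sampler is compared against is the classic Wolff cluster algorithm for the Ising model, in which bonds between aligned neighbours are added with probability 1 − exp(−2β) and the whole cluster is flipped. A minimal sketch of that conventional algorithm (not the paper's Boltzmann-machine-mediated update):

```python
import math, random

def wolff_step(spins, L, beta, rng):
    """One Wolff cluster flip on an L x L periodic Ising lattice."""
    p_add = 1.0 - math.exp(-2.0 * beta)   # bond probability for aligned pairs
    seed = rng.randrange(L * L)
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        i = stack.pop()
        x, y = i % L, i // L
        for j in ((x + 1) % L + y * L, (x - 1) % L + y * L,
                  x + ((y + 1) % L) * L, x + ((y - 1) % L) * L):
            if j not in cluster and spins[j] == s0 and rng.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:                      # flip the whole cluster at once
        spins[i] = -s0
    return len(cluster)

rng = random.Random(0)
L, beta = 16, 0.5                          # beta above the 2D critical 0.4407
spins = [1] * (L * L)
sizes = [wolff_step(spins, L, beta, rng) for _ in range(200)]
print(round(sum(sizes) / len(sizes), 1))
```

The paper's observation is that the latent variables of a Boltzmann machine can play the role of these bond variables, suggesting cluster updates beyond the hand-designed Wolff rule.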
NASA Astrophysics Data System (ADS)
Rose, Michael Benjamin
A novel trajectory and attitude control and navigation analysis tool for powered ascent is developed. The tool is capable of rapid trade-space analysis and is designed to ultimately reduce turnaround time for launch vehicle design, mission planning, and redesign work. It is streamlined to quickly determine trajectory and attitude control dispersions, propellant dispersions, orbit insertion dispersions, and navigation errors and their sensitivities to sensor errors, actuator execution uncertainties, and random disturbances. The tool is developed by applying both Monte Carlo and linear covariance analysis techniques to a closed-loop, launch vehicle guidance, navigation, and control (GN&C) system. The nonlinear dynamics and flight GN&C software models of a closed-loop, six-degree-of-freedom (6-DOF), Monte Carlo simulation are formulated and developed. The nominal reference trajectory (NRT) for the proposed lunar ascent trajectory is defined and generated. The Monte Carlo truth models and GN&C algorithms are linearized about the NRT, the linear covariance equations are formulated, and the linear covariance simulation is developed. The performance of the launch vehicle GN&C system is evaluated using both Monte Carlo and linear covariance techniques and their trajectory and attitude control dispersion, propellant dispersion, orbit insertion dispersion, and navigation error results are validated and compared. Statistical results from linear covariance analysis are generally within 10% of Monte Carlo results, and in most cases the differences are less than 5%. This is an excellent result given the many complex nonlinearities that are embedded in the ascent GN&C problem. Moreover, the real value of this tool lies in its speed, where the linear covariance simulation is 1036.62 times faster than the Monte Carlo simulation. 
Although the application and results presented are for a lunar, single-stage-to-orbit (SSTO), ascent vehicle, the tools, techniques, and mathematical formulations that are discussed are applicable to ascent on Earth or other planets as well as other rocket-powered systems such as sounding rockets and ballistic missiles.
NASA Astrophysics Data System (ADS)
Kwan, Betty P.; O'Brien, T. Paul
2015-06-01
The Aerospace Corporation performed a study to determine whether static percentiles of AE9/AP9 can be used to approximate dynamic Monte Carlo runs for radiation analysis of spiral transfer orbits. Solar panel degradation is a major concern for solar-electric propulsion because solar-electric propulsion depends on the power output of the solar panel. Different spiral trajectories have different radiation environments that could lead to solar panel degradation. Because the spiral transfer orbits only last weeks to months, an average environment does not adequately address the possible transient enhancements of the radiation environment that must be accounted for in optimizing the transfer orbit trajectory. Therefore, to optimize the trajectory, an ensemble of Monte Carlo simulations of AE9/AP9 would normally be run for every spiral trajectory to determine the 95th percentile radiation environment. To avoid performing lengthy Monte Carlo dynamic simulations for every candidate spiral trajectory in the optimization, we found a static percentile that would be an accurate representation of the full Monte Carlo simulation for a representative set of spiral trajectories. For 3 LEO to GEO and 1 LEO to MEO trajectories, a static 90th percentile AP9 is a good approximation of the 95th percentile fluence with dynamics for 4-10 MeV protons, and a static 80th percentile AE9 is a good approximation of the 95th percentile fluence with dynamics for 0.5-2 MeV electrons. While the specific percentiles chosen cannot necessarily be used in general for other orbit trade studies, the concept of determining a static percentile as a quick approximation to a full Monte Carlo ensemble of simulations can likely be applied to other orbit trade studies. We expect the static percentile to depend on the region of space traversed, the mission duration, and the radiation effect considered.
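The logic of the approximation can be illustrated with a toy ensemble, using lognormal daily-flux variability as a crude stand-in for AE9/AP9 dynamics (an assumption for illustration only, not the real model): a single static percentile of the flux distribution, integrated over the mission, is compared against the 95th percentile of the mission-integrated fluence across the full ensemble.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy ensemble: 1000 Monte Carlo realisations of daily flux along a
# 60-day transfer; lognormal scatter stands in for AE9/AP9 variability.
n_runs, n_days = 1000, 60
daily_flux = rng.lognormal(mean=0.0, sigma=1.0, size=(n_runs, n_days))
fluence = daily_flux.sum(axis=1)          # mission-integrated fluence per run

dynamic_95 = np.percentile(fluence, 95)   # full-ensemble 95th percentile

# Static approximation: integrate one fixed percentile of the daily-flux
# distribution over the mission duration.
static_fluence = {p: np.percentile(daily_flux, p) * n_days
                  for p in (50, 80, 90, 95)}

best = min(static_fluence, key=lambda p: abs(static_fluence[p] - dynamic_95))
print(best)
```

In this toy setup a static percentile well below the 95th best reproduces the dynamic 95th-percentile fluence, because transient highs average out over the mission — mirroring the study's finding that static 80th/90th percentiles approximate the dynamic 95th.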
USEPA SHEDS MODEL: METHODOLOGY FOR EXPOSURE ASSESSMENT FOR WOOD PRESERVATIVES
A physically-based, Monte Carlo probabilistic model (SHEDS-Wood: Stochastic Human Exposure and Dose Simulation model for wood preservatives) has been applied to assess the exposure and dose of children to arsenic (As) and chromium (Cr) from contact with chromated copper arsenat...
NASA Technical Reports Server (NTRS)
Gaebler, John A.; Tolson, Robert H.
2010-01-01
In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
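One common way to carry out a variance-based sensitivity analysis is the pick-freeze (Saltelli-style) estimator of first-order Sobol indices: two independent input matrices are sampled, and swapping one column at a time isolates the variance contributed by that input alone. A sketch with a hypothetical linear surrogate standing in for the trajectory simulation:

```python
import numpy as np

def first_order_sobol(model, n, d, rng):
    """Pick-freeze estimator of first-order Sobol indices S_i.

    B_i replaces column i of B with column i of A, so the covariance of
    f(A) and f(B_i) isolates the variance contributed by input i alone."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var = fA.var()
    s = np.empty(d)
    for i in range(d):
        Bi = B.copy()
        Bi[:, i] = A[:, i]
        s[i] = np.mean(fA * (model(Bi) - fB)) / var
    return s

# Hypothetical surrogate: landing-point error dominated by the first input
model = lambda X: 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.25 * X[:, 2]
rng = np.random.default_rng(0)
s = first_order_sobol(model, 200_000, 3, rng)
print(np.round(s, 2))
```

For this linear surrogate the exact indices are 16/17.06 ≈ 0.94, 1/17.06 ≈ 0.06, and 0.0625/17.06 ≈ 0.004, so the estimator correctly flags the first input as the one whose uncertainty matters most.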
Path Integral Monte Carlo Simulations of Warm Dense Matter and Plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Militzer, Burkhard
2018-01-13
New path integral Monte Carlo (PIMC) simulation techniques will be developed and applied to derive the equation of state (EOS) for the regime of warm dense matter and dense plasmas where existing first-principles methods cannot be applied. While standard density functional theory has been used to accurately predict the structure of many solids and liquids up to temperatures on the order of 10,000 K, this method is not applicable at much higher temperature where electronic excitations become important, because the number of partially occupied electronic orbitals reaches intractably large numbers and, more importantly, the use of zero-temperature exchange-correlation functionals introduces an uncontrolled approximation. Here we focus on PIMC methods that become more and more efficient with increasing temperature and still include all electronic correlation effects. In this approach, electronic excitations increase the efficiency rather than reduce it. While it has commonly been assumed such methods can only be applied to elements without core electrons like hydrogen and helium, we recently showed how to extend PIMC to heavier elements by performing the first PIMC simulations of carbon and water plasmas [Driver, Militzer, Phys. Rev. Lett. 108 (2012) 115502]. Here we propose to continue this important development to extend the reach of PIMC simulations to yet heavier elements and also lower temperatures. The goal is to provide a robust first-principles simulation method that can accurately and efficiently study materials with excited electrons at solid-state densities in order to access parts of the phase diagram, such as the regime of warm dense matter and plasmas, where so far only more approximate, semi-analytical methods could be applied.
Simulation of phase equilibria
NASA Astrophysics Data System (ADS)
Martin, Marcus Gary
The focus of this thesis is on the use of configurational bias Monte Carlo in the Gibbs ensemble. Unlike Metropolis Monte Carlo, which is reviewed in chapter I, configurational bias Monte Carlo uses an underlying Markov chain transition matrix which is asymmetric in such a way that it is more likely to attempt to move to a molecular conformation which has a lower energy than to one with a higher energy. Chapter II explains how this enables efficient simulation of molecules with complex architectures (long chains and branched molecules) for coexisting fluid phases (liquid, vapor, or supercritical), and also presents several of our recent extensions to this method. In chapter III we discuss the development of the Transferable Potentials for Phase Equilibria United Atom (TraPPE-UA) force field which accurately describes the fluid phase coexistence for linear and branched alkanes. Finally, in the fourth chapter the methods and the force field are applied to systems ranging from supercritical extraction to gas chromatography to illustrate the power and versatility of our approach.
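The asymmetric-proposal idea can be illustrated with a generic Metropolis–Hastings sketch on a toy three-state system. The full Rosenbluth-weight machinery of configurational-bias Monte Carlo is omitted, but the essential point is the same: a proposal biased toward low-energy conformations must be corrected by the ratio q(new→old)/q(old→new) so that detailed balance, and hence the Boltzmann distribution, is preserved.

```python
import math, random

def metropolis_hastings(energies, beta, propose, q, steps, rng):
    """Metropolis-Hastings on a discrete state space with an
    asymmetric proposal corrected by the Hastings ratio."""
    state = rng.randrange(len(energies))
    counts = [0] * len(energies)
    for _ in range(steps):
        cand = propose(state, rng)
        de = energies[cand] - energies[state]
        ratio = math.exp(-beta * de) * q(cand, state) / q(state, cand)
        if rng.random() < min(1.0, ratio):
            state = cand
        counts[state] += 1
    return counts

E = [0.0, 1.0, 2.0]                 # toy conformation energies
beta = 1.0
w = [3.0, 1.0, 1.0]                 # proposal weights biased toward state 0

def propose(_state, rng):
    return rng.choices(range(3), weights=w)[0]

def q(_frm, to):                    # independence proposal probability
    return w[to] / sum(w)

rng = random.Random(7)
counts = metropolis_hastings(E, beta, propose, q, 200_000, rng)
total = sum(counts)
print([round(c / total, 2) for c in counts])
```

The sampled frequencies converge to the Boltzmann weights exp(−βE)/Z ≈ (0.665, 0.245, 0.090) despite the biased proposal, which is exactly why biased trial moves are admissible in the Gibbs ensemble.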
Fukata, Kyohei; Sugimoto, Satoru; Kurokawa, Chie; Saito, Akito; Inoue, Tatsuya; Sasai, Keisuke
2018-06-01
The difficulty of measuring the output factor (OPF) in a small field has been frequently discussed in recent publications. This study aims to determine the OPF in a small field using a 10-MV photon beam and a stereotactic conical collimator (cone). The OPF was measured by two diode detectors (SFD, EDGE detector) and one micro-ion chamber (PinPoint 3D chamber) in a water phantom. A Monte Carlo simulation using a simplified detector model was performed to obtain the correction factor for the detector measurements. An OPF difference of about 12% was observed in the measurement at the smallest field (7.5 mm diameter) between the EDGE detector and the PinPoint 3D chamber. By applying the Monte Carlo-based correction factor to the measurement, the maximum discrepancy among the three detectors was reduced to within 3%. The results indicate that determination of the OPF in a small field should be performed carefully; in particular, detector choice and application of an appropriate correction factor are very important in this regard.
Monte Carlo technique for very large ising models
NASA Astrophysics Data System (ADS)
Kalle, C.; Winkelmann, V.
1982-08-01
Rebbi's multispin coding technique is improved and applied to the kinetic Ising model with size 600 × 600 × 600. We give the central part of our computer program (for a CDC Cyber 76), which will be helpful also in a simulation of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 Tc is found to decay asymptotically as exp(−t/2.90) if t is measured in Monte Carlo steps per spin, and M(t = 0) = 1 initially.
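The core trick behind multispin coding is storing one spin per bit so that word-wide bitwise operations process many sites at once. A simplified illustration of that idea in Python (not Rebbi's full kinetic update, and nothing like the original CDC Cyber 76 code): XOR of a 64-spin ring with its rotated self marks every antiparallel bond, and a population count yields the bond energy of all 64 sites from a couple of word operations.

```python
def rotl(x, k, n=64):
    """Rotate an n-bit word left by k bits (periodic boundary)."""
    mask = (1 << n) - 1
    return ((x << k) | (x >> (n - k))) & mask

def ring_energy(word, n=64):
    """Ising bond energy E = -sum(s_i s_{i+1}) on a periodic ring.

    Each set bit of word ^ rotl(word, 1) is one antiparallel bond
    (contributing +1); the rest are parallel (contributing -1)."""
    anti = bin(word ^ rotl(word, 1, n)).count("1")
    return anti - (n - anti)

all_up = 0                        # 64 aligned spins, stored in one integer
print(ring_energy(all_up))        # -64: every bond parallel
alternating = int("01" * 32, 2)   # Neel-like alternating ring
print(ring_energy(alternating))   # +64: every bond antiparallel
```

Production multispin codes extend this to the Metropolis accept/reject step itself, updating 32 or 64 lattice sites per machine word, which is what made 600³ lattices feasible on 1982 hardware.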
Yüksel, Yusuf; Akıncı, Ümit
2016-12-07
Using Monte Carlo simulations, we have investigated the dynamic phase transition properties of magnetic nanoparticles with a ferromagnetic core coated by an antiferromagnetic shell. The effects of field amplitude and frequency on the thermal dependence of the magnetizations, on the magnetization reversal mechanisms during hysteresis cycles, and on the exchange bias and coercive fields have been examined, and the feasibility of applying dynamic magnetic fields to the particle has been discussed for technological and biomedical purposes.
A Wigner Monte Carlo approach to density functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, J.M., E-mail: jeanmichel.sellier@gmail.com; Dimov, I.
2014-08-01
In order to simulate quantum N-body systems, stationary and time-dependent density functional theories rely on the capacity of calculating the single-electron wave-functions of a system from which one obtains the total electron density (Kohn–Sham systems). In this paper, we introduce the use of the Wigner Monte Carlo method in ab-initio calculations. This approach allows time-dependent simulations of chemical systems in the presence of reflective and absorbing boundary conditions. It also enables an intuitive comprehension of chemical systems in terms of the Wigner formalism based on the concept of phase-space. Finally, being based on a Monte Carlo method, it scales very well on parallel machines, paving the way towards the time-dependent simulation of very complex molecules. A validation is performed by studying the electron distribution of three different systems, a lithium atom, a boron atom and a hydrogenic molecule. For the sake of simplicity, we start from initial conditions not too far from equilibrium and show that the systems reach a stationary regime, as expected (even though no restriction is imposed on the choice of the initial conditions). We also show a good agreement with standard density functional theory for the hydrogenic molecule. These results demonstrate that the combination of the Wigner Monte Carlo method and Kohn–Sham systems provides a reliable computational tool which could, eventually, be applied to more sophisticated problems.
Rojas-Calderón, E L; Ávila, O; Ferro-Flores, G
2018-05-01
S-values (dose per unit of cumulated activity) for alpha particle-emitting radionuclides and monoenergetic alpha sources placed in the nuclei of three cancer cell models (MCF7, MDA-MB231 breast cancer cells and PC3 prostate cancer cells) were obtained by Monte Carlo simulation. The MCNPX code was used to calculate the fraction of energy deposited in the subcellular compartments due to the alpha sources in order to obtain the S-values. A comparison with internationally accepted S-values reported by the MIRD Cellular Committee for alpha sources in three sizes of spherical cells was also performed leading to an agreement within 4% when an alpha extended source uniformly distributed in the nucleus is simulated. This result allowed to apply the Monte Carlo Methodology to evaluate S-values for alpha particles in cancer cells. The calculation of S-values for nucleus, cytoplasm and membrane of cancer cells considering their particular geometry, distribution of the radionuclide source and chemical composition by means of Monte Carlo simulation provides a good approach for dosimetry assessment of alpha emitters inside cancer cells. Results from this work provide information and tools that may help researchers in the selection of appropriate radiopharmaceuticals in alpha-targeted cancer therapy and improve its dosimetry evaluation. Copyright © 2018 Elsevier Ltd. All rights reserved.
Percentage depth dose evaluation in heterogeneous media using thermoluminescent dosimetry
da Rosa, L.A.R.; Campos, L.T.; Alves, V.G.L.; Batista, D.V.S.; Facure, A.
2010-01-01
The purpose of this study is to investigate the influence of lung heterogeneity inside a soft-tissue phantom on percentage depth dose (PDD). PDD curves were obtained experimentally using LiF:Mg,Ti (TLD-100) thermoluminescent detectors and by applying the Eclipse treatment planning system algorithms Batho, modified Batho (M-Batho or BMod), equivalent TAR (E-TAR or EQTAR), and the anisotropic analytical algorithm (AAA) for a 15 MV photon beam and field sizes of 1 × 1, 2 × 2, 5 × 5, and 10 × 10 cm². Monte Carlo simulations were performed using the DOSRZnrc user code of EGSnrc. The experimental results agree with Monte Carlo simulations for all irradiation field sizes. Comparisons with Monte Carlo calculations show that the AAA algorithm provides the best simulations of PDD curves for all field sizes investigated. However, even this algorithm cannot accurately predict PDD values in the lung for field sizes of 1 × 1 and 2 × 2 cm². An overdosage in the lung of about 40% and 20% is calculated by the AAA algorithm close to the soft-tissue/lung interface for 1 × 1 and 2 × 2 cm² field sizes, respectively. It was demonstrated that differences of 100% between Monte Carlo results and the responses of the Batho, modified Batho, and equivalent TAR algorithms may exist inside the lung region for the 1 × 1 cm² field. PACS number: 87.55.kd
MOCCA-SURVEY Database I: Is NGC 6535 a dark star cluster harbouring an IMBH?
NASA Astrophysics Data System (ADS)
Askar, Abbas; Bianchini, Paolo; de Vita, Ruggero; Giersz, Mirek; Hypki, Arkadiusz; Kamann, Sebastian
2017-01-01
We describe the dynamical evolution of a unique type of dark star cluster model in which the majority of the cluster mass at Hubble time is dominated by an intermediate-mass black hole (IMBH). We analysed results from about 2000 star cluster models (Survey Database I) simulated using the Monte Carlo code MOnte Carlo Cluster simulAtor and identified these dark star cluster models. Taking one of these models, we apply the method of simulating realistic `mock observations' by utilizing the Cluster simulatiOn Comparison with ObservAtions (COCOA) and Simulating Stellar Cluster Observation (SISCO) codes to obtain the photometric and kinematic observational properties of the dark star cluster model at 12 Gyr. We find that the perplexing Galactic globular cluster NGC 6535 closely matches the observational photometric and kinematic properties of the dark star cluster model presented in this paper. Based on our analysis and currently observed properties of NGC 6535, we suggest that this globular cluster could potentially harbour an IMBH. If it exists, the presence of this IMBH can be detected robustly with proposed kinematic observations of NGC 6535.
A two-dimensional model of water: Solvation of nonpolar solutes
NASA Astrophysics Data System (ADS)
Urbič, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Southall, N. T.; Dill, K. A.
2002-01-01
We recently applied a Wertheim integral equation theory (IET) and a thermodynamic perturbation theory (TPT) to the Mercedes-Benz (MB) model of pure water. These analytical theories offer the advantage of being computationally less intensive than Monte Carlo simulations by orders of magnitude. The long-term goal of this work is to develop analytical theories of water that can handle orientation-dependent interactions, and the MB model serves as a simple workbench for this development. Here we apply the IET and TPT to the hydrophobic effect, the transfer of a nonpolar solute into MB water. As before, we find that the theories reproduce the Monte Carlo results quite accurately at higher temperatures, while they predict the qualitative trends in cold water.
Monte-Carlo simulation of a stochastic differential equation
NASA Astrophysics Data System (ADS)
Arif, ULLAH; Majid, KHAN; M, KAMRAN; R, KHAN; Zhengmao, SHENG
2017-12-01
For solving higher dimensional diffusion equations with an inhomogeneous diffusion coefficient, Monte Carlo (MC) techniques are considered to be more effective than other algorithms, such as the finite element method or the finite difference method. The inhomogeneity of the diffusion coefficient strongly limits the use of different numerical techniques. For better convergence, higher-order methods have been put forward to allow MC codes to take large step sizes. The main focus of this work is to look for operators that can produce converging results for large step sizes. As a first step, our comparative analysis is applied to a general stochastic problem. Subsequently, our formulation is applied to the problem of pitch-angle scattering resulting from Coulomb collisions of charged particles in toroidal devices.
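The step-size trade-off at issue can be illustrated by comparing the first-order-weak Euler–Maruyama scheme against the higher-order Milstein scheme on a test SDE with a known exact solution. This sketch assumes geometric Brownian motion as the test problem, not the pitch-angle scattering operator itself:

```python
import numpy as np

def strong_error(scheme, a, b, x0, t_end, n_steps, n_paths, rng):
    """Strong (pathwise) error of a scheme for dX = a X dt + b X dW,
    measured against the exact GBM solution driven by the same noise."""
    dt = t_end / n_steps
    x = np.full(n_paths, x0)
    w = np.zeros(n_paths)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = scheme(x, a, b, dt, dw)
        w += dw
    exact = x0 * np.exp((a - 0.5 * b * b) * t_end + b * w)
    return np.mean(np.abs(x - exact))

euler = lambda x, a, b, dt, dw: x + a * x * dt + b * x * dw
# Milstein adds the 0.5*b*b*x*(dW^2 - dt) term, raising the strong order
milstein = lambda x, a, b, dt, dw: (x + a * x * dt + b * x * dw
                                    + 0.5 * b * b * x * (dw * dw - dt))

rng = np.random.default_rng(3)
for n in (50, 200):
    e = strong_error(euler, 0.5, 0.8, 1.0, 1.0, n, 20_000, rng)
    m = strong_error(milstein, 0.5, 0.8, 1.0, 1.0, n, 20_000, rng)
    print(n, round(e, 4), round(m, 4))
```

At every step count the higher-order scheme's error is markedly smaller, which is precisely what permits an MC diffusion code to take large steps without losing convergence.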
NASA Astrophysics Data System (ADS)
Yu, Tianhao; Li, Qian; Li, Lin; Zhou, Chuanqing
2016-10-01
The accuracy of the photoacoustic signal is crucial for the measurement of oxygen saturation in functional photoacoustic imaging, and it is influenced by factors such as defocus of the laser beam, the curved shape of large vessels, and the nonlinear saturation of optical absorption in biological tissues. We apply a Monte Carlo model to simulate energy deposition in tissues and obtain the photoacoustic signals reaching a simulated focused surface detector, in order to investigate the corresponding influence of these factors. We also apply compensation to photoacoustic imaging of in vivo cat cerebral cortex blood vessels, in which signals from different lateral positions of vessels are corrected based on the simulation results. This processing of photoacoustic images can improve the smoothness and accuracy of oxygen saturation results.
NASA Astrophysics Data System (ADS)
Zhang, Kuiyuan; Umehara, Shigehiro; Yamaguchi, Junki; Furuta, Jun; Kobayashi, Kazutoshi
2016-08-01
This paper analyzes how body bias and BOX region thickness affect soft error rates in 65-nm SOTB (Silicon on Thin BOX) and 28-nm UTBB (Ultra Thin Body and BOX) FD-SOI processes. Soft errors are induced by alpha-particle and neutron irradiation, and the results are then analyzed by Monte Carlo based simulation using PHITS-TCAD. The alpha-particle-induced single event upset (SEU) cross-section and neutron-induced soft error rate (SER) obtained by simulation are consistent with measurement results. We clarify that SERs decrease in response to an increase in BOX thickness for SOTB, while SERs in UTBB are independent of BOX thickness. We also find that SOTB develops a higher tolerance to soft errors when reverse body bias is applied, while UTBB becomes more susceptible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogawara, R.; Ishikawa, M., E-mail: masayori@med.hokudai.ac.jp
The anode pulse of a photomultiplier tube (PMT) coupled with a scintillator is used for pulse shape discrimination (PSD) analysis. We have developed a novel emulation technique for the PMT anode pulse based on optical photon transport and a PMT response function. The photon transport was calculated using the Geant4 Monte Carlo code, and the response function was obtained with a BC408 organic scintillator. The obtained percentage RMS values of the difference between the measured and simulated pulses, using suitable scintillation properties, were 2.41%, 2.58%, and 2.16% for GSO:Ce (0.4, 1.0, and 1.5 mol%), 2.01% for LaBr3:Ce, and 3.32% for BGO scintillators, respectively. The proposed technique demonstrates high reproducibility of the measured pulse and can be applied to simulation studies of various radiation measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richards, H.L.; Sides, S.W.; Novotny, M.A.
1996-12-31
Recently, experimental techniques such as magnetic force microscopy (MFM) have enabled the magnetic state of individual sub-micron particles to be resolved. Motivated by these experimental developments, the authors use Monte Carlo simulations of two-dimensional kinetic Ising ferromagnets to study the magnetic relaxation in a negative applied field of a grain with an initial magnetization m0 = +1. They use classical droplet theory to predict the functional forms for some quantities which can be observed by MFM. An example is the probability that the magnetization is positive, which is a function of time, field, grain size, and grain dimensionality. The qualitative agreement between experiments and their simulations of switching in individual single-domain ferromagnets indicates that the switching mechanism in such particles may involve local nucleation and subsequent growth of droplets of the stable phase.
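The switching scenario described above can be sketched with single-spin-flip Metropolis dynamics on a small 2D Ising grain in a negative field; the lattice size, temperature, and field strength below are illustrative assumptions, not the parameters used by the authors. Starting from m0 = +1, droplets of the stable negative phase nucleate and grow until the grain switches.

```python
import math
import random

def metropolis_sweep(spins, L, J, h, T, rng):
    # One Monte Carlo sweep of single-spin-flip Metropolis dynamics
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        s = spins[i][j]
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * s * (J * nb + h)       # energy cost of flipping s
        if dE <= 0.0 or rng.random() < math.exp(-dE / T):
            spins[i][j] = -s

def magnetization(spins, L):
    return sum(sum(row) for row in spins) / (L * L)

rng = random.Random(0)
L, J, h, T = 16, 1.0, -0.6, 1.8           # negative field opposes m0 = +1
spins = [[1] * L for _ in range(L)]       # initial magnetization m0 = +1
for _ in range(500):
    metropolis_sweep(spins, L, J, h, T, rng)
m_final = magnetization(spins, L)
# After nucleation and droplet growth the grain has switched: m_final < 0.
```

Recording the sweep at which m first crosses zero over many runs gives the switching-probability curves that droplet theory predicts.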
NASA Astrophysics Data System (ADS)
Martin-Bragado, I.; Castrillo, P.; Jaraiz, M.; Pinacho, R.; Rubio, J. E.; Barbolla, J.; Moroz, V.
2005-09-01
Atomistic process simulation is expected to play an important role for the development of next generations of integrated circuits. This work describes an approach for modeling electric charge effects in a three-dimensional atomistic kinetic Monte Carlo process simulator. The proposed model has been applied to the diffusion of electrically active boron and arsenic atoms in silicon. Several key aspects of the underlying physical mechanisms are discussed: (i) the use of the local Debye length to smooth out the atomistic point-charge distribution, (ii) algorithms to correctly update the charge state in a physically accurate and computationally efficient way, and (iii) an efficient implementation of the drift of charged particles in an electric field. High-concentration effects such as band-gap narrowing and degenerate statistics are also taken into account. The efficiency, accuracy, and relevance of the model are discussed.
Kim, Sangroh; Yoshizumi, Terry T; Toncheva, Greta; Frush, Donald P; Yin, Fang-Fang
2010-03-01
The purpose of this study was to establish a dose estimation tool based on Monte Carlo (MC) simulations. A 5-y-old paediatric anthropomorphic phantom was computed tomography (CT) scanned to create a voxelised phantom, which was used as an input for the abdominal cone-beam CT in a BEAMnrc/EGSnrc MC system. An X-ray tube model of the Varian On-Board Imager® was built in the MC system. To validate the model, the absorbed doses at each organ location for standard-dose and low-dose modes were measured in the physical phantom with MOSFET detectors; effective doses were also calculated. The MC simulations were comparable to the MOSFET measurements. This voxelised phantom approach can produce a more accurate dose estimation than the stylised phantom method, and the model can easily be applied to multi-detector CT dosimetry.
Game of Life on the Equal Degree Random Lattice
NASA Astrophysics Data System (ADS)
Shao, Zhi-Gang; Chen, Tao
2010-12-01
An effective matrix method is used to build the equal degree random (EDR) lattice, and a cellular automaton game of life on the EDR lattice is then studied by Monte Carlo (MC) simulation. The standard mean field approximation (MFA) is applied, yielding a density of live cells ρ=0.37017, which is consistent with the MC result ρ=0.37±0.003.
Temperature scaling method for Markov chains.
Crosby, Lonnie D; Windus, Theresa L
2009-01-22
The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
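The core idea of scaling a chain from one temperature to another can be sketched for a 1D harmonic potential, where the exact average energy follows from equipartition. This is plain Boltzmann reweighting of configurations sampled at one inverse temperature to another, under assumed parameters; it is only a minimal stand-in for the full TeS protocol of the paper.

```python
import math
import random

def metropolis_chain(n, beta, step, rng):
    # Sample x with weight exp(-beta * E(x)) for E(x) = x^2 / 2 via Metropolis
    xs, x = [], 0.0
    for _ in range(n):
        xp = x + rng.uniform(-step, step)
        if rng.random() < math.exp(-beta * (xp * xp - x * x) / 2.0):
            x = xp
        xs.append(x)
    return xs

def reweight_energy(xs, beta1, beta2):
    # Scale a chain generated at beta1 to beta2 by Boltzmann reweighting:
    # w_i = exp(-(beta2 - beta1) * E_i), normalized for stability
    energies = [x * x / 2.0 for x in xs]
    logw = [-(beta2 - beta1) * e for e in energies]
    wmax = max(logw)
    ws = [math.exp(lw - wmax) for lw in logw]
    return sum(w * e for w, e in zip(ws, energies)) / sum(ws)

rng = random.Random(7)
beta1, beta2 = 1.0, 1.25             # chain run at T1 = 1.0, scaled to T2 = 0.8
xs = metropolis_chain(100000, beta1, 1.5, rng)
e2 = reweight_energy(xs, beta1, beta2)
exact = 0.5 / beta2                  # equipartition: <E> = kT/2 in 1D
```

No new sampling at beta2 was performed; the scaled chain alone recovers the target-temperature average, which is the computational saving the abstract describes.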
Field, Edward H.
2015-01-01
A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt on whether recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, which should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
Monte Carlo Simulations for VLBI2010
NASA Astrophysics Data System (ADS)
Wresnik, J.; Böhm, J.; Schuh, H.
2007-07-01
Monte Carlo simulations are carried out at the Institute of Geodesy and Geophysics (IGG), Vienna, and at Goddard Space Flight Center (GSFC), Greenbelt (USA), with the goal of designing a new geodetic Very Long Baseline Interferometry (VLBI) system. The influences of the schedule, the network geometry, and the main stochastic processes on the geodetic results are investigated. To this end, schedules are prepared with the software package SKED (Vandenberg 1999), and different strategies are applied to produce temporally very dense schedules, which are compared in terms of baseline length repeatabilities. For the simulation of VLBI observations, a Monte Carlo simulator was set up which creates artificial observations by randomly simulating wet zenith delay and clock values as well as additive white noise representing the antenna errors. For the simulation at IGG, the VLBI analysis software OCCAM (Titov et al. 2004) was adapted. Random walk processes with power spectral densities of 0.7 and 0.1 ps²/s are used for the simulation of wet zenith delays. The clocks are simulated with Allan Standard Deviations of 1*10^-14 @ 50 min and 2*10^-15 @ 15 min, and three levels of white noise, 4 ps, 8 ps, and 16 ps, are added to the artificial observations. The variations of the power spectral densities of the clocks and wet zenith delays, and the application of different white noise levels, show clearly that the wet delay is the critical factor for the improvement of the geodetic VLBI system. At GSFC the software CalcSolve is used for the VLBI analysis; therefore a comparison between the software packages OCCAM and CalcSolve was done with simulated data. For further simulations the wet zenith delay was modeled by a turbulence model. These data were provided by T. Nilsson and incorporated into the simulation work. Different schedules were then run.
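A minimal sketch of the wet-delay part of such a simulator: a random walk whose Gaussian increments have variance equal to the power spectral density times the time step, so the accumulated variance after elapsed time t is PSD·t. The 10 s sampling interval and realization count below are illustrative assumptions; only the 0.7 ps²/s PSD comes from the abstract.

```python
import random

def random_walk_delay(psd, dt, n, rng):
    # Random walk with power spectral density `psd`:
    # Gaussian increments with variance psd * dt per step
    w, path = 0.0, []
    sigma = (psd * dt) ** 0.5
    for _ in range(n):
        w += rng.gauss(0.0, sigma)
        path.append(w)
    return path

rng = random.Random(1)
psd, dt = 0.7, 10.0        # ps^2/s, with an assumed 10 s sampling interval
n_obs, n_real = 360, 2000  # one hour of epochs, 2000 realizations
finals = [random_walk_delay(psd, dt, n_obs, rng)[-1] for _ in range(n_real)]
var_est = sum(f * f for f in finals) / n_real
var_theory = psd * dt * n_obs   # accumulated variance after t = 3600 s
```

The same construction, with Allan-deviation-matched increments, generates the simulated clock values; the white noise terms are then added per observation.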
Hybrid computer optimization of systems with random parameters
NASA Technical Reports Server (NTRS)
White, R. C., Jr.
1972-01-01
A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.
Nonequilibrium hypersonic flows simulations with asymptotic-preserving Monte Carlo methods
NASA Astrophysics Data System (ADS)
Ren, Wei; Liu, Hong; Jin, Shi
2014-12-01
In rarefied gas dynamics, the DSMC method is one of the most popular numerical tools. It performs satisfactorily in simulating hypersonic flows surrounding re-entry vehicles and micro-/nano-flows. However, the computational cost is high, especially as Kn → 0. Even for flows in the near-continuum regime, pure DSMC simulations require considerable computational effort for most cases. Although several DSMC/NS hybrid methods have been proposed to deal with this, those methods still suffer from the boundary treatment, which may cause nonphysical solutions. Filbet and Jin [1] proposed a framework of new numerical methods for the Boltzmann equation, called asymptotic-preserving (AP) schemes, whose computational costs remain affordable as Kn → 0. Recently, Ren et al. [2] realized AP schemes with Monte Carlo methods (AP-DSMC), which perform better than counterpart methods. In this paper, AP-DSMC is applied to simulating nonequilibrium hypersonic flows. Several numerical results are computed and analyzed to study the efficiency and the capability of capturing complicated flow characteristics.
NASA Astrophysics Data System (ADS)
Pan, Boan; Fang, Xiang; Liu, Weichao; Li, Nanxi; Zhao, Ke; Li, Ting
2018-02-01
Near-infrared spectroscopy (NIRS) and diffuse correlation spectroscopy (DCS) have been used to measure brain activation, which is clinically important. Monte Carlo simulation has been applied to model near-infrared light propagation in biological tissue and can predict light diffusion and brain activation. However, previous studies have rarely considered hair and hair follicles as a contributing factor. Here, we use MCVM (Monte Carlo simulation based on 3D voxelized media) to examine light transmission, absorption, fluence, spatial sensitivity distribution (SSD), and brain activation judgement in the presence or absence of hair follicles. The dataset in this study is a series of high-resolution cryosectional color photographs of a standing Chinese male adult. We found that the number of photons transmitted under the scalp decreases dramatically, and the number of photons reaching the detector also decreases, as the density of hair follicles increases; without hair follicles, both quantities reach their maxima. Meanwhile, the light distribution and brain activation change steadily with hair follicle density. These findings indicate that hair follicles influence the light distribution and brain activation judgement in NIRS.
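The photon transport underlying such simulations can be sketched, far below the fidelity of a voxelized head model, for a homogeneous slab with exponential free paths, analog absorption, and isotropic scattering; the optical coefficients are illustrative assumptions. In the absorption-only limit the transmitted fraction reduces to Beer-Lambert attenuation, which makes the sketch checkable.

```python
import math
import random

def transmit_fraction(mu_a, mu_s, depth, n_photons, rng):
    # Analog Monte Carlo in a slab: exponential free paths, absorption
    # with probability mu_a/mu_t at each collision, isotropic scattering
    mu_t = mu_a + mu_s
    escaped = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0                    # launched normally at the surface
        while True:
            step = -math.log(1.0 - rng.random()) / mu_t
            z += uz * step
            if z >= depth:
                escaped += 1                # transmitted through the slab
                break
            if z < 0.0:
                break                       # back-scattered out of the tissue
            if rng.random() < mu_a / mu_t:
                break                       # absorbed at this collision
            uz = rng.uniform(-1.0, 1.0)     # isotropic scatter: new cos(theta)
    return escaped / n_photons

rng = random.Random(3)
# Absorption-only slab reproduces Beer-Lambert attenuation exp(-mu_a * d)
t_ballistic = transmit_fraction(1.0, 0.0, 1.0, 50000, rng)
# Adding scatterers (or extra absorbers such as follicles) lowers transmission
t_turbid = transmit_fraction(1.0, 10.0, 1.0, 50000, rng)
```

A voxelized model like MCVM replaces the uniform coefficients with per-voxel optical properties, which is how follicle density enters the simulation.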
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes are derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be reduced dramatically, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette, and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
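The correlated-process idea can be sketched outside the Fokker–Planck setting with two SDE paths driven by the same Brownian increments: the auxiliary process has a known mean, and subtracting its fluctuation from the main estimator cancels most of the shared statistical noise. The drifts and parameters below are illustrative assumptions, not the schemes of the paper.

```python
import math
import random

def correlated_paths(x0, T, dt, rng):
    # Main (nonlinear drift) and auxiliary (linear drift) processes
    # driven by the SAME Brownian increments, so they stay correlated
    x, y = x0, x0
    for _ in range(int(round(T / dt))):
        dw = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x += -math.sin(x) * dt + dw     # quantity of interest (no closed form)
        y += -y * dt + dw               # auxiliary: E[Y_T] = x0 * exp(-T)
    return x, y

def variance(v):
    m = sum(v) / len(v)
    return sum((u - m) ** 2 for u in v) / (len(v) - 1)

rng = random.Random(5)
x0, T, dt, n_paths = 1.0, 1.0, 0.01, 4000
ey_exact = x0 * math.exp(-T)            # known mean of the auxiliary process
plain, controlled = [], []
for _ in range(n_paths):
    x, y = correlated_paths(x0, T, dt, rng)
    plain.append(x)
    controlled.append(x - (y - ey_exact))   # cancel the shared noise
# Both estimators target E[X_T]; the controlled one has far lower variance.
```

The closer the auxiliary dynamics track the main process, the stronger the cancellation, which is why the gains reported in the paper grow at low Mach numbers.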
Direct Simulation Monte Carlo Simulations of Low Pressure Semiconductor Plasma Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gochberg, L. A.; Ozawa, T.; Deng, H.
2008-12-31
The two widely used plasma deposition tools for semiconductor processing are Ionized Metal Physical Vapor Deposition (IMPVD) of metals using either planar or hollow cathode magnetrons (HCM), and inductively-coupled plasma (ICP) deposition of dielectrics in High Density Plasma Chemical Vapor Deposition (HDP-CVD) reactors. In these systems, the injected neutral gas flows are generally in the transonic to supersonic regime. The Hybrid Plasma Equipment Model (HPEM) has been developed and is strategically and beneficially applied to the design of these tools and their processes. For the most part, the model uses continuum-based techniques, and thus, as pressures decrease below 10 mTorr, the continuum approaches in the model become questionable. Modifications have previously been made to the HPEM to significantly improve its accuracy in this pressure regime. In particular, the Ion Monte Carlo Simulation (IMCS) was added, wherein a Monte Carlo simulation is used to obtain ion and neutral velocity distributions in much the same way as in direct simulation Monte Carlo (DSMC). As a further refinement, this work presents the first steps towards the adaptation of full DSMC calculations to replace part of the flow module within the HPEM. Six species (Ar, Cu, Ar*, Cu*, Ar+, and Cu+) are modeled in DSMC. To couple SMILE as a module to the HPEM, source functions for species, momentum, and energy from plasma sources will be provided by the HPEM. The DSMC module will then compute a quasi-converged flow field that provides neutral and ion species densities, momenta, and temperatures. In this work, the HPEM results for a hollow cathode magnetron (HCM) IMPVD process using the Boltzmann distribution are compared with DSMC results using portions of those HPEM computations as an initial condition.
Neon ion beam induced pattern formation on amorphous carbon surfaces
NASA Astrophysics Data System (ADS)
Bobes, Omar; Hofsäss, Hans; Zhang, Kun
2018-02-01
We investigate ripple pattern formation on amorphous carbon surfaces at room temperature during low-energy Ne ion irradiation as a function of the ion incidence angle. Monte Carlo simulations of the curvature coefficients, applied to the Bradley-Harper and Carter-Vishnyakov models including the recent extensions by Harrison-Bradley and Hofsäss, predict that pattern formation on amorphous carbon thin films should be possible for low-energy Ne ions from 250 eV up to 1500 eV. Moreover, the simulations are able to explain the absence of pattern formation in certain cases. Our experimental results are compared with the predictions of current linear theoretical models, applying the crater function formalism as well as Monte Carlo simulations to calculate curvature coefficients with the SDTrimSP program. The calculations indicate that no patterns should be generated up to a 45° incidence angle if the dynamic behavior of the thickness of the ion-irradiated layer introduced by Hofsäss is taken into account, while pattern formation is most pronounced from 50° onward for ion energies between 250 eV and 1500 eV, in good agreement with our experimental data.
Monte Carlo-based Reconstruction in Water Cherenkov Detectors using Chroma
NASA Astrophysics Data System (ADS)
Seibert, Stanley; Latorre, Anthony
2012-03-01
We demonstrate the feasibility of event reconstruction---including position, direction, energy and particle identification---in water Cherenkov detectors with a purely Monte Carlo-based method. Using a fast optical Monte Carlo package we have written, called Chroma, in combination with several variance reduction techniques, we can estimate the value of a likelihood function for an arbitrary event hypothesis. The likelihood can then be maximized over the parameter space of interest using a form of gradient descent designed for stochastic functions. Although slower than more traditional reconstruction algorithms, this completely Monte Carlo-based technique is universal and can be applied to a detector of any size or shape, which is a major advantage during the design phase of an experiment. As a specific example, we focus on reconstruction results from a simulation of the 200 kiloton water Cherenkov far detector option for LBNE.
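A gradient descent for stochastic objectives, of the general kind mentioned above, can be sketched with a noisy quadratic loss standing in for a Monte Carlo likelihood evaluation; the finite-difference averaging and decaying step size are generic choices for stochastic optimization, not Chroma's actual reconstruction algorithm, and the loss function and its minimum are illustrative assumptions.

```python
import random

def noisy_loss(x, rng):
    # Stand-in for a Monte Carlo likelihood evaluation: true minimum
    # at x = 3, but each call carries statistical noise
    return (x - 3.0) ** 2 + rng.gauss(0.0, 0.1)

def stochastic_descent(x0, iters, rng):
    # Finite-difference gradient estimates, averaged over repeats,
    # with a decaying step size to tame the evaluation noise
    x = x0
    for k in range(1, iters + 1):
        h, lr = 0.5, 1.0 / (10 + k)
        grads = [(noisy_loss(x + h, rng) - noisy_loss(x - h, rng)) / (2 * h)
                 for _ in range(8)]
        x -= lr * sum(grads) / len(grads)
    return x

rng = random.Random(9)
x_hat = stochastic_descent(0.0, 300, rng)   # converges near the true minimum
```

In the reconstruction setting, x would be the multi-dimensional event hypothesis (position, direction, energy) and each loss call a variance-reduced Monte Carlo likelihood estimate.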
On the simulation of indistinguishable fermions in the many-body Wigner formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, J.M., E-mail: jeanmichel.sellier@gmail.com; Dimov, I.
2015-01-01
The simulation of quantum systems consisting of interacting, indistinguishable fermions is a formidable mathematical problem that poses serious numerical challenges. Many sophisticated methods addressing this problem are available, based on the many-body Schrödinger formalism. Recently, a Monte Carlo technique for the resolution of the many-body Wigner equation was introduced and successfully applied to the simulation of distinguishable, spinless particles. This numerical approach presents several advantages over other methods: it is based on an intuitive formalism in which quantum systems are described in terms of a quasi-distribution function, and it is highly scalable due to its Monte Carlo nature. In this work, we extend the many-body Wigner Monte Carlo method to the simulation of indistinguishable fermions. To this end, we first show how fermions are incorporated into the Wigner formalism. Then we demonstrate that the Pauli exclusion principle is intrinsic to the formalism. As a matter of fact, a numerical simulation of two strongly interacting fermions (electrons) is performed which clearly shows the appearance of a Fermi (or exchange-correlation) hole in the phase space, a clear signature of the presence of the Pauli principle. To conclude, we simulate 4, 8, and 16 non-interacting fermions, isolated in a closed box, and show that, as the number of fermions increases, we gradually recover the Fermi-Dirac statistics, a clear proof of the reliability of our proposed method for the treatment of indistinguishable particles.
Analysis of Naval Ammunition Stock Positioning
2015-12-01
A Monte-Carlo simulation model was developed to simulate the expected cost and delivery performance of ammunition stockpiles positioned at coastal Navy facilities; the simulation determines the assigned probabilities for site-to-site locations. Subject terms: supply chain management, Monte-Carlo simulation, risk, delivery performance, stock positioning.
Yang, Y; Pan, L; Lightstone, F C; Merz, K M
2016-01-01
Potential of mean force simulations, widely applied in Monte Carlo or molecular dynamics simulations, are useful tools to examine the free energy variation as a function of one or more specific reaction coordinates for a given system. Implementation of the potential of mean force in simulations of biological processes, such as enzyme catalysis, can help overcome the difficulties of sampling specific regions on the energy landscape and provide useful insights into the catalytic mechanism. Potential of mean force simulations usually require many, possibly parallelizable, short simulations instead of a few extremely long simulations and are therefore fairly manageable for most research facilities. In this chapter, we provide detailed protocols for applying potential of mean force simulations to investigate enzymatic mechanisms for several different enzyme systems.
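As a toy illustration of a potential of mean force along one coordinate (not one of the enzyme protocols in this chapter), unbiased Metropolis sampling of a 1D double-well potential gives F(x) = -kT ln P(x), whose barrier height can be checked against the known potential; the potential, temperature, and bin width are all illustrative assumptions. Real enzyme applications need biased (e.g. umbrella) sampling precisely because such barriers are rarely crossed spontaneously.

```python
import math
import random

def sample_double_well(beta, n, rng):
    # Metropolis sampling of U(x) = (x^2 - 1)^2; a histogram of the
    # reaction coordinate x gives F(x) = -kT ln P(x) up to a constant
    def U(x):
        return (x * x - 1.0) ** 2
    x, hist = 1.0, {}
    for _ in range(n):
        xp = x + rng.uniform(-0.5, 0.5)
        if rng.random() < math.exp(-beta * (U(xp) - U(x))):
            x = xp
        b = round(x, 1)                  # 0.1-wide bins along the coordinate
        hist[b] = hist.get(b, 0) + 1
    return {b: -math.log(c / n) / beta for b, c in hist.items()}

rng = random.Random(2)
beta = 2.0
F = sample_double_well(beta, 200000, rng)
F0 = min(F.values())                     # free energy at the well bottoms
barrier = F.get(0.0, float("inf")) - F0  # PMF barrier at x = 0; U gap is 1
```

With equal bin widths the recovered barrier tracks the potential-energy gap of 1 to within sampling noise.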
Bootstrapping Confidence Intervals for Robust Measures of Association.
ERIC Educational Resources Information Center
King, Jason E.
A Monte Carlo simulation study was conducted to determine the bootstrap correction formula yielding the most accurate confidence intervals for robust measures of association. Confidence intervals were generated via the percentile, adjusted, BC, and BC(a) bootstrap procedures and applied to the Winsorized, percentage bend, and Pearson correlation…
GEANT4 and Secondary Particle Production
NASA Technical Reports Server (NTRS)
Patterson, Jeff
2004-01-01
GEANT4 is a Monte Carlo toolkit developed by the high-energy physics community (CERN, SLAC, etc.) to perform simulations of complex particle detectors. GEANT4 is an ideal tool to study radiation transport and should be applied to space environments and the complex geometries of modern-day spacecraft.
NASA Astrophysics Data System (ADS)
Loh, Y. L.; Yao, D. X.; Carlson, E. W.
2008-04-01
A new class of two-dimensional magnetic materials, Cu9X2(cpa)6·xH2O (cpa = 2-carboxypentonic acid; X = F, Cl, Br), was recently fabricated in which the Cu sites form a triangular kagome lattice (TKL). As the simplest model of geometric frustration in such a system, we study the thermodynamics of Ising spins on the TKL using an exact analytic method as well as Monte Carlo simulations. We present the free energy, internal energy, specific heat, entropy, sublattice magnetizations, and susceptibility. We describe the rich phase diagram of the model as a function of coupling constants, temperature, and applied magnetic field. For frustrated interactions in the absence of an applied field, the ground state is a spin liquid phase with residual entropy per spin s0/kB = (1/9) ln 72 ≈ 0.4752. In a weak applied field, the system maps to the dimer model on a honeycomb lattice, with residual entropy 0.0359 per spin and quasi-long-range order with power-law spin-spin correlations that should be detectable by neutron scattering. The power-law correlations become exponential at finite temperatures, but the correlation length may still be long.
Modeling laser speckle imaging of perfusion in the skin (Conference Presentation)
NASA Astrophysics Data System (ADS)
Regan, Caitlin; Hayakawa, Carole K.; Choi, Bernard
2016-02-01
Laser speckle imaging (LSI) enables visualization of relative blood flow and perfusion in the skin. It is frequently applied to monitor treatment of vascular malformations such as port wine stain birthmarks, and measure changes in perfusion due to peripheral vascular disease. We developed a computational Monte Carlo simulation of laser speckle contrast imaging to quantify how tissue optical properties, blood vessel depths and speeds, and tissue perfusion affect speckle contrast values originating from coherent excitation. The simulated tissue geometry consisted of multiple layers to simulate the skin, or incorporated an inclusion such as a vessel or tumor at different depths. Our simulation used a 30x30mm uniform flat light source to optically excite the region of interest in our sample to better mimic wide-field imaging. We used our model to simulate how dynamically scattered photons from a buried blood vessel affect speckle contrast at different lateral distances (0-1mm) away from the vessel, and how these speckle contrast changes vary with depth (0-1mm) and flow speed (0-10mm/s). We applied the model to simulate perfusion in the skin, and observed how different optical properties, such as epidermal melanin concentration (1%-50%) affected speckle contrast. We simulated perfusion during a systolic forearm occlusion and found that contrast decreased by 35% (exposure time = 10ms). Monte Carlo simulations of laser speckle contrast give us a tool to quantify what regions of the skin are probed with laser speckle imaging, and measure how the tissue optical properties and blood flow affect the resulting images.
NASA Astrophysics Data System (ADS)
Chang, Mingyu; Sang, Chaofeng; Sun, Zhenyue; Hu, Wanpeng; Wang, Dezhen
2018-05-01
A Particle-In-Cell (PIC) with Monte Carlo Collision (MCC) model is applied to study the effects of particle recycling on the divertor plasma. The simulation domain is the scrape-off layer of the tokamak, treated in one dimension along the magnetic field line. At the divertor plate, reflected deuterium atoms (D) and thermally released deuterium molecules (D2) are considered. The collisions between the plasma particles (e and D+) and recycled neutral particles (D and D2) are described by the MCC method. It is found that the recycled neutral particles have a great impact on the divertor plasma. The effects of different collisions on the plasma are simulated and discussed. Moreover, the impact of target materials on the plasma is examined by comparing divertors with Carbon (C) and Tungsten (W) targets. The simulation results show that the energy and momentum losses for the C target are larger than those for the W target in the divertor region even without considering impurity particles, whereas the W target has a more remarkable influence on the core plasma.
Quantitative basis for component factors of gas flow proportional counting efficiencies
NASA Astrophysics Data System (ADS)
Nichols, Michael C.
This dissertation investigates the counting efficiency calibration of a gas flow proportional counter with beta-particle emitters in order to (1) determine by measurements and simulation the values of the component factors of beta-particle counting efficiency for a proportional counter, (2) compare the simulation results and measured counting efficiencies, and (3) determine the uncertainty of the simulation and measurements. Monte Carlo simulation results from the MCNP5 code were compared with measured counting efficiencies as a function of sample thickness for 14C, 89Sr, 90Sr, and 90Y. The Monte Carlo model simulated strontium carbonate with areal thicknesses from 0.1 to 35 mg cm-2. The samples were precipitated as strontium carbonate with areal thicknesses from 3 to 33 mg cm-2, mounted on membrane filters, and counted on a low-background gas flow proportional counter. The estimated fractional standard deviation was 2-4% (6% for 14C) for efficiency measurements of the radionuclides. The Monte Carlo simulations have uncertainties estimated at 5-6% for carbon-14 and 2.4% for strontium-89, strontium-90, and yttrium-90. The curves of simulated counting efficiency vs. sample areal thickness agreed within 3% with the curves of best fit drawn through the 25-49 measured points for each of the four radionuclides. Contributions from this research include the development of uncertainty budgets for the analytical processes; evaluation of alternative methods for determining the chemical yield critical to the measurement process; correction of a bias found in the MCNP normalization of beta-spectrum histograms; clarification of the interpretation of the commonly used ICRU beta-particle spectra for use with MCNP; and evaluation of instrument parameters as applied to the simulation model to obtain estimates of the counting efficiency from simulated pulse-height tallies.
NASA Astrophysics Data System (ADS)
Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria
2017-08-01
Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS can provide measurements of two hemodynamic variables, such as oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies, and, by taking advantage of a physical model of the coupling between electrical and hemodynamic response, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.
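The sequential Monte Carlo machinery itself can be sketched with a bootstrap particle filter on a toy linear-Gaussian state-space model (predict, weight by the observation likelihood, resample); the scalar model and noise levels are illustrative assumptions, not the EEG/fNIRS forward models or the neurovascular coupling model used in the study.

```python
import math
import random

def particle_filter(obs, n_part, q, r, rng):
    # Bootstrap PF for x_t = 0.9 x_{t-1} + N(0,q), y_t = x_t + N(0,r)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_part)]
    estimates = []
    for y in obs:
        # predict: propagate particles through the state dynamics
        parts = [0.9 * p + rng.gauss(0.0, math.sqrt(q)) for p in parts]
        # weight: Gaussian likelihood of the observation
        ws = [math.exp(-(y - p) ** 2 / (2.0 * r)) + 1e-300 for p in parts]
        tot = sum(ws)
        estimates.append(sum(w * p for w, p in zip(ws, parts)) / tot)
        # resample: multinomial resampling proportional to the weights
        parts = rng.choices(parts, weights=ws, k=n_part)
    return estimates

rng = random.Random(11)
q, r, T = 0.1, 0.5, 200
x, xs, ys = 0.0, [], []
for _ in range(T):                    # simulate the hidden state and data
    x = 0.9 * x + rng.gauss(0.0, math.sqrt(q))
    xs.append(x)
    ys.append(x + rng.gauss(0.0, math.sqrt(r)))
est = particle_filter(ys, 500, q, r, rng)
rmse_pf = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, xs)) / T)
rmse_obs = math.sqrt(sum((y - t) ** 2 for y, t in zip(ys, xs)) / T)
# The filtered estimate tracks the state better than the raw observations.
```

In the combined EEG/fNIRS setting, the state would be the cortical activity, and the weighting step would use both measurement likelihoods through their respective forward models.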
Modeling of multi-band drift in nanowires using a full band Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Hathwar, Raghuraj; Saraniti, Marco; Goodnick, Stephen M.
2016-07-01
We report on a new numerical approach for multi-band drift within the context of full band Monte Carlo (FBMC) simulation and apply this to Si and InAs nanowires. The approach is based on the solution of the Krieger and Iafrate (KI) equations [J. B. Krieger and G. J. Iafrate, Phys. Rev. B 33, 5494 (1986)], which gives the probability of carriers undergoing interband transitions subject to an applied electric field. The KI equations are based on the solution of the time-dependent Schrödinger equation, and previous solutions of these equations have used Runge-Kutta (RK) methods to numerically solve the KI equations. This approach made the solution of the KI equations numerically expensive and was therefore only applied to a small part of the Brillouin zone (BZ). Here we discuss an alternate approach to the solution of the KI equations using the Magnus expansion (also known as "exponential perturbation theory"). This method is more accurate than the RK method as the solution lies on the exponential map and shares important qualitative properties with the exact solution such as the preservation of the unitary character of the time evolution operator. The solution of the KI equations is then incorporated through a modified FBMC free-flight drift routine and applied throughout the nanowire BZ. The importance of the multi-band drift model is then demonstrated for the case of Si and InAs nanowires by simulating a uniform field FBMC and analyzing the average carrier energies and carrier populations under high electric fields. Numerical simulations show that the average energy of the carriers under high electric field is significantly higher when multi-band drift is taken into consideration, due to the interband transitions allowing carriers to achieve higher energies.
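The structural advantage of exponential integrators mentioned above (the solution stays on the exponential map, so the time evolution remains unitary) can be sketched on a toy two-level Schrödinger equation: a first-order Magnus (exponential midpoint) step preserves the norm exactly, while an explicit Euler step does not. The Hamiltonian below is an illustrative assumption, not the KI-equation solver itself.

```python
import math

def magnus_step(psi, v, dt):
    # One exponential-midpoint (first-order Magnus) step for H = v * sigma_x:
    # exp(-i*a*sigma_x) = cos(a) I - i sin(a) sigma_x, applied exactly
    a = v * dt
    c, s = math.cos(a), -1j * math.sin(a)
    return (c * psi[0] + s * psi[1], s * psi[0] + c * psi[1])

def euler_step(psi, v, dt):
    # Plain explicit step psi += -i H psi dt (not norm-preserving)
    return (psi[0] - 1j * v * dt * psi[1], psi[1] - 1j * v * dt * psi[0])

dt, n = 0.01, 1000
psi_m = (1.0 + 0j, 0.0 + 0j)
psi_e = (1.0 + 0j, 0.0 + 0j)
for k in range(n):
    psi_m = magnus_step(psi_m, math.cos((k + 0.5) * dt), dt)  # midpoint H(t)
    psi_e = euler_step(psi_e, math.cos(k * dt), dt)

norm_m = abs(psi_m[0]) ** 2 + abs(psi_m[1]) ** 2   # stays 1 (unitary map)
norm_e = abs(psi_e[0]) ** 2 + abs(psi_e[1]) ** 2   # drifts above 1
```

Because the Magnus update is a matrix exponential of a Hermitian generator, probability is conserved at any step size, which is the qualitative property the abstract credits over Runge-Kutta integration.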
Numerical integration of detector response functions via Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
Numerical integration of detector response functions via Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.
2017-09-01
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼ 1000 × faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
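The ~1000× speedup comes from replacing a fresh Monte Carlo run per trial spectrum with a fold of that spectrum through a pre-computed response matrix. A schematic sketch follows; the response matrix below is invented (and idealized to full efficiency), not the Chi-Nu response.

```python
import numpy as np

rng = np.random.default_rng(1)
n_true, n_meas = 64, 80

# Invented stand-in for a pre-computed response matrix R[j, i]: probability that
# a particle emitted in true-energy bin i is recorded in measured bin j.
# Columns are normalized to 1 (idealized full efficiency) purely for illustration.
R = rng.random((n_meas, n_true)) * np.exp(
    -0.1 * np.abs(np.arange(n_meas)[:, None] - np.arange(n_true)[None, :]))
R /= R.sum(axis=0, keepdims=True)

def fold(spectrum):
    """Predicted detector spectrum: one matrix-vector product per trial spectrum,
    instead of one full Monte Carlo simulation per trial spectrum."""
    return R @ spectrum

phi = np.exp(-np.arange(n_true) / 20.0)   # a trial source spectrum
counts = fold(phi)
```

With unit-normalized columns the fold conserves the total emitted intensity, which is a convenient sanity check on any response matrix of this kind.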
Numerical integration of detector response functions via Monte Carlo simulations
Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.; ...
2017-06-13
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2014-03-01
The hybrid Monte Carlo algorithm (HMCA) is applied for Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimation of the RSV model.
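For reference, the core of a hybrid Monte Carlo update with the conventional leapfrog integrator (the baseline the 2MNI is compared against) can be sketched as follows, here sampling a standard normal rather than the RSV posterior; all tuning parameters are illustrative.

```python
import numpy as np

def hmc(logp, grad, x0=0.0, n_samples=2000, eps=0.2, n_leap=10, seed=0):
    """Hybrid Monte Carlo with the conventional leapfrog integrator."""
    rng = np.random.default_rng(seed)
    x, out = x0, []
    for _ in range(n_samples):
        p = rng.normal()                           # fresh Gaussian momentum
        x_new, p_new = x, p
        p_new += 0.5 * eps * grad(x_new)           # half momentum kick
        for step in range(n_leap):
            x_new += eps * p_new                   # full position drift
            if step < n_leap - 1:
                p_new += eps * grad(x_new)         # full momentum kick
        p_new += 0.5 * eps * grad(x_new)           # final half kick
        d_h = 0.5 * (p_new**2 - p**2) - (logp(x_new) - logp(x))
        if rng.random() < np.exp(min(0.0, -d_h)):  # Metropolis test on the Hamiltonian
            x = x_new
        out.append(x)
    return np.array(out)

# sample a standard normal: logp(x) = -x^2/2, grad logp(x) = -x
s = hmc(lambda x: -0.5 * x * x, lambda x: -x)
```

Swapping the leapfrog loop for a higher-order integrator such as the 2MNI changes only the trajectory step; the Metropolis correction on the Hamiltonian stays the same.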
Theory for the solvation of nonpolar solutes in water
NASA Astrophysics Data System (ADS)
Urbic, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Dill, K. A.
2007-11-01
We recently developed an angle-dependent Wertheim integral equation theory (IET) of the Mercedes-Benz (MB) model of pure water [Silverstein et al., J. Am. Chem. Soc. 120, 3166 (1998)]. Our approach treats explicitly the coupled orientational constraints within water molecules. The analytical theory offers the advantage of being less computationally expensive than Monte Carlo simulations by two orders of magnitude. Here we apply the angle-dependent IET to studying the hydrophobic effect, the transfer of a nonpolar solute into MB water. We find that the theory reproduces the Monte Carlo results qualitatively for cold water and quantitatively for hot water.
Theory for the solvation of nonpolar solutes in water.
Urbic, T; Vlachy, V; Kalyuzhnyi, Yu V; Dill, K A
2007-11-07
We recently developed an angle-dependent Wertheim integral equation theory (IET) of the Mercedes-Benz (MB) model of pure water [Silverstein et al., J. Am. Chem. Soc. 120, 3166 (1998)]. Our approach treats explicitly the coupled orientational constraints within water molecules. The analytical theory offers the advantage of being less computationally expensive than Monte Carlo simulations by two orders of magnitude. Here we apply the angle-dependent IET to studying the hydrophobic effect, the transfer of a nonpolar solute into MB water. We find that the theory reproduces the Monte Carlo results qualitatively for cold water and quantitatively for hot water.
Frequency-resolved Monte Carlo.
López Carreño, Juan Camilo; Del Valle, Elena; Laussy, Fabrice P
2018-05-03
We adapt the Quantum Monte Carlo method to the cascaded formalism of quantum optics, allowing us to simulate the emission of photons of known energy. Statistical processing of the photon clicks thus collected agrees with the theory of frequency-resolved photon correlations, extending the range of applications based on correlations of photons of prescribed energy, in particular those of a photon-counting character. We apply the technique to autocorrelations of photon streams from a two-level system under coherent and incoherent pumping, including the Mollow triplet regime where we demonstrate the direct manifestation of leapfrog processes in producing an increased rate of two-photon emission events.
NASA Astrophysics Data System (ADS)
Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.
2015-11-01
Earlier, a two-component pseudopotential plasma model, which we call the “shelf Coulomb” model, was developed. A Monte-Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte-Carlo technique to this model. First simulation results show qualitatively similar behavior in the critical point region for both methods. The Gibbs ensemble technique lets us estimate the melting curve position and the triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10⁻⁴.
García-Pareja, S; Galán, P; Manzano, F; Brualla, L; Lallena, A M
2010-07-01
In this work, the authors describe an approach which has been developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. The new approach considers the following techniques: Russian roulette, splitting, a modified version of the directional bremsstrahlung splitting, and the azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within approximately 3%/0.3 mm for the central axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed within other approaches common in this field. The new approach is competitive with those previously used in this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool to simulate the radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
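Two of the variance-reduction techniques named above, Russian roulette and splitting, act on particle statistical weights and can be sketched as below. The thresholds are illustrative assumptions; in the paper they are driven by the ant colony algorithm and the importance map.

```python
import math
import numpy as np

def roulette_and_split(weights, w_low=0.2, w_high=2.0, seed=0):
    """Russian roulette (kill low-weight particles) and splitting (divide
    high-weight particles), preserving the expected total weight."""
    rng = np.random.default_rng(seed)
    out = []
    for w in weights:
        if w < w_low:
            # roulette: survive with probability w/w_low, promoted to weight w_low,
            # so the expected contribution is unchanged
            if rng.random() < w / w_low:
                out.append(w_low)
        elif w > w_high:
            # split into n copies of weight w/n <= w_high (conserves weight exactly)
            n = math.ceil(w / w_high)
            out.extend([w / n] * n)
        else:
            out.append(w)
    return out

rng = np.random.default_rng(7)
ws = rng.uniform(0.01, 3.0, 10000).tolist()
out = roulette_and_split(ws)
```

After the pass, every weight lies inside the [w_low, w_high] window, which is what keeps the variance of the tally under control.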
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry.
Bostani, Maryam; Mueller, Jonathon W; McMillan, Kyle; Cody, Dianna D; Cagnon, Chris H; DeMarco, John J; McNitt-Gray, Michael F
2015-02-01
The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. The calculated mean percent difference between TLD measurements and Monte Carlo simulations was -4.9% with standard deviation of 8.7% and a range of -22.7% to 5.7%. The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
SU-F-T-370: A Fast Monte Carlo Dose Engine for Gamma Knife
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, T; Zhou, L; Li, Y
2016-06-15
Purpose: To develop a fast Monte Carlo dose calculation algorithm for Gamma Knife. Methods: To make the simulation more efficient, we implemented the track repeating technique on GPU. We first use EGSnrc to pre-calculate the photon and secondary electron tracks in water from the two mono-energy photons of 60Co. The total photon mean free paths for different materials and energies are obtained from NIST. During simulation, each entire photon track was first loaded to shared memory for each block; the incident original photon was then split into Nthread sub-photons, each thread transporting one sub-photon, and the Russian roulette technique was applied for scattered and bremsstrahlung photons. The resultant electrons from photon interactions are simulated by repeating the recorded electron tracks. The electron step length is stretched/shrunk proportionally based on the local density and stopping power ratios of the local material. Energy deposition in a voxel is proportional to the fraction of the equivalent step length in that voxel. To evaluate its accuracy, dose deposition in a 300 mm × 300 mm × 300 mm water phantom is calculated and compared to EGSnrc results. Results: Both PDD and OAR showed excellent agreement (within 0.5%) between our dose engine result and the EGSnrc result. Each simulation takes less than 1 min, a speedup of up to ∼40 times compared to EGSnrc simulations. Conclusion: We have successfully developed a fast Monte Carlo dose engine for Gamma Knife.
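The step-length stretching at the heart of track repeating can be sketched as follows. This is a CPU toy with invented densities; the stopping-power ratio correction and the GPU shared-memory layout of the abstract are omitted for brevity.

```python
import numpy as np

def repeat_track(water_steps, density, water_density=1.0):
    """Track repeating sketch: reuse a pre-simulated water electron track in
    another medium by stretching/shrinking each step by the density ratio.
    A denser medium yields proportionally shorter steps."""
    return water_steps * (water_density / density)

rng = np.random.default_rng(2)
track = rng.normal(size=(50, 3)) * 0.1     # invented pre-simulated step vectors [cm]
lung = repeat_track(track, density=0.26)   # lung-like density [g/cm^3]
bone = repeat_track(track, density=1.85)   # bone-like density [g/cm^3]
```

Energy deposition then follows the scaled steps voxel by voxel, in proportion to the fraction of each equivalent step length inside the voxel.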
Chiang, Chia-Wen; Wang, Yong; Sun, Peng; Lin, Tsen-Hsuan; Trinkaus, Kathryn; Cross, Anne H.; Song, Sheng-Kwei
2014-01-01
The effect of extra-fiber structural and pathological components confounding diffusion tensor imaging (DTI) computation was quantitatively investigated using data generated by both Monte-Carlo simulations and tissue phantoms. Increased extent of vasogenic edema, modeled by adding various amounts of gel to fixed normal mouse trigeminal nerves or by increasing non-restricted isotropic diffusion tensor components in Monte-Carlo simulations, significantly decreased fractional anisotropy (FA) and increased radial diffusivity, while increasing axial diffusivity less markedly, as derived by DTI. Increased cellularity, mimicked by a graded increase of the restricted isotropic diffusion tensor component in Monte-Carlo simulations, significantly decreased FA and axial diffusivity with limited impact on radial diffusivity derived by DTI. The MC simulation and tissue phantom data were also analyzed by the recently developed diffusion basis spectrum imaging (DBSI) to simultaneously distinguish and quantify the axon/myelin integrity and extra-fiber diffusion components. Results showed that increased cellularity or vasogenic edema did not affect the DBSI-derived fiber FA, axial or radial diffusivity. Importantly, the extent of extra-fiber cellularity and edema estimated by DBSI correlated with the experimentally added gel and the Monte-Carlo simulations. We also examined the feasibility of applying a 25-direction diffusion encoding scheme for DBSI analysis on coherent white matter tracts. Results from both phantom experiments and simulations suggested that the 25-direction scheme provided DBSI estimates of fiber diffusion parameters and extra-fiber cellularity/edema extent comparable to those obtained with the 99-direction scheme.
An in vivo 25-direction DBSI analysis was performed on experimental autoimmune encephalomyelitis (EAE, an animal model of human multiple sclerosis) optic nerve as an example to examine the validity of derived DBSI parameters with post-imaging immunohistochemistry verification. Results support that in vivo DBSI using 25-direction diffusion scheme correctly reflect the underlying axonal injury, demyelination, and inflammation of optic nerves in EAE mice. PMID:25017446
Smart darting diffusion Monte Carlo: Applications to lithium ion-Stockmayer clusters
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Jake, L. C.; Curotto, E.
2016-05-01
In a recent investigation [K. Roberts et al., J. Chem. Phys. 136, 074104 (2012)], we have shown that, for a sufficiently complex potential, the Diffusion Monte Carlo (DMC) random walk can become quasiergodic, and we have introduced smart darting-like moves to improve the sampling. In this article, we systematically characterize the bias that smart darting moves introduce in the estimate of the ground state energy of a bosonic system. We then test a simple approach to eliminate completely such bias from the results. The approach is applied for the determination of the ground state of lithium ion-n-dipoles clusters in the n = 8-20 range. For these, the smart darting diffusion Monte Carlo simulations find the same ground state energy and mixed-distribution as the traditional approach for n < 14. In larger systems we find that while the ground state energies agree quantitatively with or without smart darting moves, the mixed-distributions can be significantly different. Some evidence is offered to conclude that introducing smart darting-like moves in traditional DMC simulations may produce a more reliable ground state mixed-distribution.
Selb, Juliette; Ogden, Tyler M.; Dubb, Jay; Fang, Qianqian; Boas, David A.
2014-01-01
Abstract. Near-infrared spectroscopy (NIRS) estimations of the adult brain baseline optical properties based on a homogeneous model of the head are known to introduce significant contamination from extracerebral layers. More complex models have been proposed and occasionally applied to in vivo data, but their performances have never been characterized on realistic head structures. Here we implement a flexible fitting routine of time-domain NIRS data using graphics processing unit based Monte Carlo simulations. We compare the results for two different geometries: a two-layer slab with variable thickness of the first layer and a template atlas head registered to the subject’s head surface. We characterize the performance of the Monte Carlo approaches for fitting the optical properties from simulated time-resolved data of the adult head. We show that both geometries provide better results than the commonly used homogeneous model, and we quantify the improvement in terms of accuracy, linearity, and cross-talk from extracerebral layers. PMID:24407503
Computing Temperatures in Optically Thick Protoplanetary Disks
NASA Technical Reports Server (NTRS)
Capuder, Lawrence F., Jr.
2011-01-01
We worked with a Monte Carlo radiative transfer code to simulate the transfer of energy through protoplanetary disks, where planet formation occurs. The code tracks photons from the star into the disk, through scattering, absorption and re-emission, until they escape to infinity. High optical depths in the disk interior dominate the computation time because it takes the photon packet many interactions to get out of the region. High optical depths also receive few photons and therefore do not have well-estimated temperatures. We applied a modified random walk (MRW) approximation for treating high optical depths and to speed up the Monte Carlo calculations. The MRW is implemented by calculating the average number of interactions the photon packet will undergo in diffusing within a single cell of the spatial grid and then updating the packet position, packet frequencies, and local radiation absorption rate appropriately. The MRW approximation was then tested for accuracy and speed compared to the original code. We determined that MRW provides accurate answers to Monte Carlo Radiative transfer simulations. The speed gained from using MRW is shown to be proportional to the disk mass.
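The quadratic scaling of interaction counts with optical depth that motivates the MRW shortcut can be checked with a small random-walk experiment. This is a toy sketch, not the radiative transfer code: isotropic scattering in a uniform sphere, with the radius and photon count chosen arbitrarily.

```python
import numpy as np

def mean_steps_to_escape(R, mfp=1.0, n_photons=200, seed=3):
    """Average number of isotropic scatterings before a photon leaves a sphere
    of radius R, given an exponential free path of mean mfp. In the optically
    thick limit this grows like (R/mfp)^2, which is what makes the
    diffusion-based modified-random-walk shortcut worthwhile."""
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(n_photons):
        pos, n = np.zeros(3), 0
        while np.dot(pos, pos) < R * R:
            u = rng.normal(size=3)                       # isotropic direction
            pos += rng.exponential(mfp) * u / np.linalg.norm(u)
            n += 1
        counts.append(n)
    return float(np.mean(counts))

n10 = mean_steps_to_escape(10.0)   # optical depth ~10 from center to edge
n20 = mean_steps_to_escape(20.0)   # doubling the depth roughly quadruples the work
```

The MRW replaces those many individual scatterings in one cell with a single diffusion-sized jump, which is exactly where the quadratic cost is being paid.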
DOT National Transportation Integrated Search
2011-05-01
This report describes an assessment of digital elevation models (DEMs) derived from : LiDAR data for a subset of the Ports of Los Angeles and Long Beach. A methodology : based on Monte Carlo simulation was applied to investigate the accuracy of DEMs ...
Molecular-level simulations of turbulence and its decay
Gallis, M. A.; Bitter, N. P.; Koehler, T. P.; ...
2017-02-08
Here, we provide the first demonstration that molecular-level methods based on gas kinetic theory and molecular chaos can simulate turbulence and its decay. The direct simulation Monte Carlo (DSMC) method, a molecular-level technique for simulating gas flows that resolves phenomena from molecular to hydrodynamic (continuum) length scales, is applied to simulate the Taylor-Green vortex flow. The DSMC simulations reproduce the Kolmogorov –5/3 law and agree well with the turbulent kinetic energy and energy dissipation rate obtained from direct numerical simulation of the Navier-Stokes equations using a spectral method. This agreement provides strong evidence that molecular-level methods for gases can be used to investigate turbulent flows quantitatively.
NASA Astrophysics Data System (ADS)
Hashimoto, S.; Iwamoto, Y.; Sato, T.; Niita, K.; Boudard, A.; Cugnon, J.; David, J.-C.; Leray, S.; Mancusi, D.
2014-08-01
A new approach to describing neutron spectra of deuteron-induced reactions in the Monte Carlo simulation for particle transport has been developed by combining the Intra-Nuclear Cascade of Liège (INCL) and the Distorted Wave Born Approximation (DWBA) calculation. We incorporated this combined method into the Particle and Heavy Ion Transport code System (PHITS) and applied it to estimate (d,xn) spectra on natLi, 9Be, and natC targets at incident energies ranging from 10 to 40 MeV. Double differential cross sections obtained by INCL and DWBA successfully reproduced broad peaks and discrete peaks, respectively, at the same energies as those observed in experimental data. Furthermore, an excellent agreement was observed between experimental data and PHITS-derived results using the combined method in thick target neutron yields over a wide range of neutron emission angles in the reactions. We also applied the new method to estimate (d,xp) spectra in the reactions, and discussed the validity for the proton emission spectra.
Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.
Xin, Cao; Chongshi, Gu
2016-01-01
Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. The stability failure risk ratio, described jointly by probability and possibility, is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. Stability of a gravity dam is viewed as a hybrid event considering both the fuzziness and the randomness of the failure criterion, design parameters, and measured data. A credibility distribution function is introduced as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to risk calculation for both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analysis of stability failure risk of gravity dams. The risk assessment obtained can reflect the influence of both sorts of uncertainty, and is suitable as an index value.
The Development of Design Tools for Fault Tolerant Quantum Dot Cellular Automata Based Logic
NASA Technical Reports Server (NTRS)
Armstrong, Curtis D.; Humphreys, William M.
2003-01-01
We are developing software to explore the fault tolerance of quantum dot cellular automata gate architectures in the presence of manufacturing variations and device defects. The Topology Optimization Methodology using Applied Statistics (TOMAS) framework extends the capabilities of the A Quantum Interconnected Network Array Simulator (AQUINAS) by adding front-end and back-end software and creating an environment that integrates all of these components. The front-end tools establish all simulation parameters, configure the simulation system, automate the Monte Carlo generation of simulation files, and execute the simulation of these files. The back-end tools perform automated data parsing, statistical analysis and report generation.
[Accuracy Check of Monte Carlo Simulation in Particle Therapy Using Gel Dosimeters].
Furuta, Takuya
2017-01-01
Gel dosimeters are a three-dimensional imaging tool for dose distribution induced by radiations. They can be used for accuracy check of Monte Carlo simulation in particle therapy. An application was reviewed in this article. An inhomogeneous biological sample placing a gel dosimeter behind it was irradiated by carbon beam. The recorded dose distribution in the gel dosimeter reflected the inhomogeneity of the biological sample. Monte Carlo simulation was conducted by reconstructing the biological sample from its CT image. The accuracy of the particle transport by Monte Carlo simulation was checked by comparing the dose distribution in the gel dosimeter between simulation and experiment.
Parallel tempering Monte Carlo simulations of lysozyme orientation on charged surfaces
NASA Astrophysics Data System (ADS)
Xie, Yun; Zhou, Jian; Jiang, Shaoyi
2010-02-01
In this work, the parallel tempering Monte Carlo (PTMC) algorithm is applied to accurately and efficiently identify the global-minimum-energy orientation of a protein adsorbed on a surface in a single simulation. When applying the PTMC method to simulate lysozyme orientation on charged surfaces, it is found that lysozyme could easily be adsorbed on negatively charged surfaces with "side-on" and "back-on" orientations. When driven by dominant electrostatic interactions, lysozyme tends to be adsorbed on negatively charged surfaces with the side-on orientation for which the active site of lysozyme faces sideways. The side-on orientation agrees well with the experimental results where the adsorbed orientation of lysozyme is determined by electrostatic interactions. As the contribution from van der Waals interactions gradually dominates, the back-on orientation becomes the preferred one. For this orientation, the active site of lysozyme faces outward, which conforms to the experimental results where the orientation of adsorbed lysozyme is co-determined by electrostatic interactions and van der Waals interactions. It is also found that despite its net positive charge, lysozyme could be adsorbed on positively charged surfaces with both "end-on" and back-on orientations owing to the nonuniform charge distribution over the lysozyme surface and the screening effect from ions in solution. The PTMC simulation method provides a way to determine the preferred orientation of proteins on surfaces for biosensor and biomaterial applications.
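The replica-exchange move underlying parallel tempering can be sketched on a one-dimensional double well. The energy landscape and temperature ladder below are toy assumptions, not the protein-surface model; the point is that swaps let the cold replica inherit barrier crossings made at high temperature.

```python
import numpy as np

def energy(x):
    """1-D double well with minima near x = +/-1 and a unit barrier at x = 0."""
    return (x * x - 1.0) ** 2

def parallel_tempering(betas, n_sweeps=4000, step=0.5, seed=5):
    rng = np.random.default_rng(seed)
    x = np.full(len(betas), -1.0)              # start every replica in the left well
    cold = []
    for _ in range(n_sweeps):
        for i, b in enumerate(betas):          # one Metropolis move per replica
            xn = x[i] + rng.uniform(-step, step)
            if rng.random() < np.exp(min(0.0, -b * (energy(xn) - energy(x[i])))):
                x[i] = xn
        j = int(rng.integers(len(betas) - 1))  # attempt one neighbor swap
        d = (betas[j] - betas[j + 1]) * (energy(x[j]) - energy(x[j + 1]))
        if rng.random() < np.exp(min(0.0, d)):
            x[j], x[j + 1] = x[j + 1], x[j]
        cold.append(x[0])
    return np.array(cold)

# coldest replica first; hot replicas cross the barrier freely and feed
# right-well configurations down the temperature ladder through swaps
trace = parallel_tempering(betas=[8.0, 4.0, 2.0, 1.0, 0.5])
```

Without the swap move, the coldest replica (βΔE = 8 at the barrier) would stay trapped in its starting well for a very long time; with it, both wells are visited.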
Absolute dose calculations for Monte Carlo simulations of radiotherapy beams.
Popescu, I A; Shaw, C P; Zavgorodni, S F; Beckham, W A
2005-07-21
Monte Carlo (MC) simulations have traditionally been used for single field relative comparisons with experimental data or commercial treatment planning systems (TPS). However, clinical treatment plans commonly involve more than one field. Since the contribution of each field must be accurately quantified, multiple field MC simulations are only possible by employing absolute dosimetry. Therefore, we have developed a rigorous calibration method that allows the incorporation of monitor units (MU) in MC simulations. This absolute dosimetry formalism can be easily implemented by any BEAMnrc/DOSXYZnrc user, and applies to any configuration of open and blocked fields, including intensity-modulated radiation therapy (IMRT) plans. Our approach involves the relationship between the dose scored in the monitor ionization chamber of a radiotherapy linear accelerator (linac), the number of initial particles incident on the target, and the field size. We found that for a 10 × 10 cm² field of a 6 MV photon beam, 1 MU corresponds, in our model, to 8.129 × 10¹³ ± 1.0% electrons incident on the target and a total dose of 20.87 cGy ± 1.0% in the monitor chambers of the virtual linac. We present an extensive experimental verification of our MC results for open and intensity-modulated fields, including a dynamic 7-field IMRT plan simulated on the CT data sets of a cylindrical phantom and of a Rando anthropomorphic phantom, which were validated by measurements using ionization chambers and thermoluminescent dosimeters (TLD). Our simulation results are in excellent agreement with experiment, with percentage differences of less than 2%, in general, demonstrating the accuracy of our Monte Carlo absolute dose calculations.
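The calibration turns a Monte Carlo dose-per-source-particle into absolute dose. Using the calibration reported in the abstract (8.129 × 10¹³ electrons on target per MU for a 6 MV, 10 × 10 cm² field), the conversion is a single product; the per-particle dose and MU values below are invented for illustration.

```python
# Converting MC dose-per-source-particle into absolute dose via the MU calibration.
# particles_per_MU is the abstract's reported value; the inputs to the example
# call are hypothetical.
particles_per_MU = 8.129e13

def absolute_dose_cGy(dose_per_particle_cGy, mu):
    """Absolute dose = (MC dose per source particle) x (particles per MU) x MU."""
    return dose_per_particle_cGy * particles_per_MU * mu

# hypothetical field scoring 1.5e-14 cGy per source particle, delivered with 120 MU
dose = absolute_dose_cGy(1.5e-14, 120.0)
```

Summing this contribution over every field of a plan is what makes multi-field MC comparisons against the TPS possible.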
Groves, Chris; Kimber, Robin G E; Walker, Alison B
2010-10-14
In this letter we evaluate the accuracy of the first reaction method (FRM) as commonly used to reduce the computational complexity of mesoscale Monte Carlo simulations of geminate recombination and the performance of organic photovoltaic devices. A wide range of carrier mobilities, degrees of energetic disorder, and applied electric field are considered. For the ranges of energetic disorder relevant for most polyfluorene, polythiophene, and alkoxy poly(phenylene vinylene) materials used in organic photovoltaics, the geminate separation efficiency predicted by the FRM agrees with the exact model to better than 2%. We additionally comment on the effects of equilibration on low-field geminate separation efficiency, and in doing so emphasize the importance of the energy at which geminate carriers are created upon their subsequent behavior.
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude speedups can be obtained on both multi-core CPU and on GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
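The conventional rank-1 Sherman-Morrison row update that the delayed rank-k scheme generalizes can be sketched and checked against direct inversion. This is an illustrative sketch, not the authors' implementation; the matrix and the "move" are invented test data.

```python
import numpy as np

def replace_row_inv(Ainv, new_row, old_row, k):
    """Sherman-Morrison rank-1 update of A^{-1} after row k of A changes.

    Writing A_new = A + e_k v^T with v = new_row - old_row, the inverse updates as
        A_new^{-1} = A^{-1} - (A^{-1} e_k)(v^T A^{-1}) / (1 + v^T A^{-1} e_k),
    an O(n^2) operation instead of an O(n^3) re-inversion. The delayed rank-k
    scheme described above batches several accepted updates of this kind and
    applies them en bloc for higher arithmetic intensity."""
    v = new_row - old_row
    col = Ainv[:, k]              # A^{-1} e_k
    row = v @ Ainv                # v^T A^{-1}
    return Ainv - np.outer(col, row) / (1.0 + row[k])

rng = np.random.default_rng(9)
n = 8
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
Ainv = np.linalg.inv(A)
new_row = A[3] + 0.5 * rng.normal(size=n)     # a proposed "move" changing row 3
A_new = A.copy()
A_new[3] = new_row
Ainv_new = replace_row_inv(Ainv, new_row, A[3], 3)
```

In a determinant-based QMC code, the same identity also yields the acceptance ratio det(A_new)/det(A) = 1 + v^T A^{-1} e_k without computing either determinant explicitly.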
SU-E-T-455: Characterization of 3D Printed Materials for Proton Beam Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, W; Siderits, R; McKenna, M
2014-06-01
Purpose: The widespread availability of low cost 3D printing technologies provides an alternative fabrication method for customized proton range modifying accessories such as compensators and boluses. However the material properties of the printed object are dependent on the printing technology used. In order to facilitate the application of 3D printing in proton therapy, this study investigated the stopping power of several printed materials using both proton pencil beam measurements and Monte Carlo simulations. Methods: Five 3–4 cm cubes fabricated using three 3D printing technologies (selective laser sintering, fused-deposition modeling and stereolithography) from five printers were investigated. The cubes were scanned on a CT scanner and the depth dose curves for a mono-energetic pencil beam passing through the material were measured using a large parallel plate ion chamber in a water tank. Each cube was measured from two directions (perpendicular and parallel to the printing plane) to evaluate the effects of the anisotropic material layout. The results were compared with GEANT4 Monte Carlo simulation using the manufacturer specified material density and chemical composition data. Results: Compared with water, the range pull-back differences of the printed blocks varied and corresponded well with the material CT Hounsfield units. The measurement results were in agreement with Monte Carlo simulation. However, depending on the technology, inhomogeneity existed in the printed cubes, evidenced from CT images. The effect of such inhomogeneity on the proton beam is to be investigated. Conclusion: Printed blocks made by three different 3D printing technologies were characterized for proton beams with measurements and Monte Carlo simulation. The effects of the printing technologies on proton range and stopping power were studied. The derived results can be applied when specific devices are used in proton radiotherapy.
NASA Astrophysics Data System (ADS)
Sawada, Ikuo
2012-10-01
We measured the radial distribution of electron density in a 200 mm parallel-plate CCP and compared it with results from numerical simulations. The experiments were conducted with pure Ar gas at pressures ranging from 15 to 100 mTorr and 60 MHz applied at the top electrode with powers from 500 to 2000 W. The measured electron profile is peaked in the center, and the relative non-uniformity is higher at 100 mTorr than at 15 mTorr. We compare the experimental results with simulations from both HPEM and Monte-Carlo/PIC codes. In the HPEM simulations, we used either the fluid or the electron Monte-Carlo module, and the Poisson or the electromagnetic solver. None of the models was able to duplicate the experimental results quantitatively. However, HPEM with the electron Monte-Carlo module and PIC qualitatively matched the experimental results. We will discuss the results from these models and how they illuminate the mechanism of the enhanced electron central peak.
[1] T. Oshita, M. Matsukuma, S. Y. Kang, I. Sawada: The effect of non-uniform RF voltage in a CCP discharge, The 57th JSAP Spring Meeting, 2010
[2] I. Sawada, K. Matsuzaki, S. Y. Kang, T. Ohshita, M. Kawakami, S. Segawa: 1st IC-PLANTS, 2008
Gill, Samuel C; Lim, Nathan M; Grinaway, Patrick B; Rustenburg, Ariën S; Fass, Josh; Ross, Gregory A; Chodera, John D; Mobley, David L
2018-05-31
Accurately predicting protein-ligand binding affinities and binding modes is a major goal in computational chemistry, but even the prediction of ligand binding modes in proteins poses major challenges. Here, we focus on solving the binding mode prediction problem for rigid fragments. That is, we focus on computing the dominant placement, conformation, and orientations of a relatively rigid, fragment-like ligand in a receptor, and the populations of the multiple binding modes which may be relevant. This problem is important in its own right, but is even more timely given the recent success of alchemical free energy calculations. Alchemical calculations are increasingly used to predict binding free energies of ligands to receptors. However, the accuracy of these calculations is dependent on proper sampling of the relevant ligand binding modes. Unfortunately, ligand binding modes may often be uncertain, hard to predict, and/or slow to interconvert on simulation time scales, so proper sampling with current techniques can require prohibitively long simulations. We need new methods which dramatically improve sampling of ligand binding modes. Here, we develop and apply a nonequilibrium candidate Monte Carlo (NCMC) method to improve sampling of ligand binding modes. In this technique, the ligand is rotated and subsequently allowed to relax in its new position through alchemical perturbation before accepting or rejecting the rotation and relaxation as a nonequilibrium Monte Carlo move. When applied to a T4 lysozyme model binding system, this NCMC method shows over 2 orders of magnitude improvement in binding mode sampling efficiency compared to a brute force molecular dynamics simulation. This is a first step toward applying this methodology to pharmaceutically relevant binding of fragments and, eventually, drug-like molecules. 
We are making this approach available via our new Binding Modes of Ligands Using Enhanced Sampling (BLUES) package, which is freely available on GitHub.
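The gain from proposing whole binding-mode swaps can be illustrated on a toy system. The sketch below uses a plain Metropolis mode-jump proposal rather than the paper's full alchemical NCMC protocol: a symmetric double well stands in for two binding modes, and an occasional x → -x proposal plays the role of the ligand rotation. All parameter values are illustrative.

```python
import math
import random

def double_well(x):
    # symmetric double well with minima near x = +1 and x = -1,
    # standing in for two ligand binding modes
    return (x * x - 1.0) ** 2

def sample_left_fraction(n_steps, jump_prob, beta=10.0, step=0.1, seed=1):
    """Metropolis sampling with occasional mode-jump proposals (x -> -x).
    Returns the fraction of samples in the left well; by symmetry the
    exact answer is 0.5."""
    rng = random.Random(seed)
    x, left = 1.0, 0
    for _ in range(n_steps):
        if rng.random() < jump_prob:
            x_new = -x                              # mode-jump proposal
        else:
            x_new = x + rng.uniform(-step, step)    # local move
        du = double_well(x_new) - double_well(x)
        if du <= 0.0 or rng.random() < math.exp(-beta * du):
            x = x_new
        if x < 0.0:
            left += 1
    return left / n_steps

frac = sample_left_fraction(100000, jump_prob=0.1)
```

Because this toy jump proposal is symmetric it is always accepted; in real NCMC the intermediate alchemical relaxation supplies the nonequilibrium work term that keeps acceptance reasonable when the rotated ligand clashes with a dense receptor environment.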
Error Analyses of the North Alabama Lightning Mapping Array (LMA)
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at NASA-MSFC and that is similar, but not identical, to the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared curvature matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
NASA Astrophysics Data System (ADS)
Sang, Chaofeng; Sun, Jizhong; Ren, Chunsheng; Wang, Dezhen
2009-02-01
A self-consistent particle-in-cell model, one-dimensional in position and three-dimensional in velocity space, with the Monte Carlo collision technique was employed to simulate an argon discharge between needle and plane electrodes at high pressure, in which a nanosecond rectangular pulse was applied to the needle electrode. The work focused on the spatiotemporal evolution of the discharge as a function of the needle-tip size and working-gas pressure. The simulation results showed that at atmospheric pressure the discharge occurred mainly in the region near the needle tip, and that a small needle-tip radius made the discharge easier to ignite. Reducing the gas pressure gave rise to a transition from a corona discharge to a glow-like discharge along the needle-to-plane direction. The microscopic mechanism for the transition can arguably be attributed to the peak of high-energy electrons occurring before breakdown; the number of these electrons determined whether breakdown could take place.
NASA Astrophysics Data System (ADS)
Goldsworthy, M. J.
2012-10-01
One of the most useful tools for modelling rarefied hypersonic flows is the Direct Simulation Monte Carlo (DSMC) method. Simulator particle movement and collision calculations are combined with statistical procedures to model thermal non-equilibrium flow-fields described by the Boltzmann equation. The Macroscopic Chemistry Method for DSMC simulations was developed to simplify the inclusion of complex thermal non-equilibrium chemistry. The macroscopic approach uses, in its modelling procedures, statistical information that is calculated during the DSMC solution process. Here it is shown how the inclusion of macroscopic information in models of chemical kinetics, electronic excitation, ionization, and radiation can enhance the capability of DSMC to model flow-fields where a range of physical processes occur. The approach is applied to the modelling of a 6.4 km/s nitrogen shock wave, and results are compared with those from existing shock-tube experiments and continuum calculations. Reasonable agreement between the methods is obtained. The quality of the comparison is highly dependent on the set of vibrational relaxation and chemical kinetic parameters employed.
Modeling the biophysical effects in a carbon beam delivery line by using Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Cho, Ilsung; Yoo, SeungHoon; Cho, Sungho; Kim, Eun Ho; Song, Yongkeun; Shin, Jae-ik; Jung, Won-Gyun
2016-09-01
Relative biological effectiveness (RBE) plays an important role in designing a uniform dose response for ion-beam therapy. In this study, the biological effectiveness of a carbon-ion beam delivery system was investigated using Monte Carlo simulations. A carbon-ion beam delivery line was designed for the Korea Heavy Ion Medical Accelerator (KHIMA) project. The GEANT4 simulation toolkit was used to simulate carbon-ion beam transport into media. Incident carbon-ion beams with energies between 220 MeV/u and 290 MeV/u were chosen to generate secondary particles. The microdosimetric-kinetic (MK) model was applied to describe the RBE at 10% survival in human salivary-gland (HSG) cells. The RBE-weighted dose was estimated as a function of penetration depth in the water phantom along the incident beam's direction. A biologically photon-equivalent spread-out Bragg peak (SOBP) was designed using the RBE-weighted absorbed dose. Finally, the RBE of mixed beams was predicted as a function of depth in the water phantom.
Rapid Monte Carlo Simulation of Gravitational Wave Galaxies
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2015-01-01
With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with far fewer computational resources. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.
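A minimal sketch of such a rapid Monte Carlo galactic realization is shown below. Every distribution, the strain scaling, and the detection threshold are illustrative placeholders, not the calibrated population model or sensitivity curve of the study.

```python
import random

def realize_galaxy(n_binaries, seed=42):
    """Draw one Monte Carlo realization of a compact-binary population.
    All distributions below are illustrative placeholders, not a
    calibrated galactic model."""
    rng = random.Random(seed)
    binaries = []
    for _ in range(n_binaries):
        m1 = rng.uniform(0.2, 1.4)                    # primary mass (M_sun)
        m2 = rng.uniform(0.2, m1)                     # secondary mass (M_sun)
        period_s = 86400.0 * 10.0 ** rng.uniform(-4.0, -1.0)   # log-uniform period
        dist_kpc = rng.uniform(0.1, 15.0)             # distance (kpc)
        f_gw = 2.0 / period_s                         # dominant GW frequency (Hz)
        m_chirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2
        # toy strain scaling h ~ Mc^(5/3) f^(2/3) / d with an arbitrary prefactor
        h = 1e-21 * m_chirp ** (5.0 / 3.0) * f_gw ** (2.0 / 3.0) / dist_kpc
        binaries.append((m_chirp, f_gw, dist_kpc, h))
    return binaries

catalog = realize_galaxy(10000)
observable = [b for b in catalog if b[3] > 1e-23]     # toy detector threshold
```

Because each binary is an independent draw, a full galactic realization costs one pass over the population, which is what makes re-running the realization for many choices of binary parameters cheap compared to population synthesis.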
Highlights of Transient Plume Impingement Model Validation and Applications
NASA Technical Reports Server (NTRS)
Woronowicz, Michael
2011-01-01
This paper describes highlights of an ongoing validation effort conducted to assess the viability of applying a set of analytic point source transient free molecule equations to model behavior ranging from molecular effusion to rocket plumes. The validation effort includes encouraging comparisons to both steady and transient studies involving experimental data and direct simulation Monte Carlo results. Finally, this model is applied to describe features of two exotic transient scenarios involving NASA Goddard Space Flight Center satellite programs.
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.
Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: The MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with a standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
Effects of Uncertainties in Electric Field Boundary Conditions for Ring Current Simulations
NASA Astrophysics Data System (ADS)
Chen, Margaret W.; O'Brien, T. Paul; Lemon, Colby L.; Guild, Timothy B.
2018-01-01
Physics-based simulation results can vary widely depending on the applied boundary conditions. As a first step toward assessing the effect of boundary conditions on ring current simulations, we analyze the uncertainty of cross-polar cap potentials (CPCP) on electric field boundary conditions applied to the Rice Convection Model-Equilibrium (RCM-E). The empirical Weimer model of CPCP is chosen as the reference model and Defense Meteorological Satellite Program CPCP measurements as the reference data. Using temporal correlations from a statistical analysis of the "errors" between the reference model and data, we construct a Monte Carlo CPCP discrete time series model that can be generalized to other model boundary conditions. RCM-E simulations using electric field boundary conditions from the reference model and from 20 randomly generated Monte Carlo discrete time series of CPCP are performed for two large storms. During the 10 August 2000 storm main phase, the proton density at 10
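One way to realize such a Monte Carlo discrete time series with the temporal correlation of the reference-model "errors" is a first-order autoregressive process, as sketched below. The cadence, correlation time, and error amplitude are illustrative placeholders, not the values fitted in the study.

```python
import math
import random

def mc_cpcp_error_series(n, dt_min=5.0, tau_min=120.0, sigma=10.0, seed=7):
    """Generate one Monte Carlo 'error' time series as a discrete AR(1)
    process with e-folding correlation time tau_min, sampled every dt_min
    minutes, with stationary standard deviation sigma (kV).  Added to a
    reference cross-polar-cap potential it yields one randomized boundary
    condition; repeating with different seeds gives an ensemble."""
    rng = random.Random(seed)
    phi = math.exp(-dt_min / tau_min)            # lag-1 autocorrelation
    innovation = sigma * math.sqrt(1.0 - phi * phi)
    x = rng.gauss(0.0, sigma)                    # draw from the stationary law
    series = [x]
    for _ in range(n - 1):
        x = phi * x + innovation * rng.gauss(0.0, 1.0)
        series.append(x)
    return series

errors = mc_cpcp_error_series(5000)
```

Drawing the first sample from the stationary distribution keeps every realization statistically homogeneous, so an ensemble of 20 such series (different seeds) can be added to the reference CPCP to drive the 20 randomized simulations.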
Validation of the Monte Carlo simulator GATE for indium-111 imaging.
Assié, K; Gardin, I; Véra, P; Buvat, I
2005-07-07
Monte Carlo simulations are useful for optimizing and assessing single photon emission computed tomography (SPECT) protocols, especially when aiming at measuring quantitative parameters from SPECT images. Before Monte Carlo simulated data can be trusted, the simulation model must be validated. The purpose of this work was to validate the use of GATE, a new Monte Carlo simulation platform based on GEANT4, for modelling indium-111 SPECT data, the quantification of which is of foremost importance for dosimetric studies. To that end, acquisitions of (111)In line sources in air and in water and of a cylindrical phantom were performed, together with the corresponding simulations. The simulation model included Monte Carlo modelling of the camera collimator and of a back-compartment accounting for photomultiplier tubes and associated electronics. Energy spectra, spatial resolution, sensitivity values, images and count profiles obtained for experimental and simulated data were compared. An excellent agreement was found between experimental and simulated energy spectra. For source-to-collimator distances varying from 0 to 20 cm, simulated and experimental spatial resolution differed by less than 2% in air, while the simulated sensitivity values were within 4% of the experimental values. The simulation of the cylindrical phantom closely reproduced the experimental data. These results suggest that GATE enables accurate simulation of (111)In SPECT acquisitions.
Urbic, Tomaz
2016-01-01
In this paper we applied an analytical theory for the two-dimensional dimerising fluid. We applied Wertheim's thermodynamic perturbation theory (TPT) and integral equation theory (IET) for associative liquids to a dimerising model with an arbitrary position of the dimerising points relative to the centers of the particles. The theory was used to study thermodynamic and structural properties. To check the accuracy of the theories, we compared theoretical results with corresponding results obtained by Monte Carlo computer simulations. The theories are accurate for the different patch positions of the model at all values of the temperature and density studied. IET correctly predicts the pair correlation function of the model. Both TPT and IET are in good agreement with the Monte Carlo values of the energy, pressure, chemical potential, compressibility, and ratios of free and bonded particles. PMID: 28529396
Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, Surendra N.
1994-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow-band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are treated in an exact manner. The spectral correlation between the transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional relationship, which provides only non-correlated results for nongray methods. Validation of the Monte Carlo formulations is conducted by comparing results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel-plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. Nongray Monte Carlo solutions are also obtained for the same problem, and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they differ significantly from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray, multi-dimensional system is possible using the approximate correlated formulations.
Radiative interactions are investigated in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The governing equations are based on the fully elliptic Navier-Stokes equations. Chemical reaction mechanisms were described by a finite rate chemistry model. The correlated Monte Carlo method developed earlier was employed to simulate multi-dimensional radiative heat transfer. Results obtained demonstrate that radiative effects on the flowfield are minimal but radiative effects on the wall heat transfer are significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and nozzle size on the radiative and conductive wall fluxes.
Dynamic Monte Carlo description of thermal desorption processes
NASA Astrophysics Data System (ADS)
Weinketz, Sieghard
1994-07-01
The applicability of the dynamic Monte Carlo method of Fichthorn and Weinberg, in which the time evolution of a system is described in terms of the absolute number of different possible microscopic events and their associated transition rates, is discussed for thermal desorption simulations. It is shown that the definition of the time increment at each successful event leads naturally to the macroscopic differential equation of desorption for simple first- and second-order processes in which the only possible events are desorption and diffusion. This equivalence is demonstrated numerically for a second-order case. Subsequently, the equivalence of this method with the Monte Carlo method of Sales and Zgrablich is shown for more complex desorption processes allowing for lateral interactions between adsorbates. The dynamic Monte Carlo method, however, does not bear their limitation of a rapid surface-diffusion condition, and can thus describe a more complex "kinetics" of surface reactive processes and therefore be applied to a wider class of phenomena, such as surface catalysis.
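The Fichthorn-Weinberg time increment can be demonstrated on the simplest possible case, first-order desorption with no diffusion: with N adsorbates the total rate is R = kN, one molecule leaves per event, and the clock advances by dt = -ln(u)/R with u uniform on (0, 1). The rate constant and coverage below are illustrative.

```python
import math
import random

def kmc_first_order_desorption(n0=2000, k=0.5, seed=3):
    """Dynamic Monte Carlo for first-order desorption only (no diffusion):
    each event removes one adsorbate, and the clock advances by the
    Fichthorn-Weinberg increment dt = -ln(u) / R_total with R_total = k*N.
    Returns the time at which half the layer has desorbed."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > n0 // 2:
        rate_total = k * n
        t += -math.log(1.0 - rng.random()) / rate_total   # 1-u avoids log(0)
        n -= 1
    return t

t_half = kmc_first_order_desorption()
```

The macroscopic law dN/dt = -kN predicts a half-life of ln(2)/k, i.e. about 1.386 time units for k = 0.5, and the stochastic half-time above converges to it as the initial coverage grows, which is exactly the equivalence the abstract describes.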
NASA Astrophysics Data System (ADS)
Rambalakos, Andreas
Current federal aviation regulations in the United States and around the world mandate that aircraft structures meet damage tolerance requirements throughout the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity, which is ensured by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy, comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed-form expressions for the system capacity cumulative distribution function (CDF) are developed by extending the current expression for the capacity CDF of a parallel system comprised of three elements to parallel systems comprised of up to six elements. These newly developed expressions are used to check the accuracy of the implementation of a Monte Carlo simulation algorithm that determines the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme, by utilizing the residual strength of the fasteners subjected to various initial load distributions and then subjected to a new unequal load distribution resulting from subsequent sequential fastener failures. The final and main objective of this thesis is to present a methodology for computing the gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach.
The uncertainties associated with the time to crack initiation, the probability of crack detection, the exponent in the crack propagation rate (Paris equation) and the yield strength of the elements are considered in the analytical model. The structural component is assumed to consist of a prescribed number of elements. This Monte Carlo simulation methodology is used to determine the required non-periodic inspections so that the reliability of the structural component will not fall below a prescribed minimum level. A sensitivity analysis is conducted to determine the effect of three key parameters on the specification of the non-periodic inspection intervals: namely a parameter associated with the time to crack initiation, the applied nominal stress fluctuation and the minimum acceptable reliability level.
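A direct Monte Carlo estimate of the failure probability of a parallel system with active redundancy and equal load sharing, as described above, can be sketched as follows. The uniform(0, 1) strength distribution is an illustrative assumption, not the distribution used in the thesis.

```python
import random

def pf_parallel_system(n_el, load, n_trials=20000, seed=11):
    """Direct Monte Carlo failure probability of a parallel system with
    active redundancy and equal load sharing.  Element strengths are
    i.i.d. uniform(0, 1) -- an illustrative choice; after each element
    failure the total load is shed equally onto the survivors
    (a Daniels-type fibre bundle)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        s = sorted(rng.random() for _ in range(n_el))
        # bundle capacity: with the i weakest elements failed, the
        # survivors hold the load iff (n_el - i) * s[i] >= load
        capacity = max((n_el - i) * s[i] for i in range(n_el))
        failures += capacity < load
    return failures / n_trials

pf_single = pf_parallel_system(1, 0.5)   # one element: this is just P(S < load)
```

The sorted-strength capacity formula gives each trial in O(n log n), so the same loop scales to the arbitrary element counts against which the closed-form three-to-six-element CDF expressions are checked.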
NASA Astrophysics Data System (ADS)
Ródenas, José
2017-11-01
All materials exposed to a neutron flux can become activated, independently of the kind of neutron source. In this study, a nuclear reactor has been considered as the neutron source. In particular, the activation of control rods in a BWR is studied to obtain the doses produced around the storage pool for irradiated fuel when control rods are withdrawn from the reactor and placed into this pool. It is very important to calculate these doses because they can affect plant workers in the area. The MCNP code, based on the Monte Carlo method, has been applied to simulate the activation reactions produced in the control rods inserted into the reactor. The obtained activities are introduced as input into another MC model to estimate the doses they produce. The comparison of simulation results with experimental measurements allows the validation of the developed models. The developed MC models have also been applied to simulate the activation of other materials, such as components of a stainless steel sample introduced into a training reactor. These models, once validated, can be applied to other situations and materials where a neutron flux can be found, not only nuclear reactors: for instance, activation analysis with an Am-Be source, neutrography techniques in both medical applications and non-destructive analysis of materials, civil engineering applications using a Troxler gauge, and analysis of materials in the decommissioning of nuclear power plants.
Visual improvement for bad handwriting based on Monte-Carlo method
NASA Astrophysics Data System (ADS)
Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua
2014-03-01
A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper to enhance the visual effect of bad handwriting. The improvement process uses a well-designed typeface to optimize the bad-handwriting image. In this process, a series of linear operators for image transformation are defined to transform the typeface image so that it approaches the handwriting image, and the specific parameters of the linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual effect of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has considerable potential to be applied in tablet computers and the mobile Internet to improve the user experience of handwriting.
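The Monte Carlo parameter-estimation step can be sketched as a random search: draw candidate transform parameters, score each candidate against the target, and keep the best. The 1-D feature vectors and the simple linear map y = a*x + b below are stand-ins for the paper's image operators, whose exact form is not given here.

```python
import random

def mc_fit_transform(typeface, handwriting, n_trials=20000, seed=21):
    """Monte Carlo estimation of a linear transform y ~ a*x + b mapping a
    typeface feature vector onto a handwriting feature vector: candidate
    (a, b) pairs are drawn at random and the pair minimizing the squared
    error is kept.  The 1-D feature vectors stand in for images."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_trials):
        a = rng.uniform(0.0, 2.0)
        b = rng.uniform(-1.0, 1.0)
        err = sum((a * x + b - y) ** 2 for x, y in zip(typeface, handwriting))
        if err < best_err:
            best, best_err = (a, b), err
    return best

# synthetic check: "handwriting" generated with a = 0.7, b = 0.25
xs = [0.1 * i for i in range(20)]
ys = [0.7 * x + 0.25 for x in xs]
a_hat, b_hat = mc_fit_transform(xs, ys)
```

Random search is the bluntest Monte Carlo estimator; it needs no gradients of the image-difference score, which is convenient when the transformation operators are only available as black boxes.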
TiOx deposited by magnetron sputtering: a joint modelling and experimental study
NASA Astrophysics Data System (ADS)
Tonneau, R.; Moskovkin, P.; Pflug, A.; Lucas, S.
2018-05-01
This paper presents a 3D multiscale simulation approach to model magnetron reactive sputter deposition of TiOx (x ⩽ 2) at various O2 inlets and its validation against experimental results. The simulation first involves the transport of sputtered material in a vacuum chamber by means of a three-dimensional direct simulation Monte Carlo (DSMC) technique. Second, the film growth at different positions on a 3D substrate is simulated using a kinetic Monte Carlo (kMC) method. When simulating the transport of species in the chamber, wall chemistry reactions are taken into account in order to obtain the proper content of the reactive species in the volume. Angular and energy distributions of particles are extracted from the DSMC and used for film growth modelling by kMC. Along with the simulation, experimental deposition of TiOx coatings on silicon samples placed at different positions on a curved sample holder was performed. The experimental results are in agreement with the simulated ones. For a given coater, the plasma-phase hysteresis behaviour, film composition and film morphology are predicted. The methodology can be applied to any coater and any film, paving the way to a virtual coater allowing a user to predict the composition and morphology of films deposited in silico.
Atomistic Simulations of Grain Boundary Pinning in CuFe Alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zepeda-Ruiz, L A; Gilmer, G H; Sadigh, B
2005-05-22
The authors apply a hybrid Monte Carlo-molecular dynamics code to the study of grain boundary motion upon annealing of pure Cu and Cu with low concentrations of Fe. The hybrid simulations account for segregation and precipitation of the low-solubility Fe, together with curvature-driven grain boundary motion. Grain boundaries in two different systems, a Σ7 U-shaped half-loop grain and a nanocrystalline sample, were found to be pinned in the presence of Fe concentrations exceeding 3%.
Monte Carlo simulations of spin transport in a strained nanoscale InGaAs field effect transistor
NASA Astrophysics Data System (ADS)
Thorpe, B.; Kalna, K.; Langbein, F. C.; Schirmer, S.
2017-12-01
Spin-based logic devices could operate at very high speed with very low energy consumption and hold significant promise for quantum information processing and metrology. We develop a spintronic device simulator by combining an in-house developed, experimentally verified, ensemble self-consistent Monte Carlo device simulator with spin transport based on a Bloch equation model and a spin-orbit interaction Hamiltonian accounting for Dresselhaus and Rashba couplings. It is employed to simulate a spin field effect transistor operating under externally applied voltages on a gate and a drain. In particular, we simulate electron spin transport in a 25 nm gate length In0.7Ga0.3As metal-oxide-semiconductor field-effect transistor with a CMOS compatible architecture. We observe a non-uniform decay of the net magnetization between the source and the gate, and a magnetization recovery effect due to spin refocusing induced by the high electric field between the gate and the drain. We demonstrate coherent control of the polarization vector of the drain current via the source-drain and gate voltages, and show that the magnetization of the drain current can be increased twofold by the strain induced in the channel.
Helsel, Dennis R.; Gilliom, Robert J.
1986-01-01
Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.
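A minimal sketch of the log-probability regression (LR) idea on synthetic data: regress the logs of the detected values on normal scores of their plotting positions, impute the censored observations from the fitted line, and average. This single-detection-limit, robust-ROS-style version is an illustration of the technique, not the exact procedure evaluated in the study.

```python
import math
import random
from statistics import NormalDist, mean

def lr_mean(values, limit):
    """Log-probability regression estimate of the mean of a left-censored
    sample (single detection limit).  Detected values are kept as-is;
    censored values are imputed from a least-squares line fitted to
    log(detects) versus normal scores of Weibull plotting positions."""
    n = len(values)
    detects = sorted(v for v in values if v >= limit)
    n_cens = n - len(detects)
    nd = NormalDist()
    scores = [nd.inv_cdf((i + 1) / (n + 1)) for i in range(n)]
    xs = scores[n_cens:]                      # positions of detected values
    ys = [math.log(v) for v in detects]
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    imputed = [math.exp(intercept + slope * x) for x in scores[:n_cens]]
    return mean(imputed + detects)
```

Unlike substitution of zero or the detection limit, the imputed values follow the fitted lognormal tail, which is why this family of estimators showed the lowest rmse in both the simulations and the water quality comparison.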
NASA Astrophysics Data System (ADS)
Ziti, S.; Aouini, S.; Labrim, H.; Bahmad, L.
2017-02-01
We study the magnetic layering transitions in a polyamidoamine (PAMAM) dendrimer nanostructure under the effect of an external magnetic field. We examine the magnetic properties of a spin S = 1 Ising ferromagnetic model on this real nanostructure, which is used in several scientific domains. For T = 0, we give and discuss the ground-state phase diagrams. At nonzero temperatures, we apply Monte Carlo simulations and summarize the important results in the form of phase diagrams. We also analyze the effect of varying the external magnetic field and find layering transitions in the polyamidoamine (PAMAM) dendrimer nanostructure.
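As an illustration of the Monte Carlo machinery involved, here is a minimal Metropolis sampler for a spin-1 (S_i in {-1, 0, +1}) Ising ferromagnet in a uniform field, on a plain square lattice as a generic stand-in for the dendrimer connectivity of the study; the coupling J = 1, lattice size, and temperatures are illustrative.

```python
import math
import random

def metropolis_spin1(L=16, temp=0.5, field=0.0, sweeps=300, seed=2):
    """Metropolis Monte Carlo for a spin-1 Ising ferromagnet
    (E = -J * sum_<ij> S_i S_j - H * sum_i S_i, with J = 1) on an L x L
    periodic square lattice.  Starts from the all-up state and returns
    the magnetization per site after the given number of sweeps."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            s_new = rng.choice([-1, 0, 1])            # propose a new spin value
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            d_e = -(s_new - spins[i][j]) * (nn + field)
            if d_e <= 0.0 or rng.random() < math.exp(-d_e / temp):
                spins[i][j] = s_new
    return sum(map(sum, spins)) / (L * L)
```

Sweeping the field and temperature over such runs and recording where the layer-resolved magnetization changes sign is the basic operation behind the phase diagrams reported in the abstract.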
Efficient approach to the free energy of crystals via Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Navascués, G.; Velasco, E.
2015-08-01
We present a general approach to compute the absolute free energy of a system of particles with constrained center of mass based on the Monte Carlo thermodynamic coupling integral method. The version of the Frenkel-Ladd approach [J. Chem. Phys. 81, 3188 (1984)], 10.1063/1.448024, which uses a harmonic coupling potential, is recovered. Also, we propose a different choice, based on one-particle square-well coupling potentials, which is much simpler, more accurate, and free from some of the difficulties of the Frenkel-Ladd method. We apply our approach to hard spheres and compare with the standard harmonic method.
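The thermodynamic coupling integral at the heart of such calculations can be checked on a single particle, where the answer is analytic. The sketch below switches one harmonic well into a stiffer one at beta = 1, estimating the average of dU/dlambda by Metropolis sampling at each coupling value and integrating over lambda; it illustrates the coupling-integral machinery, not the Frenkel-Ladd or square-well scheme itself, and all parameters are illustrative.

```python
import math
import random

def ti_harmonic(k0=1.0, k1=4.0, n_lambda=11, n_samples=4000, seed=9):
    """Monte Carlo thermodynamic coupling integral for the free energy
    difference between two 1-D harmonic wells U_k(x) = k x^2 / 2 at
    beta = 1, via U_lambda = ((1-lambda) k0 + lambda k1) x^2 / 2.
    Exact answer: 0.5 * ln(k1 / k0)."""
    rng = random.Random(seed)
    means = []
    for m in range(n_lambda):
        lam = m / (n_lambda - 1)
        k = (1.0 - lam) * k0 + lam * k1
        x, samples = 0.0, []
        for step in range(n_samples):
            x_new = x + rng.uniform(-1.0, 1.0)
            d_u = 0.5 * k * (x_new ** 2 - x ** 2)
            if d_u <= 0.0 or rng.random() < math.exp(-d_u):
                x = x_new
            if step >= 500:                       # discard equilibration
                samples.append(0.5 * (k1 - k0) * x * x)   # dU/dlambda
        means.append(sum(samples) / len(samples))
    h = 1.0 / (n_lambda - 1)                      # trapezoidal rule in lambda
    return h * (0.5 * means[0] + sum(means[1:-1]) + 0.5 * means[-1])

delta_f = ti_harmonic()
```

The analytic result is 0.5 ln 4, approximately 0.693; the smoothness of the integrand in lambda is exactly what a well-chosen coupling potential (harmonic or square-well) is meant to guarantee.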
Monte Carlo simulation of a noisy quantum channel with memory.
Akhalwaya, Ismail; Moodley, Mervlyn; Petruccione, Francesco
2015-10-01
The classical capacity of quantum channels is well understood for channels with uncorrelated noise. For the case of correlated noise, however, there are still open questions. We calculate the classical capacity of a forgetful channel constructed by Markov switching between two depolarizing channels. Techniques have previously been applied to approximate the output entropy of this channel and thus its capacity. In this paper, we use a Metropolis-Hastings Monte Carlo approach to numerically calculate the entropy. The algorithm is implemented in parallel and its performance is studied and optimized. The effects of memory on the capacity are explored and previous results are confirmed to higher precision.
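The Metropolis-Hastings machinery used above can be demonstrated on a toy target where the answer is exact. The sketch below estimates an expectation under a standard normal density; the paper applies the same sampler to the output distribution of the correlated depolarizing channel, which is not reproduced here.

```python
import math
import random

def mh_expectation(log_target, width, n_steps, burn_in, seed=13):
    """Metropolis-Hastings estimate of E[x^2] under an unnormalized
    density given by log_target, using a symmetric uniform random-walk
    proposal of half-width `width`."""
    rng = random.Random(seed)
    x = 0.0
    lp = log_target(x)
    total, count = 0.0, 0
    for step in range(n_steps):
        x_new = x + rng.uniform(-width, width)
        lp_new = log_target(x_new)
        # symmetric proposal: accept with min(1, target_new / target)
        if lp_new >= lp or rng.random() < math.exp(lp_new - lp):
            x, lp = x_new, lp_new
        if step >= burn_in:
            total += x * x
            count += 1
    return total / count

# E[x^2] under N(0, 1) is exactly 1
second_moment = mh_expectation(lambda x: -0.5 * x * x, 1.0, 50000, 1000)
```

Because only log-density differences enter the acceptance test, the target never needs normalizing, which is what makes the method usable for channel-output distributions whose normalization (and hence entropy) is the very quantity being estimated.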
ERIC Educational Resources Information Center
Leite, Walter L.; Zuo, Youzhen
2011-01-01
Among the many methods currently available for estimating latent variable interactions, the unconstrained approach is attractive to applied researchers because of its relatively easy implementation with any structural equation modeling (SEM) software. Using a Monte Carlo simulation study, we extended and evaluated the unconstrained approach to…
A Procedure for Testing the Difference between Effect Sizes.
ERIC Educational Resources Information Center
Lambert, Richard G.; Flowers, Claudia
A special case of the homogeneity of effect size test, as applied to pairwise comparisons of standardized mean differences, was evaluated. Procedures for comparing pairs of pretest to posttest effect sizes, as well as pairs of treatment versus control group effect sizes, were examined. Monte Carlo simulation was used to generate Type I error rates…
Three-Level Analysis of Single-Case Experimental Data: Empirical Validation
ERIC Educational Resources Information Center
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim
2014-01-01
One approach for combining single-case data involves use of multilevel modeling. In this article, the authors use a Monte Carlo simulation study to inform applied researchers under which realistic conditions the three-level model is appropriate. The authors vary the value of the immediate treatment effect and the treatment's effect on the time…
Monte Carlo simulations of the dose from imaging with GE eXplore 120 micro-CT using GATE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bretin, Florian; Bahri, Mohamed Ali; Luxen, André
Purpose: Small animals are increasingly used as translational models in preclinical imaging studies involving microCT, during which the subjects can be exposed to large amounts of radiation. While the radiation levels are generally sublethal, studies have shown that low-level radiation can change physiological parameters in mice. In order to rule out any influence of radiation on the outcome of such experiments, or resulting deterministic effects in the subjects, the levels of radiation involved need to be addressed. The aim of this study was to investigate the radiation dose delivered by the GE eXplore 120 microCT non-invasively using Monte Carlo simulations in GATE and to compare results to previously obtained experimental values. Methods: Tungsten X-ray spectra were simulated at 70, 80, and 97 kVp using an analytical tool and their half-value layers were simulated for spectra validation against experimentally measured values of the physical X-ray tube. A Monte Carlo model of the microCT system was set up and four protocols that are regularly applied to live animal scanning were implemented. The computed tomography dose index (CTDI) inside a PMMA phantom was derived and multiple field of view acquisitions were simulated using the PMMA phantom, a representative mouse and rat. Results: Simulated half-value layers agreed with experimentally obtained results within a 7% error window. The CTDI ranged from 20 to 56 mGy and closely matched experimental values. Derived organ doses in mice reached 459 mGy in bones and up to 200 mGy in soft tissue organs using the highest energy protocol. Dose levels in rats were lower due to the increased mass of the animal compared to mice. The uncertainty of all dose simulations was below 14%. Conclusions: Monte Carlo simulations proved a valuable tool to investigate the 3D dose distribution in animals from microCT. Small animals, especially mice (due to their small volume), receive large amounts of radiation from the GE eXplore 120 microCT, which might alter physiological parameters in a longitudinal study setup.
Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented in any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
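One common way to seed such a simulation is to sample photon launch positions from the analytic Gaussian beam-radius law; a minimal sketch (the function name and parameter values are illustrative, not taken from the paper):

```python
import math
import random

def sample_gaussian_beam(w0, z, wavelength, rng=random):
    """Sample a transverse photon position (x, y) at axial distance z from the
    waist of a Gaussian beam with 1/e^2 waist radius w0, using
    w(z) = w0 * sqrt(1 + (z / zR)^2) with Rayleigh range zR = pi * w0^2 / lambda."""
    z_r = math.pi * w0 ** 2 / wavelength
    w = w0 * math.sqrt(1.0 + (z / z_r) ** 2)
    # intensity ~ exp(-2 r^2 / w^2): each coordinate is Gaussian with sd = w / 2
    return rng.gauss(0.0, w / 2.0), rng.gauss(0.0, w / 2.0)
```

Launching photons from positions drawn this way, with directions aimed through a waist position sampled the same way at z = 0, reproduces a diffraction-limited focal spot instead of a geometric point focus.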
Adaptive Stress Testing of Airborne Collision Avoidance Systems
NASA Technical Reports Server (NTRS)
Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Brat, Guillaume P.; Owen, Michael P.
2015-01-01
This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.
Integrated layout based Monte-Carlo simulation for design arc optimization
NASA Astrophysics Data System (ADS)
Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James
2016-03-01
Design rules are created by considering a wafer fail mechanism with the relevant design levels under various design cases, and the values are set to cover the worst-case scenario. Because of this simplification and generalization, design rules hinder, rather than help, dense device scaling. As an example, SRAM designs always need extensive ground rule waivers. Furthermore, dense design also often involves a "design arc", a collection of design rules the sum of which equals the critical pitch defined by the technology. In a design arc, a single rule change can lead to a chain reaction of other rule violations. In this talk we present a methodology using Layout Based Monte-Carlo Simulation (LBMCS) with integrated multiple ground rule checks. We apply this methodology to an SRAM word line contact, and the result is a layout that has balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div, Semiconductor Research and Development Center, Hopewell Junction, NY 12533
Theoretical Studies of Liquid He-4 Near the Superfluid Transition
NASA Technical Reports Server (NTRS)
Manousakis, Efstratios
2002-01-01
We performed theoretical studies of liquid helium by applying state-of-the-art simulation and finite-size scaling techniques. We calculated universal scaling functions for the specific heat and superfluid density for various confining geometries relevant for experiments such as the confined helium experiment and other ground-based studies. We also studied microscopically how the substrate imposes a boundary condition on the superfluid order parameter as the superfluid film grows layer by layer. Using path-integral Monte Carlo, a quantum Monte Carlo simulation method, we investigated the rich phase diagram of helium monolayers, bilayers and multilayers on a substrate such as graphite. We find excellent agreement with the experimental results using no free parameters. Finally, we carried out preliminary calculations of transport coefficients, such as the thermal conductivity, for bulk and confined helium systems and of their scaling properties. All our studies provide theoretical support for various experimental studies in microgravity.
Towards the reliable calculation of residence time for off-lattice kinetic Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Alexander, Kathleen C.; Schuh, Christopher A.
2016-08-01
Kinetic Monte Carlo (KMC) methods have the potential to extend the accessible timescales of off-lattice atomistic simulations beyond the limits of molecular dynamics by making use of transition state theory and parallelization. However, it is a challenge to identify a complete catalog of events accessible to an off-lattice system in order to accurately calculate the residence time for KMC. Here we describe possible approaches to some of the key steps needed to address this problem. These include methods to compare and distinguish individual kinetic events, to deterministically search an energy landscape, and to define local atomic environments. When applied to the ground state Σ5(2 1 0) grain boundary in copper, these methods achieve a converged residence time, accounting for the full set of kinetically relevant events for this off-lattice system, with calculable uncertainty.
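In standard rejection-free KMC, once the event catalog is complete, the residence time follows directly from the total escape rate; a generic sketch (not the authors' off-lattice implementation):

```python
import math
import random

def kmc_step(rates, rng=random):
    """One rejection-free KMC step over a complete event catalog.
    Selects event k with probability rates[k] / sum(rates) and draws the
    residence time from an exponential with mean 1 / sum(rates)."""
    total = sum(rates)
    threshold = rng.random() * total
    accumulated = 0.0
    for k, rate in enumerate(rates):
        accumulated += rate
        if threshold < accumulated:
            break
    dt = -math.log(rng.random()) / total  # exponential waiting time
    return k, dt
```

An incomplete catalog overstates the mean residence time 1/Σ k_i, which is exactly the failure mode that a converged event search is meant to rule out.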
NASA Technical Reports Server (NTRS)
Liechty, Derek S.; Burt, Jonathan M.
2016-01-01
There are many flow fields that span a wide range of length scales, where regions of both rarefied and continuum flow exist and neither direct simulation Monte Carlo (DSMC) nor computational fluid dynamics (CFD) provides the appropriate solution everywhere. Recently, a new viscous collision limited (VCL) DSMC technique was proposed to incorporate effects of physical diffusion into collision limiter calculations to make the low Knudsen number regime, normally limited to CFD, more tractable for an all-particle technique. This original work had been derived for a single species gas. The current work extends the VCL-DSMC technique to gases with multiple species. Similar derivations were performed to equate numerical and physical transport coefficients. However, a more rigorous treatment of determining the mixture viscosity is applied. In the original work, consideration was given to internal energy non-equilibrium, and this is also extended in the current work to chemical non-equilibrium.
Theory, simulation and experiments for precise deflection control of radiotherapy electron beams.
Figueroa, R; Leiva, J; Moncada, R; Rojas, L; Santibáñez, M; Valente, M; Velásquez, J; Young, H; Zelada, G; Yáñez, R; Guillen, Y
2018-03-08
Conventional radiotherapy is mainly delivered by linear accelerators. Although linear accelerators provide dual (electron/photon) radiation beam modalities, both are intrinsically produced by a megavoltage electron current. Modern radiotherapy treatment techniques are based on suitable devices inserted or attached to conventional linear accelerators. Thus, precise control of the delivered beam becomes a key issue. This work presents an integral description of electron beam deflection control as required for a novel radiotherapy technique based on convergent photon beam production. Theoretical and Monte Carlo approaches were initially used for designing and optimizing the device's components. Then, dedicated instrumentation was developed for experimental verification of the electron beam deflection due to the designed magnets. Both Monte Carlo simulations and experimental results support the reliability of the electrodynamics models used to predict megavoltage electron beam control. Copyright © 2018 Elsevier Ltd. All rights reserved.
Thermodynamic properties of water in confined environments: a Monte Carlo study
NASA Astrophysics Data System (ADS)
Gladovic, Martin; Bren, Urban; Urbic, Tomaž
2018-05-01
Monte Carlo simulations of Mercedes-Benz water in a crowded environment were performed. The simulated systems are representative of both composite, porous or sintered materials and living cells with typical matrix packings. We studied the influence of overall temperature as well as the density and size of matrix particles on water density, particle distributions, hydrogen bond formation and thermodynamic quantities. Interestingly, temperature and space occupancy of matrix exhibit a similar effect on water properties following the competition between the kinetic and the potential energy of the system, whereby temperature increases the kinetic and matrix packing decreases the potential contribution. A novel thermodynamic decomposition approach was applied to gain insight into individual contributions of different types of inter-particle interactions. This decomposition proved to be useful and in good agreement with the total thermodynamic quantities especially at higher temperatures and matrix packings, where higher-order potential-energy mixing terms lose their importance.
Molecular gas dynamics applied to low-thrust propulsion
NASA Astrophysics Data System (ADS)
Zelesnik, Donna; Penko, Paul F.; Boyd, Iain D.
1993-11-01
The Direct Simulation Monte Carlo method is currently being applied to study flowfields of small thrusters, including both the internal nozzle and the external plume flow. The DSMC method is employed because of its inherent ability to capture nonequilibrium effects and proper boundary physics in low-density flow that are not readily obtained by continuum methods. Accurate prediction of both the internal and external nozzle flow is important in determining plume expansion which, in turn, bears directly on impingement and contamination effects.
Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peplow, Douglas E.; Miller, Thomas Martin; Patton, Bruce W
2013-01-01
The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.
Absolute dose calculations for Monte Carlo simulations of radiotherapy beams
NASA Astrophysics Data System (ADS)
Popescu, I. A.; Shaw, C. P.; Zavgorodni, S. F.; Beckham, W. A.
2005-07-01
Monte Carlo (MC) simulations have traditionally been used for single field relative comparisons with experimental data or commercial treatment planning systems (TPS). However, clinical treatment plans commonly involve more than one field. Since the contribution of each field must be accurately quantified, multiple field MC simulations are only possible by employing absolute dosimetry. Therefore, we have developed a rigorous calibration method that allows the incorporation of monitor units (MU) in MC simulations. This absolute dosimetry formalism can be easily implemented by any BEAMnrc/DOSXYZnrc user, and applies to any configuration of open and blocked fields, including intensity-modulated radiation therapy (IMRT) plans. Our approach involves the relationship between the dose scored in the monitor ionization chamber of a radiotherapy linear accelerator (linac), the number of initial particles incident on the target, and the field size. We found that for a 10 × 10 cm2 field of a 6 MV photon beam, 1 MU corresponds, in our model, to 8.129 × 1013 ± 1.0% electrons incident on the target and a total dose of 20.87 cGy ± 1.0% in the monitor chambers of the virtual linac. We present an extensive experimental verification of our MC results for open and intensity-modulated fields, including a dynamic 7-field IMRT plan simulated on the CT data sets of a cylindrical phantom and of a Rando anthropomorphic phantom, which were validated by measurements using ionization chambers and thermoluminescent dosimeters (TLD). Our simulation results are in excellent agreement with experiment, with percentage differences of less than 2%, in general, demonstrating the accuracy of our Monte Carlo absolute dose calculations.
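With a particles-per-MU calibration of this kind, scaling per-particle MC doses to absolute plan doses reduces to multiplication and a sum over fields; a hedged sketch reusing the abstract's reference calibration value (function names and the example field values are hypothetical, and the field-size dependence of the calibration is omitted):

```python
# calibration quoted in the abstract for the reference 10 x 10 cm^2, 6 MV field
N_PER_MU = 8.129e13  # electrons incident on the target per monitor unit

def absolute_dose_gy(dose_per_particle_gy, mu, n_per_mu=N_PER_MU):
    """Scale a Monte Carlo dose scored per source particle to absolute dose
    for `mu` delivered monitor units."""
    return dose_per_particle_gy * n_per_mu * mu

def plan_dose_gy(fields, n_per_mu=N_PER_MU):
    """Total absolute dose at a voxel for a multi-field plan, where `fields`
    is a list of (dose_per_particle_gy, mu) pairs (values hypothetical)."""
    return sum(absolute_dose_gy(d, mu, n_per_mu) for d, mu in fields)
```

This is what makes multi-field and IMRT comparisons possible: each field's per-particle dose is put on a common absolute scale before summation.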
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu
2014-05-12
The conventional semi-infinite solution for extracting blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migrations in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo mouse model of stroke. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noises to DCS data resulted in αD_B variations, the mean errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows the potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media
NASA Astrophysics Data System (ADS)
Fichtner, A.; Simutė, S.
2017-12-01
We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small amount of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a data base of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain data base can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude less samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior). We apply our method to the Japanese Islands region where we previously constrained 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
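The gradient-guided sampling step at the heart of HMC can be sketched for a one-dimensional target (a generic leapfrog implementation on a toy distribution, not the authors' variant):

```python
import math
import random

def hmc(log_p, grad_log_p, x0, n_samples, eps=0.1, n_leap=20, seed=5):
    """Hamiltonian Monte Carlo for a 1D target: leapfrog integration of the
    Hamiltonian H(x, p) = -log_p(x) + p^2 / 2, followed by a Metropolis
    accept/reject step that corrects the integration error."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)
        x_new, p_new = x, p
        p_new += 0.5 * eps * grad_log_p(x_new)        # half kick
        for step in range(n_leap):
            x_new += eps * p_new                      # drift
            if step < n_leap - 1:
                p_new += eps * grad_log_p(x_new)      # full kick
        p_new += 0.5 * eps * grad_log_p(x_new)        # final half kick
        d_h = (log_p(x_new) - 0.5 * p_new ** 2) - (log_p(x) - 0.5 * p ** 2)
        if math.log(rng.random()) < d_h:
            x = x_new
        samples.append(x)
    return samples

# toy target: standard normal
draws = hmc(lambda x: -0.5 * x * x, lambda x: -x, 0.0, 5000)
```

Because each proposal follows the gradient over many leapfrog steps, far fewer samples are needed than with derivative-free random-walk sampling, which is the efficiency argument made above.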
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piao, J; PLA 302 Hospital, Beijing; Xu, S
2016-06-15
Purpose: This study used Monte Carlo methods to simulate the CyberKnife system, with the aim of developing a third-party tool to evaluate the dose verification of specific patient plans in the TPS. Methods: By simulating the treatment head using the BEAMnrc and DOSXYZnrc software, calculated and measured data were compared to determine the beam parameters. The dose distributions calculated by the Ray-tracing and Monte Carlo algorithms of the TPS (Multiplan Ver4.0.2) and by the in-house Monte Carlo simulation method were analyzed for 30 patient plans (10 head, 10 lung and 10 liver cases). γ analysis with the combined 3mm/3% criteria was introduced to quantitatively evaluate the differences in accuracy between the three algorithms. Results: More than 90% of the global error points were less than 2% in the comparison of the PDD and OAR curves after determining the mean energy and FWHM, so a reasonably accurate Monte Carlo beam model was established. Based on the quantitative evaluation of dose accuracy for the three algorithms, the γ analysis shows that the PTV passing rates in the 30 plans between the Monte Carlo simulation and the TPS Monte Carlo algorithm were good (84.88±9.67% for head, 98.83±1.05% for liver, 98.26±1.87% for lung). The PTV passing rates in head and liver plans between the Monte Carlo simulation and the TPS Ray-tracing algorithm were also good (95.93±3.12% and 99.84±0.33%, respectively). However, the difference in DVHs for lung plans between the Monte Carlo simulation and the Ray-tracing algorithm was obvious, and the γ passing rate (51.263±38.964%) was poor. It is therefore feasible to use Monte Carlo simulation for verifying the dose distribution of patient plans. Conclusion: The Monte Carlo simulation algorithm developed for the CyberKnife system in this study can serve as a reference third-party tool, which plays an important role in the dose verification of patient plans.
This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11275105). Thanks for the support from Accuray Corp.
2013-07-01
also simulated in the models. Data were derived from calculations using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle), a general-purpose code designed to simulate neutron, photon, and electron transport.
Bieda, Bogusław
2014-05-15
The purpose of the paper is to present the results of application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) data of Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the software CrystalBall® (CB), which is associated with Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP was analyzed and used for MC simulation of the LCI model. In order to describe random nature of all main products used in this study, normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in the steel production management. Copyright © 2013 Elsevier B.V. All rights reserved.
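The stochastic LCI workflow described here (normally distributed inputs, ~10,000 trials, summary statistics over the outputs) can be sketched in a few lines; the input names and emission factors below are hypothetical illustrations, not MSP data:

```python
import random
import statistics

def mc_uncertainty(model, input_dists, trials=10000, seed=6):
    """Propagate input uncertainty through an LCI model by Monte Carlo.
    `input_dists` maps input name -> (mean, sd) of a normal distribution;
    `model` maps a dict of sampled inputs to one scalar inventory result."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(trials):
        sample = {name: rng.gauss(mu, sd) for name, (mu, sd) in input_dists.items()}
        outputs.append(model(sample))
    return statistics.mean(outputs), statistics.stdev(outputs)

# hypothetical inventory: emissions = 1.8 * steel + 0.4 * coke (invented factors)
mean, sd = mc_uncertainty(
    lambda s: 1.8 * s["steel"] + 0.4 * s["coke"],
    {"steel": (100.0, 5.0), "coke": (50.0, 2.0)},
)
```

The returned mean and standard deviation play the role of the frequency charts and statistical reports produced by the spreadsheet tool in the study.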
Roze, S; Liens, D; Palmer, A; Berger, W; Tucker, D; Renaudin, C
2006-12-01
The aim of this study was to describe a health economic model developed to project lifetime clinical and cost outcomes of lipid-modifying interventions in patients not reaching target lipid levels and to assess the validity of the model. The internet-based, computer simulation model is made up of two decision analytic sub-models, the first utilizing Monte Carlo simulation, and the second applying Markov modeling techniques. Monte Carlo simulation generates a baseline cohort for long-term simulation by assigning an individual lipid profile to each patient, and applying the treatment effects of interventions under investigation. The Markov model then estimates the long-term clinical (coronary heart disease events, life expectancy, and quality-adjusted life expectancy) and cost outcomes up to a lifetime horizon, based on risk equations from the Framingham study. Internal and external validation analyses were performed. The results of the model validation analyses, plotted against corresponding real-life values from Framingham, 4S, AFCAPS/TexCAPS, and a meta-analysis by Gordon et al., showed that the majority of values were close to the y = x line, which indicates a perfect fit. The R2 value was 0.9575 and the gradient of the regression line was 0.9329, both very close to the perfect fit (= 1). Validation analyses of the computer simulation model suggest the model is able to recreate the outcomes from published clinical studies and would be a valuable tool for the evaluation of new and existing therapy options for patients with persistent dyslipidemia.
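The second-stage Markov machinery amounts to repeated application of a transition matrix to a cohort occupancy vector; a minimal sketch with invented states and probabilities (not the model's Framingham-derived risks):

```python
def markov_project(p, occupancy, cycles):
    """Project a cohort through a Markov model for `cycles` cycles.
    `p[i][j]` is the per-cycle transition probability from state i to j."""
    occ = list(occupancy)
    n = len(occ)
    for _ in range(cycles):
        occ = [sum(occ[i] * p[i][j] for i in range(n)) for j in range(n)]
    return occ

# hypothetical states: 0 = no CHD, 1 = CHD, 2 = dead (absorbing)
transition = [
    [0.94, 0.05, 0.01],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],
]
cohort = markov_project(transition, [1.0, 0.0, 0.0], 10)
```

In a full model, per-state costs and utilities would be accumulated each cycle (with discounting) to yield the lifetime cost and quality-adjusted life expectancy outcomes.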
Smart darting diffusion Monte Carlo: Applications to lithium ion-Stockmayer clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, H. M.; Jake, L. C.; Curotto, E., E-mail: curotto@arcadia.edu
2016-05-07
In a recent investigation [K. Roberts et al., J. Chem. Phys. 136, 074104 (2012)], we have shown that, for a sufficiently complex potential, the Diffusion Monte Carlo (DMC) random walk can become quasiergodic, and we have introduced smart darting-like moves to improve the sampling. In this article, we systematically characterize the bias that smart darting moves introduce in the estimate of the ground state energy of a bosonic system. We then test a simple approach to eliminate completely such bias from the results. The approach is applied for the determination of the ground state of lithium ion-n-dipoles clusters in the n = 8–20 range. For these, the smart darting diffusion Monte Carlo simulations find the same ground state energy and mixed-distribution as the traditional approach for n < 14. In larger systems we find that while the ground state energies agree quantitatively with or without smart darting moves, the mixed-distributions can be significantly different. Some evidence is offered to conclude that introducing smart darting-like moves in traditional DMC simulations may produce a more reliable ground state mixed-distribution.
Peter, Emanuel K; Pivkin, Igor V; Shea, Joan-Emma
2015-04-14
In Monte-Carlo simulations of protein folding, pathways and folding times depend on the appropriate choice of the Monte-Carlo move or process path. We developed a generalized set of process paths for a hybrid kinetic Monte Carlo-Molecular dynamics algorithm, which makes use of a novel constant time-update and allows formation of α-helical and β-stranded secondary structures. We apply our new algorithm to the folding of three different proteins: TrpCage, GB1, and TrpZip4. All three systems are seen to fold within the range of the experimental folding times. For the β-hairpins, we observe that loop formation is the rate-determining process, followed by collapse and formation of the native core. Cluster analysis of both peptides reveals that GB1 folds with equal likelihood along a zipper or a hydrophobic collapse mechanism, while TrpZip4 follows primarily a zipper pathway. The difference observed in the folding behavior of the two proteins can be attributed to the different arrangements of their hydrophobic cores: strongly packed and dry in the case of TrpZip4, and partially hydrated in the case of GB1.
Monte Carlo simulation of random, porous (foam) structures for neutron detection
NASA Astrophysics Data System (ADS)
Reichenberger, Michael A.; Fronk, Ryan G.; Shultis, J. Kenneth; Roberts, Jeremy A.; Edwards, Nathaniel S.; Stevenson, Sarah R.; Tiner, Christopher N.; McGregor, Douglas S.
2017-01-01
Porous media incorporating highly neutron-sensitive materials are of interest for use in the development of neutron detectors. Previous studies have shown experimentally the feasibility of 6LiF-saturated, multi-layered detectors; however, the random geometry of porous materials has limited the effectiveness of simulation efforts. The results of scatterless neutron transport and subsequent charged reaction product ion energy deposition are reported here using a novel Monte Carlo method and compared to results obtained by MCNP6. This new Dynamic Path Generation (DPG) Monte Carlo method was developed in order to overcome the complexities of modeling a random porous geometry in MCNP6. The DPG method is then applied to determine the optimal coating thickness for 10B4C-coated reticulated vitreous-carbon (RVC) foams. The optimal coating thickness for 4.1275 cm-thick 10B4C-coated reticulated vitreous carbon foams with porosities of 5, 10, 20, 30, 45, and 80 pores per inch (PPI) were determined for ionizing gas pressures of 1.0 and 2.8 atm. A simulated, maximum, intrinsic thermal-neutron detection efficiency of 62.8±0.25% was predicted for an 80 PPI RVC foam with a 0.2 μm thick coating of 10B4C, for a lower level discriminator setting of 75 keV and an argon pressure of 2.8 atm.
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai; ...
2017-11-07
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple-rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.
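The rank-1 Sherman-Morrison update and the delayed rank-K (Woodbury-type) block update described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the linear algebra only, not the authors' implementation; the matrix sizes and row-replacement moves are hypothetical.

```python
import numpy as np

def det_ratio(Ainv, old_row, new_row, e):
    """Sherman-Morrison: det(A')/det(A) after replacing row e of A."""
    return 1.0 + (new_row - old_row) @ Ainv[:, e]

def rank1_update(Ainv, old_row, new_row, e):
    """O(n^2) Sherman-Morrison update of A^{-1} after one accepted move."""
    v = new_row - old_row
    return Ainv - np.outer(Ainv[:, e], v @ Ainv) / (1.0 + v @ Ainv[:, e])

def delayed_update(Ainv, old_rows, new_rows, idx):
    """Apply K accepted row replacements en bloc via the Woodbury identity:
    one small K x K solve plus matrix-matrix (BLAS-3) products, instead of
    K rank-1 matrix-vector (BLAS-2) updates."""
    K, n = len(idx), Ainv.shape[0]
    U = np.zeros((n, K))
    U[idx, np.arange(K)] = 1.0                        # unit vectors of replaced rows
    V = np.asarray(new_rows) - np.asarray(old_rows)   # K x n row differences
    S = np.eye(K) + V @ Ainv @ U                      # small capacitance matrix
    return Ainv - Ainv @ U @ np.linalg.solve(S, V @ Ainv)
```

The block form is where the reported speedups come from: K BLAS-2 rank-1 updates are replaced by one K x K solve plus BLAS-3 products with much higher arithmetic intensity.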
Data-Division-Specific Robustness and Power of Randomization Tests for ABAB Designs
ERIC Educational Resources Information Center
Manolov, Rumen; Solanas, Antonio; Bulte, Isis; Onghena, Patrick
2010-01-01
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. To obtain information about each possible data division, the authors carried out a conditional Monte Carlo simulation with 100,000 samples for each…
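The data-division idea above can be illustrated with a minimal single-case sketch (an AB design rather than the paper's full ABAB design; the data, the minimum phase length, and the mean-difference statistic are hypothetical choices for illustration, not the authors' exact procedure):

```python
import numpy as np

def ab_randomization_test(y, change, min_phase=3, n_mc=10_000, seed=0):
    """Monte Carlo randomization test for a single-case AB design.
    The statistic is |mean(B) - mean(A)|; the reference distribution is
    built by sampling admissible phase-change points with replacement."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)

    def stat(c):                      # |phase-B mean - phase-A mean|
        return abs(y[c:].mean() - y[:c].mean())

    observed = stat(change)
    admissible = np.arange(min_phase, len(y) - min_phase + 1)
    null = np.array([stat(c) for c in rng.choice(admissible, size=n_mc)])
    p = (1 + np.sum(null >= observed)) / (n_mc + 1)
    return observed, p
```

Note the p-value can never be smaller than 1/(number of admissible divisions), which is why the statistical power of such tests depends on how the data divisions are defined.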
Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models
ERIC Educational Resources Information Center
Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George
2012-01-01
Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…
Lattice QCD in rotating frames.
Yamamoto, Arata; Hirono, Yuji
2013-08-23
We formulate lattice QCD in rotating frames to study the physics of QCD matter under rotation. We construct the lattice QCD action with the rotational metric and apply it to the Monte Carlo simulation. As the first application, we calculate the angular momenta of gluons and quarks in the rotating QCD vacuum. This new framework is useful to analyze various rotation-related phenomena in QCD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertholon, François; Harant, Olivier; Bourlon, Bertrand
This article introduces a joint Bayesian estimation of gas samples eluting from a gas chromatography (GC) column coupled with a NEMS sensor, based on the Giddings-Eyring microscopic molecular stochastic model. The posterior distribution is sampled using Markov chain Monte Carlo with Gibbs sampling, and parameters are estimated by the posterior mean. This estimation scheme is finally applied to simulated and real datasets using the molecular stochastic forward model.
Study of the effect of short ranged ordering on the magnetism in FeCr alloys
NASA Astrophysics Data System (ADS)
Jena, Ambika Prasad; Sanyal, Biplab; Mookerjee, Abhijit
2014-01-01
For the study of magnetism in systems where the local environment plays an important role, we propose a marriage between Monte Carlo simulation and Zunger's special quasi-random structures. We apply this technique to disordered FeCr alloys and show that our estimates of the transition temperature are in good agreement with earlier experiments.
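As a generic illustration of the kind of Metropolis Monte Carlo used to locate a magnetic transition temperature, here is a minimal 2D Ising sketch. This is a toy stand-in, not the FeCr Hamiltonian or the special quasi-random structure setup of the paper; lattice size, temperatures, and sweep counts are arbitrary choices.

```python
import numpy as np

def metropolis_ising(L=16, T=1.5, n_sweeps=200, seed=0):
    """Metropolis Monte Carlo for a 2D Ising model (J = 1, periodic box).
    Returns the mean |magnetization| per site over the second half of the run."""
    rng = np.random.default_rng(seed)
    s = np.ones((L, L), dtype=int)   # ordered start avoids long-lived domain walls
    mags = []
    for sweep in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            nn = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2 * s[i, j] * nn            # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1                # accept the flip
        if sweep >= n_sweeps // 2:           # discard burn-in
            mags.append(abs(s.mean()))
    return float(np.mean(mags))
```

Scanning T and watching the magnetization drop locates the transition (near T ≈ 2.27 for this model); an alloy study would replace the spin flips with environment-dependent exchange couplings.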
The difference between LSMC and replicating portfolio in insurance liability modeling.
Pelsser, Antoon; Schweizer, Janina
2016-01-01
Solvency II requires insurers to calculate the 1-year value at risk of their balance sheet, which involves the valuation of the balance sheet in 1 year's time. As closed-form solutions for the value of insurance liabilities are generally not available, insurers turn to estimation procedures. While pure Monte Carlo simulation set-ups are theoretically sound, they are often infeasible in practice, so approximation methods are exploited. Among these, least squares Monte Carlo (LSMC) and portfolio replication are prominent and widely applied in practice. In this paper, we show that, while both are variants of regression-based Monte Carlo methods, they differ in one significant aspect: the replicating portfolio approach contains only an approximation error, which converges to zero in the limit, whereas LSMC additionally carries a projection error, which cannot be eliminated. It is revealed that the replicating portfolio technique enjoys numerous advantages and is therefore an attractive model choice.
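The LSMC idea, regressing noisy inner-simulation payoffs on a basis in the outer risk factors, can be sketched as follows. The toy liability with true conditional value V(S1) = S1^2 and the cubic polynomial basis are assumptions for illustration; if the basis could not represent V, the fit would retain exactly the projection error the paper discusses.

```python
import numpy as np

def lsmc_liability(n_outer=5000, seed=0):
    """Least squares Monte Carlo sketch: estimate the time-1 conditional
    value V(S1) = E[payoff | S1] by regressing one noisy inner payoff per
    outer scenario on a polynomial basis in the risk factor S1."""
    rng = np.random.default_rng(seed)
    s1 = rng.normal(0.0, 1.0, n_outer)                  # outer scenarios
    payoff = s1**2 + rng.normal(0.0, 1.0, n_outer)      # one noisy inner path each
    X = np.vander(s1, 4)                                # cubic basis (contains S1^2)
    beta, *_ = np.linalg.lstsq(X, payoff, rcond=None)
    # fitted V(S1) at a few scenario values; true values are S1^2
    return np.polyval(beta, np.array([-1.0, 0.0, 1.0]))
```

With a single inner path per scenario the individual payoffs are very noisy, but the regression averages the noise away across scenarios, which is what makes LSMC feasible where full nested simulation is not.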
NASA Astrophysics Data System (ADS)
Xiong, Chuan; Shi, Jiancheng
2014-01-01
To date, light scattering models of snow take little account of real snow microstructure. The idealized spherical or other single-shape particle assumptions in previous snow light scattering models introduce errors into light scattering modeling of snow and, in turn, into remote sensing inversion algorithms. This paper builds a polarized snow reflectance model based on a bicontinuous medium, with which the real snow microstructure is taken into account. The specific surface area of the bicontinuous medium can be derived analytically. The polarized Monte Carlo ray tracing technique is applied to the computer-generated bicontinuous medium. With proper algorithms, the snow surface albedo, bidirectional reflectance distribution function (BRDF) and polarized BRDF can be simulated. Model-predicted spectral albedo and bidirectional reflectance factor (BRF) validate well against experimental data. The predicted relationship between snow surface albedo and snow specific surface area (SSA) can be used for future improvement of SSA inversion algorithms. The model-predicted polarized reflectance is also validated and proved accurate, so it can be further applied in polarized remote sensing.
Computer simulations of equilibrium magnetization and microstructure in magnetic fluids
NASA Astrophysics Data System (ADS)
Rosa, A. P.; Abade, G. C.; Cunha, F. R.
2017-09-01
In this work, Monte Carlo and Brownian Dynamics simulations are developed to compute the equilibrium magnetization of a magnetic fluid under the action of a homogeneous applied magnetic field. The particles are free of inertia and modeled as hard spheres of equal diameter. Two different periodic boundary conditions are implemented: the minimum image method and the Ewald summation technique, replicating a finite number of particles throughout the suspension volume. A comparison of the equilibrium magnetization resulting from the minimum image approach and Ewald sums is performed using Monte Carlo simulations. The Monte Carlo simulations with minimum image and lattice sums are used to investigate the suspension microstructure by computing the radial pair-distribution function g0(r), which measures the probability density of finding a second particle at a distance r from a reference particle. This function provides relevant information on structure formation and its anisotropy through the suspension. The numerical results for g0(r) are compared with theoretical predictions based on quite a different approach in the absence of the field and dipole-dipole interactions. A very good quantitative agreement is found for a particle volume fraction of 0.15, providing a validation of the present simulations. In general, the investigated suspensions are dominated by structures like dimer and trimer chains, with trimers having a formation probability an order of magnitude lower than dimers. Using Monte Carlo with lattice sums, the density distribution function g2(r) is also examined. Whenever this function differs from zero, it indicates structure anisotropy in the suspension. The dependence of the equilibrium magnetization on the applied field, the magnetic particle volume fraction, and the magnitude of the dipole-dipole magnetic interactions is explored for both boundary conditions.
Results show that at dilute regimes and with moderate dipole-dipole interactions, the standard method of minimum image is both accurate and computationally efficient. Otherwise, lattice sums of magnetic particle interactions are required to accelerate convergence of the equilibrium magnetization. The accuracy of the numerical code is also quantitatively verified by comparing the magnetization obtained from the numerical results with asymptotic predictions of high order in the particle volume fraction, in the presence of dipole-dipole interactions. In addition, Brownian Dynamics simulations are used to examine the magnetization relaxation of a ferrofluid and to calculate the magnetic relaxation time as a function of the magnetic particle interaction strength for a given particle volume fraction and a non-dimensional applied field. The simulations of magnetization relaxation show the existence of a critical value of the dipole-dipole interaction parameter. For interaction strengths below the critical value at a given particle volume fraction, the magnetic relaxation time is close to the Brownian relaxation time and the suspension has no appreciable memory. On the other hand, for dipole-dipole interaction strengths beyond the critical value, the relaxation time increases exponentially with the interaction strength. Although we have considered equilibrium conditions, the obtained results have far-reaching implications for the analysis of magnetic suspensions under external flow.
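The minimum image convention discussed above (accurate in dilute regimes with moderate dipolar coupling, per the abstract) amounts to shifting each pair displacement by the nearest periodic image. A short sketch with hypothetical box size and positions; long-ranged dipolar sums would instead need the Ewald technique:

```python
import numpy as np

def min_image_disp(ri, rj, box):
    """Minimum image convention: displacement from particle j to particle i
    through the nearest periodic image, valid for interactions shorter-ranged
    than half the box length."""
    d = ri - rj
    return d - box * np.round(d / box)

def pair_distances(pos, box):
    """All pairwise minimum-image distances, e.g. to histogram g0(r)."""
    n = len(pos)
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            out.append(np.linalg.norm(min_image_disp(pos[i], pos[j], box)))
    return np.array(out)
```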
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow the formulation of a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named 'stochastic resolution' in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in (1) the free-molecular regime or (2) the continuum regime are simulated for this purpose.
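A constant-number coagulation scheme of the general kind discussed above can be sketched for the simplest case of equally weighted particles and a constant kernel, where the mean-field solution n(t) = n0 / (1 + beta*n0*t/2) is known. This is a generic Smith-Matsoukas-style illustration under those simplifying assumptions, not the paper's weighted-particle or GPU algorithm:

```python
import numpy as np

def constant_number_coagulation(N=2000, beta=1.0, n0=1.0, t_end=10.0, seed=0):
    """Constant-number Monte Carlo for coagulation with a constant kernel.
    After each merging event a random particle is duplicated so the sample
    size N stays fixed; the represented physical concentration n shrinks."""
    rng = np.random.default_rng(seed)
    m = np.ones(N)                     # monodisperse initial masses
    n, t = n0, 0.0                     # physical number concentration, time
    while t < t_end:
        t += 2.0 / (beta * n * N)      # mean inter-event time for a constant kernel
        i, j = rng.choice(N, size=2, replace=False)
        m[i] += m[j]                   # coagulate particle j into particle i
        m[j] = m[rng.integers(N)]      # duplicate a random particle to restore N
                                       # (the O(1/N) self-copy bias is ignored here)
        n *= (N - 1) / N               # each event lowers the true concentration
    return n, float(m.mean())
```

The mean particle mass grows as m0*(1 + beta*n0*t/2) while the sample size, and hence the statistical noise, stays constant, which is the point of constant-number schemes.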
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardiansyah, D.; Haryanto, F.; Male, S.
2014-09-30
Prism is a non-commercial radiotherapy treatment planning system (RTPS) developed by Ira J. Kalet from Washington University. An inhomogeneity factor is included in the Prism dose calculation. The aim of this study is to investigate the sensitivity of the Prism dose calculation using Monte Carlo simulation. A phase-space source from the linear accelerator (LINAC) head is implemented for the Monte Carlo simulation. To achieve this aim, the Prism dose calculation is compared with an EGSnrc Monte Carlo simulation, and the percentage depth dose (PDD) and R50 from both calculations are observed. BEAMnrc simulates electron transport in the LINAC head and produces a phase-space file, which is used as DOSXYZnrc input to simulate electron transport in the phantom. The study starts with a commissioning process in a water phantom, in which the Monte Carlo simulation is adjusted to match the Prism RTPS; the commissioning result is then used for the inhomogeneity study. The physical parameters of the inhomogeneity phantom varied in this study are the density, location, and thickness of the tissue. The commissioning, using R50 and PDD with the practical range (Rp) as references, shows that the optimum energy of the Monte Carlo simulation for the 6 MeV electron beam is 6.8 MeV. In the inhomogeneity study, the average deviation for all cases in the region of interest is below 5%. Based on ICRU recommendations, Prism has good ability to calculate the radiation dose in inhomogeneous tissue.
NASA Astrophysics Data System (ADS)
Zhang, Siyuan; Li, Liang; Li, Ruizhe; Chen, Zhiqiang
2017-11-01
We present the design concept and initial simulations for a polychromatic full-field fan-beam x-ray fluorescence computed tomography (XFCT) device with pinhole collimators and linear-array photon counting detectors. The phantom is irradiated by a fan-beam polychromatic x-ray source filtered by copper. Fluorescent photons are stimulated and then collected by two linear-array photon counting detectors with pinhole collimators. The Compton scatter correction and the attenuation correction are applied in the data processing, and the maximum-likelihood expectation maximization algorithm is applied for the image reconstruction of XFCT. The physical modeling of the XFCT imaging system was described, and a set of rapid Monte Carlo simulations was carried out to examine the feasibility and sensitivity of the XFCT system. Different concentrations of gadolinium (Gd) and gold (Au) solutions were used as contrast agents in simulations. Results show that 0.04% of Gd and 0.065% of Au can be well reconstructed with the full scan time set at 6 min. Compared with using the XFCT system with a pencil-beam source or a single-pixel detector, using a full-field fan-beam XFCT device with linear-array detectors results in significant scanning time reduction and may satisfy requirements of rapid imaging, such as in vivo imaging experiments.
Verification of unfold error estimates in the UFO code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehl, D.L.; Biggs, F.
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation), and 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low-energy x rays emitted by Z-pinch and ion-beam-driven hohlraums.
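The Monte Carlo error estimate described above, re-running the unfold on data perturbed with Gaussian deviates, can be sketched with a toy linear unfold (least squares stands in for a real unfold algorithm; the response matrix and noise level are hypothetical, not the UFO blackbody problem):

```python
import numpy as np

def mc_unfold_errors(R, d, sigma, n_mc=2000, seed=0):
    """Monte Carlo unfold-uncertainty estimate: perturb the measurements d
    with Gaussian deviates of standard deviation sigma, re-run the unfold
    (here a least squares solve against response matrix R), and take the
    spread of the unfolded spectra as the uncertainty."""
    rng = np.random.default_rng(seed)
    sols = np.array([np.linalg.lstsq(R, d + rng.normal(0.0, sigma, d.size),
                                     rcond=None)[0] for _ in range(n_mc)])
    return sols.std(axis=0)
```

For this linear toy case the error matrix method gives the answer in closed form, sigma * sqrt(diag((R^T R)^-1)), so the two estimates can be checked against each other; the Monte Carlo route remains usable when no such closed form exists.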
Random number generators tested on quantum Monte Carlo simulations.
Hongo, Kenta; Maezono, Ryo; Miura, Kenichi
2010-08-01
We have tested and compared several (pseudo) random number generators (RNGs) in a practical application: ground state energy calculations of molecules using variational and diffusion Monte Carlo methods. A new multiple recursive generator with 8th-order recursion (MRG8) and the Mersenne twister generator (MT19937) are tested and compared with the RANLUX generator at its five luxury levels (RANLUX-[0-4]). Both MRG8 and MT19937 are shown to give the same total energy as that evaluated with RANLUX-4 (highest luxury level) within the statistical error bars, at lower computational cost for generating the sequence. We also tested the notoriously flawed implementation of a linear congruential generator (LCG), RANDU, for comparison. (c) 2010 Wiley Periodicals, Inc.
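The comparison methodology, running the same Monte Carlo estimator under different generators and checking agreement within statistical error bars, can be sketched with NumPy's bit generators. MT19937 plus PCG64 and Philox stand in for the paper's MRG8 and RANLUX (which NumPy does not ship), and a pi estimate stands in for the VMC/DMC energies:

```python
import numpy as np

def mc_pi(bit_generator, n=200_000):
    """Monte Carlo estimate of pi: a cheap stand-in statistic for comparing
    random number generators on the same estimator."""
    rng = np.random.Generator(bit_generator)
    x, y = rng.random(n), rng.random(n)
    return 4.0 * np.mean(x * x + y * y < 1.0)   # fraction inside quarter circle

# Same estimator, different generators: results should agree within the
# estimator's statistical error bar (~4e-3 here), as the paper finds for
# MRG8, MT19937, and RANLUX.
estimates = {name: mc_pi(gen) for name, gen in
             [("MT19937", np.random.MT19937(42)),
              ("PCG64", np.random.PCG64(42)),
              ("Philox", np.random.Philox(42))]}
```

A defective generator such as RANDU would reveal itself as a disagreement well outside those error bars, or as structure in higher-dimensional statistics.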
PBMC: Pre-conditioned Backward Monte Carlo code for radiative transport in planetary atmospheres
NASA Astrophysics Data System (ADS)
García Muñoz, A.; Mills, F. P.
2017-08-01
PBMC (Pre-Conditioned Backward Monte Carlo) solves the vector Radiative Transport Equation (vRTE) and can be applied to planetary atmospheres irradiated from above. The code builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. In accounting for the polarization in the sampling of photon propagation directions and pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions, PBMC avoids the unstable and biased solutions of classical BMC algorithms for conservative, optically-thick, strongly-polarizing media such as Rayleigh atmospheres.
NASA Astrophysics Data System (ADS)
Persano Adorno, Dominique; Pizzolato, Nicola; Fazio, Claudio
2015-09-01
Within the context of higher education for science or engineering undergraduates, we present an inquiry-driven learning path aimed at developing a more meaningful conceptual understanding of the electron dynamics in semiconductors in the presence of applied electric fields. The electron transport in a nondegenerate n-type indium phosphide bulk semiconductor is modelled using a multivalley Monte Carlo approach. The main characteristics of the electron dynamics are explored under different values of the driving electric field, lattice temperature and impurity density. Simulation results are presented by following a question-driven path of exploration, starting from the validation of the model and moving up to reasoned inquiries about the observed characteristics of electron dynamics. Our inquiry-driven learning path, based on numerical simulations, represents a viable example of how to integrate a traditional lecture-based teaching approach with effective learning strategies, providing science or engineering undergraduates with practical opportunities to enhance their comprehension of the physics governing the electron dynamics in semiconductors. Finally, we present a general discussion about the advantages and disadvantages of using an inquiry-based teaching approach within a learning environment based on semiconductor simulations.
Lattice Boltzmann accelerated direct simulation Monte Carlo for dilute gas flow simulations.
Di Staso, G; Clercx, H J H; Succi, S; Toschi, F
2016-11-13
Hybrid particle-continuum computational frameworks permit the simulation of gas flows by locally adjusting the resolution to the degree of non-equilibrium displayed by the flow in different regions of space and time. In this work, we present a new scheme that couples the direct simulation Monte Carlo (DSMC) with the lattice Boltzmann (LB) method in the limit of isothermal flows. The former handles strong non-equilibrium effects, as they typically occur in the vicinity of solid boundaries, whereas the latter is in charge of the bulk flow, where non-equilibrium can be dealt with perturbatively, i.e. according to Navier-Stokes hydrodynamics. The proposed concurrent multiscale method is applied to the dilute gas Couette flow, showing major computational gains when compared with the full DSMC scenarios. In addition, it is shown that the coupling with LB in the bulk flow can speed up the DSMC treatment of the Knudsen layer with respect to the full DSMC case. In other words, LB acts as a DSMC accelerator. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2016 The Author(s).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, J; Park, S; Jeong, J
Purpose: In particle therapy and radiobiology, the investigation of the mechanisms leading to the death of target cancer cells induced by ionising radiation is an active field of research. Recently, several studies based on Monte Carlo simulation codes have been initiated in order to simulate the physical interactions of ionising particles at the cellular scale and in DNA. Geant4-DNA is one of them; it is an extension of the general purpose Geant4 Monte Carlo simulation toolkit for the simulation of physical interactions at the sub-micrometre scale. In this study, we present Geant4-DNA Monte Carlo simulations for the prediction of DNA strand breakage using a geometrical modelling of the DNA structure. Methods: For the simulation of DNA strand breakage, we developed a specific DNA geometrical structure. This structure consists of DNA components such as the deoxynucleotide pairs, the DNA double helix, the nucleosomes and the chromatin fibre. Each component is made of water because the cross section models currently available in Geant4-DNA for protons apply to liquid water only. Also, at the macroscopic scale, protons were generated with various energies available for proton therapy at the National Cancer Center, obtained using validated proton beam simulations developed in previous studies. These multi-scale simulations were combined for the validation of Geant4-DNA in radiobiology. Results: In the double helix structure, the energy deposited in a strand allowed us to determine direct DNA damage from physical interactions. In other words, the amount of dose and the frequency of damage in microscopic geometries was related to the direct radiobiological effect. Conclusion: In this report, we calculated the frequency of DNA strand breakage using Geant4-DNA physics processes for liquid water. This study is ongoing, in order to develop geometries which use realistic DNA material instead of liquid water. This will be tested as soon as cross sections for DNA material become available in Geant4-DNA.
NASA Astrophysics Data System (ADS)
Boisson, F.; Wimberley, C. J.; Lehnert, W.; Zahra, D.; Pham, T.; Perkins, G.; Hamze, H.; Gregoire, M.-C.; Reilhac, A.
2013-10-01
Monte Carlo-based simulation of positron emission tomography (PET) data plays a key role in the design and optimization of data correction and processing methods. Our first aim was to adapt and configure the PET-SORTEO Monte Carlo simulation program for the geometry of the widely distributed Inveon PET preclinical scanner manufactured by Siemens Preclinical Solutions. The validation was carried out against actual measurements performed on the Inveon PET scanner at the Australian Nuclear Science and Technology Organisation in Australia and at the Brain & Mind Research Institute and by strictly following the NEMA NU 4-2008 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction and count rates, image quality and Derenzo phantom studies. Results showed that PET-SORTEO reliably reproduces the performances of this Inveon preclinical system. In addition, imaging studies showed that the PET-SORTEO simulation program provides raw data for the Inveon scanner that can be fully corrected and reconstructed using the same programs as for the actual data. All correction techniques (attenuation, scatter, randoms, dead-time, and normalization) can be applied on the simulated data leading to fully quantitative reconstructed images. In the second part of the study, we demonstrated its ability to generate fast and realistic biological studies. PET-SORTEO is a workable and reliable tool that can be used, in a classical way, to validate and/or optimize a single PET data processing step such as a reconstruction method. However, we demonstrated that by combining a realistic simulated biological study ([11C]Raclopride here) involving different condition groups, simulation allows one also to assess and optimize the data correction, reconstruction and data processing line flow as a whole, specifically for each biological study, which is our ultimate intent.
Gamma irradiator dose mapping simulation using the MCNP code and benchmarking with dosimetry.
Sohrabpour, M; Hassanzadeh, M; Shahriari, M; Sharifzadeh, M
2002-10-01
The Monte Carlo transport code MCNP has been applied to simulate the dose rate distribution in the IR-136 gamma irradiator system. Isodose curves, cumulative dose values, and system design data such as throughputs, over-dose ratios, and efficiencies have been simulated as functions of product density. Simulated isodose curves and cumulative dose values were compared with dosimetry values obtained using polymethyl methacrylate, Fricke, ethanol-chlorobenzene, and potassium dichromate dosimeters. The produced system design data were also found to agree quite favorably with the system manufacturer's data. MCNP has thus been found to be an effective transport code for handling various dose mapping exercises for gamma irradiators.
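As a toy illustration of Monte Carlo dose mapping versus depth (not an MCNP model of the IR-136 geometry; the attenuation coefficient and product slab are hypothetical), one can histogram sampled first-interaction depths of photons in a homogeneous product:

```python
import numpy as np

def depth_dose(mu=0.07, n=100_000, depth_max=30.0, nbins=30, seed=0):
    """Toy Monte Carlo depth-dose profile: sample exponential free paths of
    photons entering a homogeneous product (attenuation coefficient mu per cm)
    and histogram the first-interaction sites with depth. Scatter, build-up,
    and geometry effects that MCNP would model are deliberately omitted."""
    rng = np.random.default_rng(seed)
    depths = rng.exponential(1.0 / mu, n)                  # sampled free paths
    hist, edges = np.histogram(depths[depths < depth_max],
                               bins=nbins, range=(0.0, depth_max))
    return hist / n, edges                                 # interaction fraction per bin
```

The profile should fall off as exp(-mu * z), the same exponential attenuation that shapes the isodose curves and over-dose ratios in a real irradiator calculation.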
North Alabama Lightning Mapping Array (LMA): VHF Source Retrieval Algorithm and Error Analyses
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J.; Bailey, J.; Krider, E. P.; Bateman, M. G.; Boccippio, D.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA Marshall Space Flight Center (MSFC) and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix Theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results. However, for many source locations, the Curvature Matrix Theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
Monte Carlo simulation of the radiant field produced by a multiple-lamp quartz heating system
NASA Technical Reports Server (NTRS)
Turner, Travis L.
1991-01-01
A method is developed for predicting the radiant heat flux distribution produced by a reflected bank of tungsten-filament tubular-quartz radiant heaters. The method is correlated with experimental results from two cases, one consisting of a single lamp and a flat reflector and the other consisting of a single lamp and a parabolic reflector. The simulation methodology, computer implementation, and experimental procedures are discussed. Analytical refinements necessary for comparison with experiment are discussed and applied to a multilamp, common reflector heating system.
Chiral magnetic effect in lattice QCD with a chiral chemical potential.
Yamamoto, Arata
2011-07-15
We perform a first lattice QCD simulation including a two-flavor dynamical fermion with a chiral chemical potential. Because the chiral chemical potential gives rise to no sign problem, we can exactly analyze a chirally imbalanced QCD matter by Monte Carlo simulation. By applying an external magnetic field to this system, we obtain a finite induced current along the magnetic field, which corresponds to the chiral magnetic effect. The obtained induced current is proportional to the magnetic field and to the chiral chemical potential, which is consistent with an analytical prediction.
Gereben, Orsolya; Pusztai, László
2013-11-13
Series of flexible-molecule reverse Monte Carlo calculations, using bonding and non-bonding interatomic potential functions (FMP-RMC), were performed starting from previous molecular dynamics results that had applied the OPLS-AA and EncadS force fields. During RMC modeling, the experimental x-ray total scattering structure factor was approached. The discrepancy between experimental and calculated structure factors, in comparison with the molecular dynamics results, decreased substantially in each case. The room-temperature liquid structure of bis(methylthio)methane is excellently described by the FMP-RMC simulation that applied the EncadS force field parameters. The main conformer was found to be AG with 55.2%, followed by 37.2% of G(+)G(+) (G(-)G(-)) and 7.6% of AA; the stability of the G(+)G(+) (G(-)G(-)) conformer is most probably caused by the anomeric effect. The liquid structure of diethyl sulfide is best described by applying the OPLS-AA force field parameters during FMP-RMC simulation, although in this case the force field parameters were found to be not fully compatible with the experimental data. Here, the two main conformers are AG (50.6%) and AA (40%). In addition to findings on the actual systems, a fairly detailed comparison between traditional and FMP-RMC methodology is provided.
Multivariate stochastic simulation with subjective multivariate normal distributions
P. J. Ince; J. Buongiorno
1991-01-01
In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
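The technique the report alludes to can be sketched with a short simulation: instead of sampling variables independently, draw them jointly from a multivariate normal built from subjectively assessed means, standard deviations, and a correlation matrix. All numbers below are illustrative assumptions, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(42)

# Subjectively assessed means, standard deviations, and correlation
# (illustrative numbers, e.g. a price and a cost that tend to move together).
mean = np.array([100.0, 50.0])
sd = np.array([15.0, 8.0])
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])

# Build the covariance matrix and draw correlated samples in one call.
cov = np.outer(sd, sd) * corr
samples = rng.multivariate_normal(mean, cov, size=100_000)

# The sample correlation recovers the subjective correlation,
# which independent sampling would miss entirely.
print(np.corrcoef(samples.T)[0, 1])   # ~0.6
```

The same covariance construction generalizes to any number of variables, which is what makes the approach attractive when only a subjective correlation matrix is available.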
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taleei, Reza; Guan, Fada; Peeler, Chris
Purpose: ³He ions may hold great potential for clinical therapy because of both their physical and biological properties. In this study, the authors investigated the physical properties, i.e., the depth-dose curves from primary and secondary particles, and the energy distributions of helium (³He) ions. A relative biological effectiveness (RBE) model was applied to assess the biological effectiveness on survival of multiple cell lines. Methods: In light of the lack of experimental measurements and cross sections, the authors used Monte Carlo methods to study the energy deposition of ³He ions. The transport of ³He ions in water was simulated by using three Monte Carlo codes (FLUKA, GEANT4, and MCNPX) for incident beams with Gaussian energy distributions with average energies of 527 and 699 MeV and a full width at half maximum of 3.3 MeV in both cases. The RBE of each was evaluated by using the repair-misrepair-fixation model. In all of the simulations with each of the three Monte Carlo codes, the same geometry and primary beam parameters were used. Results: Energy deposition as a function of depth and energy spectra with high resolution were calculated on the central axis of the beam. Secondary proton dose from the primary ³He beams was predicted quite differently by the three Monte Carlo systems; the predictions differed by as much as a factor of 2. Microdosimetric parameters such as dose mean lineal energy (y_D), frequency mean lineal energy (y_F), and frequency mean specific energy (z_F) were used to characterize the radiation beam quality at four depths of the Bragg curve. Calculated RBE values were close to 1 at the entrance and reached on average 1.8 and 1.6 for prostate and head and neck cancer cell lines at the Bragg peak for both energies, but showed some variations between the different Monte Carlo codes.
Conclusions: Although the Monte Carlo codes provided different results in energy deposition and especially in secondary particle production (most of the differences between the three codes were observed close to the Bragg peak, where the energy spectrum broadens), the results in terms of RBE were generally similar.
The Development and Comparison of Molecular Dynamics Simulation and Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Chen, Jundong
2018-03-01
Molecular dynamics is an integrated technique that combines physics, mathematics, and chemistry. It is a computer simulation method and a powerful tool for studying condensed-matter systems. The technique yields not only the trajectories of the atoms but also the microscopic details of atomic motion. By studying the numerical integration algorithms used in molecular dynamics simulation, we can analyze the microstructure and the motion of particles, relate them to macroscopic material properties, and study the connection between interatomic interactions and macroscopic behavior more conveniently. Monte Carlo simulation, similar to molecular dynamics, is a tool for studying the nature of molecules and particles at the microscopic scale. In this paper, the theoretical background of computer numerical simulation is introduced, and the principal numerical integration methods are summarized, including the Verlet, leap-frog, and velocity Verlet methods. The method and principle of Monte Carlo simulation are also introduced. Finally, similarities and differences between Monte Carlo simulation and molecular dynamics simulation are discussed.
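A minimal sketch of the velocity Verlet scheme named above, applied to a harmonic oscillator (unit mass and spring constant are illustrative choices, not from the paper). One full period should return the oscillator near its starting point while conserving energy.

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Integrate Newton's equations with the velocity Verlet scheme."""
    a = force(x) / mass
    traj = [x]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt**2      # position update
        a_new = force(x) / mass
        v = v + 0.5 * (a + a_new) * dt        # velocity update (averaged force)
        a = a_new
        traj.append(x)
    return np.array(traj), v

# Harmonic oscillator with k = m = 1: the period is 2*pi.
k = 1.0
traj, v_end = velocity_verlet(x=1.0, v=0.0,
                              force=lambda x: -k * x,
                              mass=1.0, dt=0.01,
                              n_steps=int(2 * np.pi / 0.01))
# After one period the position returns to ~1 and energy stays ~0.5.
print(traj[-1], 0.5 * v_end**2 + 0.5 * k * traj[-1]**2)
```

The leap-frog method differs only in storing velocities at half steps; both share velocity Verlet's good long-term energy behavior.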
Gray: a ray tracing-based Monte Carlo simulator for PET
NASA Astrophysics Data System (ADS)
Freese, David L.; Olcott, Peter D.; Buss, Samuel R.; Levin, Craig S.
2018-05-01
Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within % when accounting for differences in peak NECR. We also estimate the peak NECR to be kcps, or within % of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
NASA Astrophysics Data System (ADS)
Ong, J. S. L.; Charin, C.; Leong, J. H.
2017-12-01
Avalanche photodiodes (APDs) with steep electric field gradients generally have low excess noise that arises from carrier multiplication within the internal gain of the devices, and the Monte Carlo (MC) method is among popular device simulation tools for such devices. However, there are few articles relating to carrier trajectory modeling in MC models for such devices. In this work, a set of electric-field-gradient-dependent carrier trajectory tracking equations are developed and used to update the positions of carriers along the path during Simple-band Monte Carlo (SMC) simulations of APDs with non-uniform electric fields. The mean gain and excess noise results obtained from the SMC model employing these equations show good agreement with the results reported for a series of silicon diodes, including a p+n diode with steep electric field gradients. These results confirm the validity and demonstrate the feasibility of the trajectory tracking equations applied in SMC models for simulating mean gain and excess noise in APDs with non-uniform electric fields. Also, the simulation results of mean gain, excess noise, and carrier ionization positions obtained from the SMC model of this work agree well with those of the conventional SMC model employing the concept of a uniform electric field within a carrier free-flight. These results demonstrate that the electric field variation within a carrier free-flight has an insignificant effect on the predicted mean gain and excess noise results. Therefore, both the SMC model of this work and the conventional SMC model can be used to predict the mean gain and excess noise in APDs with highly non-uniform electric fields.
First-Principles Monte Carlo Simulations of Reaction Equilibria in Compressed Vapors
2016-01-01
Predictive modeling of reaction equilibria presents one of the grand challenges in the field of molecular simulation. Difficulties in the study of such systems arise from the need (i) to accurately model both strong, short-ranged interactions leading to the formation of chemical bonds and weak interactions arising from the environment, and (ii) to sample the range of time scales involving frequent molecular collisions, slow diffusion, and infrequent reactive events. Here we present a novel reactive first-principles Monte Carlo (RxFPMC) approach that allows for investigation of reaction equilibria without the need to prespecify a set of chemical reactions and their ideal-gas equilibrium constants. We apply RxFPMC to investigate a nitrogen/oxygen mixture at T = 3000 K and p = 30 GPa, i.e., conditions that are present in atmospheric lightning strikes and explosions. The RxFPMC simulations show that the solvation environment leads to a significantly enhanced NO concentration that reaches a maximum when oxygen is present in slight excess. In addition, the RxFPMC simulations indicate the formation of NO2 and N2O in mole fractions approaching 1%, whereas N3 and O3 are not observed. The equilibrium distributions obtained from the RxFPMC simulations agree well with those from a thermochemical computer code parametrized to experimental data. PMID:27413785
Exploring the Dynamics of Cell Processes through Simulations of Fluorescence Microscopy Experiments
Angiolini, Juan; Plachta, Nicolas; Mocskos, Esteban; Levi, Valeria
2015-01-01
Fluorescence correlation spectroscopy (FCS) methods are powerful tools for unveiling the dynamical organization of cells. For simple cases, such as molecules passively moving in a homogeneous media, FCS analysis yields analytical functions that can be fitted to the experimental data to recover the phenomenological rate parameters. Unfortunately, many dynamical processes in cells do not follow these simple models, and in many instances it is not possible to obtain an analytical function through a theoretical analysis of a more complex model. In such cases, experimental analysis can be combined with Monte Carlo simulations to aid in interpretation of the data. In response to this need, we developed a method called FERNET (Fluorescence Emission Recipes and Numerical routines Toolkit) based on Monte Carlo simulations and the MCell-Blender platform, which was designed to treat the reaction-diffusion problem under realistic scenarios. This method enables us to set complex geometries of the simulation space, distribute molecules among different compartments, and define interspecies reactions with selected kinetic constants, diffusion coefficients, and species brightness. We apply this method to simulate single- and multiple-point FCS, photon-counting histogram analysis, raster image correlation spectroscopy, and two-color fluorescence cross-correlation spectroscopy. We believe that this new program could be very useful for predicting and understanding the output of fluorescence microscopy experiments. PMID:26039162
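The kind of simulation described can be illustrated far more simply than FERNET's reaction-diffusion scenarios: Brownian particles crossing a Gaussian observation volume produce a fluctuating intensity trace whose autocorrelation decays on the diffusion time scale. All parameters below are illustrative assumptions, and the model is two-dimensional for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Brownian particles in a periodic box observed through a Gaussian profile.
n_particles, n_steps, dt = 100, 20_000, 1e-4   # steps of 0.1 ms
D, w = 1.0, 0.25                               # diffusion (um^2/s), waist (um)
L = 2.0                                        # box size (um)

pos = rng.uniform(-L / 2, L / 2, size=(n_particles, 2))
trace = np.empty(n_steps)
for i in range(n_steps):
    pos += rng.normal(0.0, np.sqrt(2 * D * dt), size=pos.shape)
    pos = (pos + L / 2) % L - L / 2            # periodic boundaries
    r2 = np.sum(pos**2, axis=1)
    trace[i] = np.sum(np.exp(-2 * r2 / w**2))  # Gaussian detection profile

# Normalized autocorrelation G(tau) at a few lags; it decays on the
# diffusion time scale tau_D = w^2 / (4 D) ~ 16 ms here.
f = trace - trace.mean()
G = [np.dot(f[:-k], f[k:]) / ((len(f) - k) * trace.mean()**2)
     for k in (1, 100, 2000)]
print(G)
```

In a homogeneous medium G(tau) has a known analytical form to fit against; the Monte Carlo route becomes essential precisely when, as the abstract notes, no such closed form exists.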
MASTOS: Mammography Simulation Tool for design Optimization Studies.
Spyrou, G; Panayiotakis, G; Tzanakos, G
2000-01-01
Mammography is a high-quality imaging technique for the detection of breast lesions, which requires dedicated equipment and optimum operation. The design parameters of a mammography unit have to be decided and evaluated before the construction of such a high-cost apparatus. The optimum operational parameters must also be defined well before real breast examination. MASTOS is a software package, based on Monte Carlo methods, that is designed to be used as a simulation tool in mammography. The input consists of the parameters that have to be specified when using a mammography unit, and also the parameters specifying the shape and composition of the breast phantom. In addition, the input may specify parameters needed in the design of a new mammographic apparatus. The main output of the simulation is a mammographic image and calculations of various factors that describe the image quality. The Monte Carlo simulation code is PC-based and is driven by an outer shell with a graphical user interface. The entire software package is a simulation tool for mammography and can be applied in basic research and/or in training in the fields of medical physics and biomedical engineering, as well as in the performance evaluation of new designs of mammography units and in the determination of optimum standards for the operational parameters of a mammography unit.
Lachet, V; Teuler, J-M; Rousseau, B
2015-01-08
A classical all-atoms force field for molecular simulations of hydrofluorocarbons (HFCs) has been developed. Lennard-Jones force centers plus point charges are used to represent dispersion-repulsion and electrostatic interactions. Parametrization of this force field has been performed iteratively using three target properties of pentafluorobutane: the quantum energy of an isolated molecule, the dielectric constant in the liquid phase, and the compressed liquid density. The accuracy and transferability of this new force field has been demonstrated through the simulation of different thermophysical properties of several fluorinated compounds, showing significant improvements compared to existing models. This new force field has been applied to study solubilities of several gases in poly(vinylidene fluoride) (PVDF) above the melting temperature of this polymer. The solubility of CH4, CO2, H2S, H2, N2, O2, and H2O at infinite dilution has been computed using test particle insertions in the course of a NpT hybrid Monte Carlo simulation. For CH4, CO2, and their mixtures, some calculations beyond the Henry regime have also been performed using hybrid Monte Carlo simulations in the osmotic ensemble, allowing both swelling and solubility determination. An ideal mixing behavior is observed, with identical solubility coefficients in the mixtures and in pure gas systems.
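The test-particle insertion step can be sketched in miniature with hard spheres rather than the paper's HFC force field. Here the "solvent" snapshots are idealized random configurations rather than equilibrated NpT frames, so the numbers are purely illustrative; the point is the Widom estimator itself, mu_ex = -kT ln <exp(-beta dU)>.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hard-sphere test particle inserted into snapshots of N spheres in a
# periodic box (illustrative stand-in for equilibrated configurations).
L, N, sigma = 10.0, 50, 1.0
n_config, n_insert = 200, 200

boltz = []
for _ in range(n_config):
    solvent = rng.uniform(0, L, size=(N, 3))
    for _ in range(n_insert):
        trial = rng.uniform(0, L, size=3)
        d = solvent - trial
        d -= L * np.round(d / L)               # minimum-image convention
        overlap = np.any(np.sum(d * d, axis=1) < sigma**2)
        # Hard spheres: exp(-beta*dU) is 1 with no overlap, else 0.
        boltz.append(0.0 if overlap else 1.0)

mu_ex = -np.log(np.mean(boltz))                # in units of kT
print(mu_ex)   # ~0.21 for this density (analytic: N*(4/3)*pi*sigma^3/L^3)
```

For a soft potential the 0/1 indicator is replaced by exp(-beta*dU) of the actual insertion energy, and the snapshots come from the NpT hybrid Monte Carlo run, as in the abstract.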
DOE Office of Scientific and Technical Information (OSTI.GOV)
Küchlin, Stephan, E-mail: kuechlin@ifd.mavt.ethz.ch; Jenny, Patrick
2017-01-01
A major challenge for the conventional Direct Simulation Monte Carlo (DSMC) technique lies in the fact that its computational cost becomes prohibitive in the near-continuum regime, where the Knudsen number (Kn), characterizing the degree of rarefaction, becomes small. In contrast, the Fokker–Planck (FP) based particle Monte Carlo scheme allows for computationally efficient simulations of rarefied gas flows in the low and intermediate Kn regime. The Fokker–Planck collision operator, instead of performing the binary collisions employed by the DSMC method, integrates continuous stochastic processes for the phase space evolution in time. This allows for time step and grid cell sizes larger than the respective collisional scales required by DSMC. Dynamically switching between the FP and the DSMC collision operators in each computational cell is the basis of the combined FP-DSMC method, which has been proven successful in simulating flows covering the whole Kn range. Until recently, this algorithm had only been applied to two-dimensional test cases. In this contribution, we present the first general-purpose implementation of the combined FP-DSMC method. Utilizing both shared- and distributed-memory parallelization, this implementation provides the capability for simulations involving many particles and complex geometries by exploiting state-of-the-art computer cluster technologies.
Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias
2015-01-01
Recent numerical results (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the asymptotic advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as the escape rate in quantum Monte Carlo for a class of problems; thus, the Google result may be explained within our framework. We also find that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.
Mobit, P
2002-01-01
The energy responses of LiF-TLDs irradiated in megavoltage electron and photon beams have been determined experimentally by many investigators over the past 35 years, but the results vary considerably. General cavity theory has been used to model some of the experimental findings, but the predictions of these cavity theories differ from each other and from measurements by more than 13%. Recently, two groups of investigators using Monte Carlo simulations and careful experimental techniques showed that the energy response of 1 mm or 2 mm thick LiF-TLDs irradiated by megavoltage photon and electron beams is not more than 5% less than unity for low-Z phantom materials like water or Perspex. However, when the depth of irradiation is significantly different from dmax and the TLD size is more than 5 mm, the energy response is up to 12% less than unity for incident electron beams. Monte Carlo simulations of some of the experiments reported in the literature showed that some of the contradictory experimental results are reproducible with Monte Carlo simulations. Monte Carlo simulations show that the energy response of LiF-TLDs in electron beams depends on the size of the detector, the depth of irradiation, and the incident electron energy. Other differences can be attributed to absolute dose determination and the precision of the TL technique. Monte Carlo simulations have also been used to evaluate some of the published general cavity theories. The results show that some of the parameters used to evaluate Burlin's general cavity theory are wrong by a factor of 3. Despite this, the estimate of the energy response for most clinical situations using Burlin's cavity equation agrees with Monte Carlo simulations within 1%.
Probability of misclassifying biological elements in surface waters.
Loga, Małgorzata; Wierzchołowska-Dziedzic, Anna
2017-11-24
Measurement uncertainties are inherent to assessment of biological indices of water bodies. The effect of these uncertainties on the probability of misclassification of ecological status is the subject of this paper. Four Monte-Carlo (M-C) models were applied to simulate the occurrence of random errors in the measurements of metrics corresponding to four biological elements of surface waters: macrophytes, phytoplankton, phytobenthos, and benthic macroinvertebrates. Long series of error-prone measurement values of these metrics, generated by M-C models, were used to identify cases in which values of any of the four biological indices lay outside of the "true" water body class, i.e., outside the class assigned from the actual physical measurements. Fraction of such cases in the M-C generated series was used to estimate the probability of misclassification. The method is particularly useful for estimating the probability of misclassification of the ecological status of surface water bodies in the case of short sequences of measurements of biological indices. The results of the Monte-Carlo simulations show a relatively high sensitivity of this probability to measurement errors of the river macrophyte index (MIR) and high robustness to measurement errors of the benthic macroinvertebrate index (MMI). The proposed method of using Monte-Carlo models to estimate the probability of misclassification has significant potential for assessing the uncertainty of water body status reported to the EC by the EU member countries according to WFD. The method can be readily applied also in risk assessment of water management decisions before adopting the status dependent corrective actions.
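The M-C procedure described, perturbing a measured index with random errors and counting how often the class changes, can be sketched as follows. The class boundaries and the error model are illustrative assumptions, not WFD values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Class boundaries of a normalized biological index on a 0..1 scale
# (five classes: bad | poor | moderate | good | high); illustrative only.
boundaries = [0.2, 0.4, 0.6, 0.8]

def classify(value):
    """Return the class index (0..4) of an index value."""
    return sum(value > b for b in boundaries)

def p_misclassification(measured_mean, rel_error, n_runs=100_000):
    """Fraction of error-perturbed index values falling outside the
    class assigned to the error-free measurement."""
    true_class = classify(measured_mean)
    noisy = rng.normal(measured_mean, rel_error * measured_mean, size=n_runs)
    return np.mean([classify(v) != true_class for v in noisy])

# An index just above a class boundary is far more likely to be
# misclassified than one in the middle of its class.
print(p_misclassification(0.61, 0.10))   # near a boundary: large
print(p_misclassification(0.70, 0.10))   # mid-class: small
```

This also shows why short measurement series are the problematic case: averaging n measurements shrinks rel_error by roughly 1/sqrt(n), pulling both probabilities down.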
NASA Astrophysics Data System (ADS)
Zhang, Yan; Sun, JinWei; Rolfe, Peter
2010-12-01
Near-infrared spectroscopy (NIRS) can be used as the basis of non-invasive neuroimaging that may allow the measurement of haemodynamic changes in the human brain evoked by applied stimuli. Since this technique is very sensitive, physiological interference arising from the cardiac cycle and breathing can significantly affect the signal quality. Such interference is difficult to remove by conventional techniques because it occurs not only in the extracerebral layer but also in the brain tissue itself. Previous work on this problem employing temporal filtering, spatial filtering, and adaptive filtering has exhibited good performance for recovering brain activity data in evoked response studies. However, in this study, we present a time-frequency adaptive method for physiological interference reduction based on the combination of empirical mode decomposition (EMD) and Hilbert spectral analysis (HSA). Monte Carlo simulations based on a five-layered slab model of a human adult head were implemented to evaluate our methodology. We applied an EMD algorithm to decompose the NIRS time series derived from Monte Carlo simulations into a series of intrinsic mode functions (IMFs). In order to identify the IMFs associated with systemic interference, the extracted components were then Hilbert transformed, from which the instantaneous frequencies could be acquired. By reconstructing the NIRS signal from properly selected IMFs, we found that the physiological interference is effectively filtered out and the evoked brain response is recovered with an even higher signal-to-noise ratio (SNR). The results obtained demonstrate that EMD, combined with HSA, can effectively separate, identify, and remove the physiological contamination from the evoked brain response obtained with NIRS using a simple single source-detector pair.
Monte Carlo simulation of aorta autofluorescence
NASA Astrophysics Data System (ADS)
Kuznetsova, A. A.; Pushkareva, A. E.
2016-08-01
Results of numerical simulation of autofluorescence of the aorta by the method of Monte Carlo are reported. Two states of the aorta, normal and with atherosclerotic lesions, are studied. A model of the studied tissue is developed on the basis of information about optical, morphological, and physico-chemical properties. It is shown that the data obtained by numerical Monte Carlo simulation are in good agreement with experimental results indicating adequacy of the developed model of the aorta autofluorescence.
Extended time-interval analysis
NASA Astrophysics Data System (ADS)
Fynbo, H. O. U.; Riisager, K.
2014-01-01
Several extensions of the half-life analysis method recently suggested by Horvat and Hardy are put forward. Goodness-of-fit testing is included, and the method is extended to cases where more information is available for each decay event, which allows applications also to, e.g., γ-decay data. The results are tested with Monte Carlo simulations and are applied to the decays of 64Cu and 56Mn.
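The Monte Carlo testing step can be sketched for a simple maximum-likelihood half-life estimator. The estimator and sample sizes below are illustrative (the paper's extended method also folds in goodness-of-fit testing and per-event information); the 64Cu half-life is the accepted 12.701 h.

```python
import numpy as np

rng = np.random.default_rng(1)

t_half_true = 12.701 * 3600.0           # 64Cu half-life in seconds
tau = t_half_true / np.log(2)           # mean lifetime

# Pseudo-experiments: draw decay times, estimate the half-life from each.
estimates = []
for _ in range(500):
    decays = rng.exponential(tau, size=10_000)
    # Maximum-likelihood estimate: sample mean lifetime times ln 2.
    estimates.append(np.mean(decays) * np.log(2))

bias = np.mean(estimates) / t_half_true - 1.0
spread = np.std(estimates) / t_half_true
print(bias, spread)   # bias ~0, relative spread ~1/sqrt(10000)
```

Running such pseudo-experiments checks that an estimator is unbiased and that its quoted uncertainty matches the observed scatter, which is exactly the role Monte Carlo plays in validating half-life analysis methods.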
2011-09-01
[Figure captions recovered from the list of figures: "Structure Evolution During Sintering", from [19]; Figure 10, "Ising Model Configuration With Eight Nearest Neighbors".] INTRODUCTION. A. MOTIVATION. The ability to fabricate structural components from metals with a fine (micron-sized), controlled grain size is one of the hallmarks of modern structural metallurgy. Powder metallurgy, in particular, consists of powder manufacture, powder blending, compacting, and sintering
Physical Principle for Generation of Randomness
NASA Technical Reports Server (NTRS)
Zak, Michail
2009-01-01
A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)
Structural Reliability and Monte Carlo Simulation.
ERIC Educational Resources Information Center
Laumakis, P. J.; Harlow, G.
2002-01-01
Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)
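A sketch of such a stress-versus-strength reliability simulation, with illustrative distribution parameters rather than those of the article's boom structure. The Monte Carlo estimate is compared against the closed-form answer available for this simple normal-normal model.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)
n = 1_000_000

# Hypothetical member: failure occurs when applied stress exceeds strength.
# Distribution parameters are illustrative, not taken from the article.
strength = rng.normal(300.0, 30.0, n)    # member strength, MPa
stress = rng.normal(200.0, 25.0, n)      # applied stress, MPa

p_fail_mc = np.mean(stress > strength)

# Elementary-mechanics check for the same model: stress - strength is
# normal with mean -100 and sd sqrt(30^2 + 25^2).
z = -100.0 / sqrt(30.0**2 + 25.0**2)
p_fail_theory = 0.5 * (1.0 + erf(z / sqrt(2.0)))
print(p_fail_mc, p_fail_theory)          # both ~0.005
```

The simulation's value is that it keeps working unchanged when the distributions or the limit-state function become too complicated for a closed form.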
The Monte Carlo Method. Popular Lectures in Mathematics.
ERIC Educational Resources Information Center
Sobol', I. M.
The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…
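The booklet's theme, solving a deterministic problem by simulating random quantities, is captured by the classic example of estimating π from random points in the unit square:

```python
import random

random.seed(0)

# Fraction of random points in the unit square that fall inside the
# quarter circle of radius 1 approaches pi/4.
n = 1_000_000
inside = sum(1 for _ in range(n)
             if random.random()**2 + random.random()**2 <= 1.0)
pi_estimate = 4.0 * inside / n
print(pi_estimate)   # ~3.14, with error shrinking as 1/sqrt(n)
```

The slow 1/sqrt(n) convergence is characteristic of the method, and the reason it shines on high-dimensional problems where deterministic quadrature is hopeless rather than on one-dimensional ones.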
How Monte Carlo heuristics aid to identify the physical processes of drug release kinetics.
Lecca, Paola
2018-01-01
We implement a Monte Carlo heuristic algorithm to model drug release from a solid dosage form. We show that with Monte Carlo simulations it is possible to identify and explain the causes of the unsatisfactory predictive power of current drug release models. It is well known that the power-law and exponential models, as well as those derived from or inspired by them, accurately reproduce only the first 60% of the release curve of a drug from a dosage form. In this study, using Monte Carlo simulation approaches, we show that these models fit quite accurately almost the entire release profile when the release kinetics is not governed by the coexistence of different physico-chemical mechanisms. We show that the accuracy of the traditional models is comparable with that of Monte Carlo heuristics when these heuristics approximate and oversimplify the phenomenology of drug release. This observation suggests developing and using novel Monte Carlo simulation heuristics able to describe the complexity of the release kinetics, and consequently to generate data more similar to those observed in real experiments. Implementing Monte Carlo simulation heuristics of the drug release phenomenology may be much more straightforward and efficient than hypothesizing and implementing complex mathematical models of the physical processes involved in drug release from scratch. Identifying and understanding through simulation heuristics which processes of this phenomenology reproduce the observed data, and then formalizing them in mathematics, may make it possible to avoid time-consuming, trial-and-error based regression procedures. Three bullet points highlight the customization of the procedure:
•An efficient heuristic based on Monte Carlo methods for simulating drug release from a solid dosage form is presented. It specifies the model of the physical process in a simple but accurate way through the formula of the Monte Carlo Micro Step (MCS) time interval.•Given the experimentally observed curve of drug release, we point out how Monte Carlo heuristics can be integrated into an evolutionary algorithmic approach to infer the mode of MCS best fitting the observed data, and thus the observed release kinetics.•The software implementing the method is written in R, the free language most widely used in the bioinformatics community.
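A toy version of such a heuristic, in the spirit described but not Lecca's implementation (geometry and parameters are assumptions): drug molecules random-walk through a one-dimensional matrix and are counted as released on reaching the exposed face. The cumulative release curve emerges from the simulated mechanism rather than from a fitted closed-form model.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D matrix of thickness L: sites 0..L-1, release surface to the left
# of site 0, reflecting back face at site L-1. All values illustrative.
L, n_drug, n_steps = 50, 2000, 20_000
pos = rng.integers(0, L, size=n_drug)       # initial drug positions
released = np.zeros(n_drug, dtype=bool)
release_curve = []

for t in range(n_steps):
    step = rng.integers(0, 2, size=n_drug) * 2 - 1   # -1 or +1
    new = np.minimum(pos + step, L - 1)              # reflecting back face
    released |= new < 0                              # crossed release surface
    pos = np.where(released, pos, new)               # freeze released drug
    release_curve.append(released.mean())

# Early times follow the diffusion-controlled sqrt(t) regime that the
# power-law models capture; late times saturate, where they fail.
print(release_curve[-1])
```

Richer mechanisms (erosion, swelling, coexisting processes) enter by changing the move rules and the time increment per step, which is exactly where the MCS time-interval formula mentioned above would plug in.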
Fixed forced detection for fast SPECT Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Cajgfinger, T.; Rit, S.; Létang, J. M.; Halty, A.; Sarrut, D.
2018-03-01
Monte-Carlo simulations of SPECT images are notoriously slow to converge due to the large ratio between the number of photons emitted and detected in the collimator. This work proposes a method to accelerate the simulations based on fixed forced detection (FFD) combined with an analytical response of the detector. FFD is based on a Monte-Carlo simulation but forces the detection of a photon in each detector pixel weighted by the probability of emission (or scattering) and transmission to this pixel. The method was evaluated with numerical phantoms and on patient images. We obtained differences with analog Monte Carlo lower than the statistical uncertainty. The overall computing time gain can reach up to five orders of magnitude. Source code and examples are available in the Gate V8.0 release.
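The variance-reduction idea can be sketched in one dimension; the geometry, attenuation coefficient, and pixel solid angle below are assumptions for illustration. An analog estimator wastes nearly every history, while the forced estimator scores a detection-probability weight from every history.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000
mu = 0.15                 # attenuation coefficient (1/cm), illustrative
f_pixel = 1e-3            # fraction of emission directions hitting one pixel

# Photons emitted at random depths inside a 10 cm phantom.
depth = rng.uniform(0.0, 10.0, size=n)
transmission = np.exp(-mu * depth)

# Analog estimator: a photon scores only if it is emitted toward the
# pixel AND survives attenuation -- almost every history scores zero.
hit = rng.random(n) < f_pixel
survive = rng.random(n) < transmission
analog = np.mean(hit & survive)

# Forced detection: every history scores its detection probability
# as a weight, so no history is wasted.
forced = np.mean(f_pixel * transmission)

print(analog, forced)     # same expectation; forced has far lower variance
```

Both estimators are unbiased for the same detected fraction; the orders-of-magnitude speedup reported in the abstract comes from the collapse in variance per history, compounded across all detector pixels.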
Monte Carlo simulation: Its status and future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murtha, J.A.
1997-04-01
Monte Carlo simulation is a statistics-based analysis tool that yields probability-vs.-value relationships for key parameters, including oil and gas reserves, capital exposure, and various economic yardsticks, such as net present value (NPV) and return on investment (ROI). Monte Carlo simulation is a part of risk analysis and is sometimes performed in conjunction with or as an alternative to decision tree analysis. The objectives are (1) to define Monte Carlo simulation in a more general context of risk and decision analysis; (2) to provide some specific applications, which can be interrelated; (3) to respond to some of the criticisms; (4) to offer some cautions about abuses of the method and recommend how to avoid the pitfalls; and (5) to predict what the future has in store.
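The probability-vs.-value idea behind Monte Carlo risk analysis can be sketched in a few lines of Python; the cash-flow model, distributions, and dollar figures below are invented purely for illustration and are not from the article:

```python
import random

def npv(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def simulate_npv(n_trials, seed=1):
    """Sample NPV under uncertain capital exposure and yearly revenue."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        capex = rng.uniform(80, 120)            # capital exposure, $MM (assumed)
        revenue = rng.normalvariate(40, 8)      # yearly revenue, $MM (assumed)
        flows = [-capex] + [revenue] * 10       # ten producing years
        results.append(npv(flows, 0.10))
    results.sort()
    return results

r = simulate_npv(10_000)
p10, p90 = r[1000], r[9000]   # percentiles summarizing probability vs. value
```

The sorted sample is exactly the probability-vs.-value relationship the abstract describes: any percentile of `r` answers "what NPV is exceeded with probability p".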
NASA Technical Reports Server (NTRS)
Gallis, Michael A.; LeBeau, Gerald J.; Boyles, Katie A.
2003-01-01
The Direct Simulation Monte Carlo method was used to provide 3-D simulations of the early entry phase of the Shuttle Orbiter. Undamaged and damaged scenarios were modeled to provide calibration points for engineering "bridging function" type of analysis. Currently the simulation technology (software and hardware) is mature enough to allow realistic simulations of three-dimensional vehicles.
Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.
Serebrinsky, Santiago A
2011-03-01
We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
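A minimal Python sketch of one rejection-free (BKL-type) kinetic Monte Carlo step, with the physical time advanced by the exponentially distributed increment dt = -ln(u)/R_total rather than counted in Monte Carlo steps; the two event rates are arbitrary illustrative values, not from the paper:

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free kinetic Monte Carlo step: choose an event with
    probability proportional to its rate, and return the chosen index together
    with the physical time increment dt = -ln(u) / R_total."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / total
    return chosen, dt

rng = random.Random(0)
t = 0.0
counts = [0, 0]
for _ in range(5000):
    i, dt = kmc_step([3.0, 1.0], rng)  # two competing events, rates 3 and 1
    counts[i] += 1
    t += dt                            # accumulated *physical* time
```

The faster event fires about three times as often, and `t` approaches n_steps / R_total on average, which is the physical time scale the paper establishes for this class of algorithms.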
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergmann, Ryan M.; Rowland, Kelly L.
2017-04-12
WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed at UC Berkeley to efficiently execute on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in either criticality or fixed source mode, but fixed source mode is currently not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous energy, Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly being built with GPU coprocessor cards in their nodes to increase their computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.
Salis, Howard; Kaznessis, Yiannis N
2005-12-01
Stochastic chemical kinetics more accurately describes the dynamics of "small" chemical systems, such as biological cells. Many real systems contain dynamical stiffness, which causes the exact stochastic simulation algorithm or other kinetic Monte Carlo methods to spend the majority of their time executing frequently occurring reaction events. Previous methods have successfully applied a type of probabilistic steady-state approximation by deriving an evolution equation, such as the chemical master equation, for the relaxed fast dynamics and using the solution of that equation to determine the slow dynamics. However, because the solution of the chemical master equation is limited to small, carefully selected, or linear reaction networks, an alternate equation-free method would be highly useful. We present a probabilistic steady-state approximation that separates the time scales of an arbitrary reaction network, detects the convergence of a marginal distribution to a quasi-steady-state, directly samples the underlying distribution, and uses those samples to accurately predict the state of the system, including the effects of the slow dynamics, at future times. The numerical method produces an accurate solution of both the fast and slow reaction dynamics while, for stiff systems, reducing the computational time by orders of magnitude. The developed theory makes no approximations on the shape or form of the underlying steady-state distribution and only assumes that it is ergodic. We demonstrate the accuracy and efficiency of the method using multiple interesting examples, including a highly nonlinear protein-protein interaction network. The developed theory may be applied to any type of kinetic Monte Carlo simulation to more efficiently simulate dynamically stiff systems, including existing exact, approximate, or hybrid stochastic simulation techniques.
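For context, the exact stochastic simulation algorithm that the above steady-state approximation accelerates can be sketched as a minimal Gillespie SSA; the birth-death network and rate constants here are illustrative assumptions, not the paper's examples:

```python
import math
import random

def ssa(x0, k_birth, k_death, t_end, rng):
    """Gillespie's exact SSA for a birth-death process with propensities
    a1 = k_birth (constant production) and a2 = k_death * x (degradation)."""
    t, x = 0.0, x0
    while t < t_end:
        a1, a2 = k_birth, k_death * x
        a0 = a1 + a2
        if a0 == 0:
            break
        t += -math.log(rng.random()) / a0     # exponential time to next event
        if rng.random() * a0 < a1:
            x += 1                            # birth reaction fires
        else:
            x -= 1                            # death reaction fires
    return x

rng = random.Random(42)
samples = [ssa(0, 10.0, 1.0, 50.0, rng) for _ in range(200)]
mean = sum(samples) / len(samples)   # steady state near k_birth / k_death = 10
```

For stiff networks, most of these iterations are spent on the fast reactions; the paper's method instead samples the fast variables' quasi-steady-state distribution directly and only simulates the slow events.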
Building proteins from C alpha coordinates using the dihedral probability grid Monte Carlo method.
Mathiowetz, A. M.; Goddard, W. A.
1995-01-01
Dihedral probability grid Monte Carlo (DPG-MC) is a general-purpose method of conformational sampling that can be applied to many problems in peptide and protein modeling. Here we present the DPG-MC method and apply it to predicting complete protein structures from C alpha coordinates. This is useful in such endeavors as homology modeling, protein structure prediction from lattice simulations, or fitting protein structures to X-ray crystallographic data. It also serves as an example of how DPG-MC can be applied to systems with geometric constraints. The conformational propensities for individual residues are used to guide conformational searches as the protein is built from the amino-terminus to the carboxyl-terminus. Results for a number of proteins show that both the backbone and side chain can be accurately modeled using DPG-MC. Backbone atoms are generally predicted with RMS errors of about 0.5 Å (compared to X-ray crystal structure coordinates) and all atoms are predicted to an RMS error of 1.7 Å or better. PMID:7549885
NASA Astrophysics Data System (ADS)
Vasallo, B. G.; González, T.; Talbo, V.; Lechaux, Y.; Wichmann, N.; Bollaert, S.; Mateos, J.
2018-01-01
III-V Impact-ionization (II) metal-oxide-semiconductor FETs (I-MOSFETs) and tunnel FETs (TFETs) are being explored as promising devices for low-power digital applications. To assist the development of these devices from the physical point of view, a Monte Carlo (MC) model which includes impact ionization processes and band-to-band tunneling is presented. The MC simulator reproduces the I-V characteristics of experimental ungated In0.53Ga0.47As 100 nm PIN diodes, in which tunneling emerges for lower applied voltages than impact ionization events, thus being appropriate for TFETs. When the structure is enlarged up to 200 nm, the ON-state is achieved by means of impact ionization processes; however, the necessary applied voltage is higher, with the consequent drawback for low-power applications. In InAs PIN ungated structures, the onset of both impact ionization processes and band-to-band tunneling takes place for similar applied voltages, lower than 1 V; thus they are suitable for the design of low-power I-MOSFETs.
Statistical Analysis of the Uncertainty in Pre-Flight Aerodynamic Database of a Hypersonic Vehicle
NASA Astrophysics Data System (ADS)
Huh, Lynn
The objective of the present research was to develop a new method to derive the aerodynamic coefficients and the associated uncertainties for flight vehicles via post-flight inertial navigation analysis using data from the inertial measurement unit. Statistical estimates of vehicle state and aerodynamic coefficients are derived using Monte Carlo simulation. Trajectory reconstruction using the inertial navigation system (INS) is a simple and widely used method. However, deriving realistic uncertainties in the reconstructed state and any associated parameters is not so straightforward. Extended Kalman filters, batch minimum variance estimation, and other approaches have been used. However, these methods generally depend on assumed physical models, assume statistical distributions (usually Gaussian), or have convergence issues for non-linear problems. The approach here assumes no physical models, is applicable to any statistical distribution, and does not have any convergence issues. The new approach obtains the statistics directly from a sufficient number of Monte Carlo samples using only the generally well known gyro and accelerometer specifications, and can be applied to systems of non-linear form and non-Gaussian distribution. When redundant data are available, the set of Monte Carlo simulations is constrained to satisfy the redundant data within the uncertainties specified for the additional data. The proposed method was applied to validate the uncertainty in the pre-flight aerodynamic database of the X-43A Hyper-X research vehicle. In addition to gyro and acceleration data, the actual flight data include redundant measurements of position and velocity from the global positioning system (GPS). Criteria derived from the blended GPS and INS accuracy were used to select valid trajectories for statistical analysis.
The aerodynamic coefficients were derived from the selected trajectories either by a direct extraction method based on the equations of dynamics, or by inquiry of the pre-flight aerodynamic database. After applying the proposed method to the X-43A Hyper-X research vehicle, it was found that 1) there were consistent differences between the aerodynamic coefficients from the pre-flight aerodynamic database and the post-flight analysis, 2) the pre-flight estimates of the pitching moment coefficients were significantly different from the post-flight analysis, 3) the type of distribution of the states from the Monte Carlo simulation was affected by that of the perturbation parameters, 4) the uncertainties in the pre-flight model were overestimated, 5) the range where the aerodynamic coefficients from the pre-flight aerodynamic database and post-flight analysis are in closest agreement is between Mach *.* and *.*, and more data points may be needed between Mach * and ** in the pre-flight aerodynamic database, 6) the selection criterion for valid trajectories from the Monte Carlo simulations was mostly driven by the horizontal velocity error, 7) the selection criterion must be based on a reasonable model to ensure the validity of the statistics from the proposed method, and 8) the results from the proposed method applied to two different flights with identical geometry and similar flight profiles were consistent.
Wang, Chao; Guo, Xiao-Jing; Xu, Jin-Fang; Wu, Cheng; Sun, Ya-Lin; Ye, Xiao-Fei; Qian, Wei; Ma, Xiu-Qiang; Du, Wen-Min; He, Jia
2012-01-01
The detection of signals of adverse drug events (ADEs) has increased because of the use of data mining algorithms in spontaneous reporting systems (SRSs). However, different data mining algorithms have different traits and conditions for application. The objective of our study was to explore the application of association rule (AR) mining in ADE signal detection and to compare its performance with that of other algorithms. Monte Carlo simulation was applied to generate drug-ADE reports randomly according to the characteristics of SRS datasets. One thousand simulated datasets were mined by AR and the other algorithms. On average, 108,337 reports were generated by the Monte Carlo simulation. Based on the predefined criterion that 10% of the drug-ADE combinations were true signals, with RR equal to 10, 4.9, 1.5, and 1.2, AR detected, on average, 284 suspected associations with a minimum support of 3 and a minimum lift of 1.2. The area under the receiver operating characteristic (ROC) curve of the AR was 0.788, which was equivalent to that shown for other algorithms. Additionally, AR was applied to reports submitted to the Shanghai SRS in 2009. Five hundred seventy combinations were detected using AR from 24,297 SRS reports, and they were compared with recognized ADEs identified by clinical experts and various other sources. AR appears to be an effective method for ADE signal detection, both in simulated and real SRS datasets. The limitations of this method exposed in our study, i.e., non-uniform threshold settings and redundant rules, require further research.
Liu, Y; Zheng, Y
2012-06-01
Accurate determination of the proton dosimetric effect of tissue heterogeneity is critical in proton therapy. Proton beams have finite range, and consequently tissue heterogeneity plays a more critical role in proton therapy. The purpose of this study is to investigate the tissue heterogeneity effect in proton dosimetry based on anatomical-based Monte Carlo simulation using animal tissues. Animal tissues including a pig head and beef bulk were used in this study. Both pig head and beef were scanned using a GE CT scanner with 1.25 mm slice thickness. A treatment plan was created using the CMS XiO treatment planning system (TPS) with a single proton spread-out Bragg peak (SOBP) beam. Radiochromic films were placed at the distal falloff region. Image guidance was used to align the phantom before proton beams were delivered according to the treatment plan. The same two CT sets were converted to a Monte Carlo simulation model. The Monte Carlo simulated dose calculations with/without tissue composition were compared to TPS calculations and measurements. Based on the preliminary comparison, at the center of the SOBP plane, the Monte Carlo simulated dose without tissue composition agreed generally well with the TPS calculation. In the distal falloff region, the dose difference was large, and about a 2 mm isodose line shift was observed with the consideration of tissue composition. The detailed comparison of dose distributions between Monte Carlo simulation, TPS calculations, and measurements is underway. Accurate proton dose calculations are challenging in proton treatment planning for heterogeneous tissues. Tissue heterogeneity and tissue composition may lead to isodose line shifts of up to a few millimeters in the distal falloff region. By simulating detailed particle transport and energy deposition, Monte Carlo simulations provide a verification method for proton dose calculation where inhomogeneous tissues are present. © 2012 American Association of Physicists in Medicine.
Self-learning Monte Carlo method
Liu, Junwei; Qi, Yang; Meng, Zi Yang; ...
2017-01-04
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. Lastly, we demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10–20 times speedup.
Patti, Alessandro; Cuetos, Alejandro
2012-07-01
We report on the diffusion of purely repulsive and freely rotating colloidal rods in the isotropic, nematic, and smectic liquid crystal phases to probe the agreement between Brownian and Monte Carlo dynamics under the most general conditions. By properly rescaling the Monte Carlo time step, which is related to each elementary move via the corresponding self-diffusion coefficient, by the acceptance rate of simultaneous trial displacements and rotations, we demonstrate the existence of a unique Monte Carlo time scale that allows for a direct comparison between Monte Carlo and Brownian dynamics simulations. To estimate the validity of our theoretical approach, we compare the mean square displacement of rods, their orientational autocorrelation function, and the self-intermediate scattering function, as obtained from Brownian dynamics and Monte Carlo simulations. The agreement between the results of these two approaches, even under the condition of heterogeneous dynamics generally observed in liquid crystalline phases, is excellent.
A new cubic phantom for PET/CT dosimetry: Experimental and Monte Carlo characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belinato, Walmir; Silva, Rogerio M.V.; Souza, Divanizia N.
In recent years, positron emission tomography (PET) associated with multidetector computed tomography (MDCT) has become a diagnostic technique widely disseminated to evaluate various malignant tumors and other diseases. However, during PET/CT examinations, the doses of ionizing radiation experienced by the internal organs of patients may be substantial. To study the doses involved in PET/CT procedures, a new cubic phantom of overlapping acrylic plates was developed and characterized. This phantom has a deposit for the placement of the fluorine-18 fluoro-2-deoxy-D-glucose (¹⁸F-FDG) solution. There are also small holes near the faces for the insertion of optically stimulated luminescence dosimeters (OSLD). The holes for OSLD are positioned at different distances from the ¹⁸F-FDG deposit. The experimental results were obtained in two PET/CT devices operating with different parameters. Differences in the absorbed doses were observed in OSLD measurements due to the non-orthogonal positioning of the detectors inside the phantom. This phantom was also evaluated using Monte Carlo simulations, with the MCNPX code. The phantom and the geometrical characteristics of the equipment were carefully modeled in the MCNPX code, in order to develop a new methodology for comparison of experimental and simulated results, as well as to allow the characterization of PET/CT equipment in Monte Carlo simulations. All results showed good agreement, proving that this new phantom may be applied for these experiments. (authors)
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output.
The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
Contrast cancellation technique applied to digital x-ray imaging using silicon strip detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avila, C.; Lopez, J.; Sanabria, J. C.
2005-12-15
Dual-energy mammographic imaging experimental tests have been performed using a compact dichromatic imaging system based on a conventional x-ray tube, a mosaic crystal, and a 384-strip silicon detector equipped with full-custom electronics with single photon counting capability. For simulating mammary tissue, a three-component phantom, made of Plexiglass, polyethylene, and water, has been used. Images have been collected with three different pairs of x-ray energies: 16-32 keV, 18-36 keV, and 20-40 keV. A Monte Carlo simulation of the experiment has also been carried out using the MCNP-4C transport code. The Alvarez-Macovski algorithm has been applied both to experimental and simulated data to remove the contrast between two of the phantom materials so as to enhance the visibility of the third one.
GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA
NASA Astrophysics Data System (ADS)
Spiechowicz, J.; Kostur, M.; Machura, L.
2015-06-01
This work presents an updated and extended guide on methods of properly accelerating the Monte Carlo integration of stochastic differential equations with the commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of scientific computing on graphics cards and demonstrate them with two models of the well known phenomenon of noise-induced transport of Brownian motors in periodic structures. As a source of fluctuations in the considered systems we selected the three most commonly occurring noises: the Gaussian white noise, the white Poissonian noise, and the dichotomous process also known as a random telegraph signal. A detailed discussion of various aspects of the applied numerical schemes is also presented. The measured speedup can reach an astonishing order of about 3000 when compared to a typical CPU. This number significantly expands the range of problems solvable by use of stochastic simulations, allowing even interactive research in some cases.
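A CPU-side Python sketch of the underlying numerical scheme, Euler-Maruyama integration of an overdamped Brownian motor in a tilted periodic potential driven by Gaussian white noise; the potential, tilt, and noise strength are illustrative assumptions (the paper's actual implementation is CUDA kernels running many trajectories in parallel):

```python
import math
import random

def euler_maruyama(n_steps, dt, force, D, rng, x0=0.0):
    """Euler-Maruyama integration of the overdamped Langevin equation
    dx = force(x) dt + sqrt(2 D dt) * xi, with xi ~ N(0, 1)."""
    x = x0
    for _ in range(n_steps):
        x += force(x) * dt + math.sqrt(2 * D * dt) * rng.normalvariate(0.0, 1.0)
    return x

# Tilted periodic potential: F(x) = -sin(x) + bias (hypothetical parameters)
rng = random.Random(7)
finals = [euler_maruyama(2000, 0.01, lambda x: -math.sin(x) + 0.5, D=0.1, rng=rng)
          for _ in range(100)]
drift = sum(finals) / len(finals)   # net transport induced by the tilt
```

On a GPU each trajectory maps naturally to one thread, which is where the reported three-orders-of-magnitude speedup over a single CPU core comes from.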
NASA Astrophysics Data System (ADS)
Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher
2017-11-01
Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.
Image-guided spatial localization of heterogeneous compartments for magnetic resonance
An, Li; Shen, Jun
2015-01-01
Purpose: Image-guided SPectral Localization Achieved by Sensitivity Heterogeneity (SPLASH) allows rapid measurement of signals from irregularly shaped anatomical compartments without using phase encoding gradients. Here, the authors propose a novel method to address the issue of heterogeneous signal distribution within the localized compartments. Methods: Each compartment was subdivided into multiple subcompartments and their spectra were solved by Tikhonov regularization to enforce smoothness within each compartment. The spectrum of a given compartment was generated by combining the spectra of the components of that compartment. The proposed method was first tested using Monte Carlo simulations and then applied to reconstructing in vivo spectra from irregularly shaped ischemic stroke and normal tissue compartments. Results: Monte Carlo simulations demonstrate that the proposed regularized SPLASH method significantly reduces localization and metabolite quantification errors. In vivo results show that the intracompartment regularization results in ∼40% reduction of error in metabolite quantification. Conclusions: The proposed method significantly reduces localization errors and metabolite quantification errors caused by intracompartment heterogeneous signal distribution. PMID:26328977
Monte Carlo simulation of the transmission of measles: Beyond the mass action principle
NASA Astrophysics Data System (ADS)
Zekri, Nouredine; Clerc, Jean Pierre
2002-04-01
We present a Monte Carlo simulation of the transmission of measles within a population sample during its growing and equilibrium states by introducing two different vaccination schedules of one and two doses. We study the effects of the contact rate per unit time ξ as well as the initial conditions on the persistence of the disease. We found a weak effect of the initial conditions, while the disease persists when ξ lies in the range 1/L-10/L (L being the latent period). Further comparison with existing data, prediction of future epidemics, and other estimations of the vaccination efficiency are provided. Finally, we compare our approach to the models using the mass action principle in the first and another epidemic region, and find the incidence independent of the number of susceptibles after the epidemic peak, while it fluctuates strongly in the growing region. This method can easily be applied to other human, animal, and plant diseases and can include more complicated parameters.
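A minimal stochastic transmission model in the same spirit, though far simpler than the paper's individual-based simulation, is the Reed-Frost chain binomial, sketched here in Python with invented parameters:

```python
import random

def reed_frost(s0, i0, p, n_steps, rng):
    """Chain-binomial (Reed-Frost) epidemic: in each generation, every
    susceptible escapes infection with probability (1 - p) ** I,
    independently; new infectives replace the old ones."""
    s, i = s0, i0
    incidence = []
    for _ in range(n_steps):
        escape = (1 - p) ** i
        new_i = sum(1 for _ in range(s) if rng.random() > escape)
        s -= new_i
        i = new_i
        incidence.append(new_i)
        if i == 0:          # stochastic extinction ends the outbreak
            break
    return incidence

rng = random.Random(3)
curve = reed_frost(s0=500, i0=1, p=0.005, n_steps=100, rng=rng)
total = sum(curve)          # final size of the outbreak
```

Unlike deterministic mass-action models, repeated runs of such a chain give a distribution of outbreak sizes, including extinctions, which is the kind of fluctuation behavior the abstract contrasts with the mass action principle.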
Orientation-dependent integral equation theory for a two-dimensional model of water
NASA Astrophysics Data System (ADS)
Urbič, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Dill, K. A.
2003-03-01
We develop an integral equation theory that applies to strongly associating orientation-dependent liquids, such as water. In an earlier treatment, we developed a Wertheim integral equation theory (IET) that we tested against NPT Monte Carlo simulations of the two-dimensional Mercedes Benz model of water. The main approximation in the earlier calculation was an orientational averaging in the multidensity Ornstein-Zernike equation. Here we improve the theory by explicit introduction of an orientation dependence in the IET, based upon expanding the two-particle angular correlation function in orthogonal basis functions. We find that the new orientation-dependent IET (ODIET) yields a considerable improvement of the predicted structure of water, when compared to the Monte Carlo simulations. In particular, ODIET predicts more long-range order than the original IET, with hexagonal symmetry, as expected for the hydrogen bonded ice in this model. The new theoretical approximation still errs in some subtle properties; for example, it does not predict liquid water's density maximum with temperature or the negative thermal expansion coefficient.
A two-dimensional model of water: Theory and computer simulations
NASA Astrophysics Data System (ADS)
Urbič, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Southall, N. T.; Dill, K. A.
2000-02-01
We develop an analytical theory for a simple model of liquid water. We apply Wertheim's thermodynamic perturbation theory (TPT) and integral equation theory (IET) for associative liquids to the MB model, which is among the simplest models of water. Water molecules are modeled as 2-dimensional Lennard-Jones disks with three hydrogen bonding arms arranged symmetrically, resembling the Mercedes-Benz (MB) logo. The MB model qualitatively predicts both the anomalous properties of pure water and the anomalous solvation thermodynamics of nonpolar molecules. IET is based on the orientationally averaged version of the Ornstein-Zernike equation. This is one of the main approximations in the present work. IET correctly predicts the pair correlation function of the model water at high temperatures. Both TPT and IET are in semi-quantitative agreement with the Monte Carlo values of the molar volume, isothermal compressibility, thermal expansion coefficient, and heat capacity. A major advantage of these theories is that they require orders of magnitude less computer time than the Monte Carlo simulations.
Carlacci, Louis; Millard, Charles B; Olson, Mark A
2004-10-01
The X-ray crystal structure of the reaction product of acetylcholinesterase (AChE) with the inhibitor diisopropylphosphorofluoridate (DFP) showed significant structural displacement in a loop segment of residues 287-290. To understand this conformational selection, a Monte Carlo (MC) simulation study was performed of the energy landscape for the loop segment. A computational strategy was applied by using a combined simulated annealing and room temperature Metropolis sampling approach with solvent polarization modeled by a generalized Born (GB) approximation. Results from thermal annealing reveal a landscape topology of broader basin opening and greater distribution of energies for the displaced loop conformation, while the ensemble average of conformations at 298 K favored a shift in populations toward the native by a free-energy difference in good agreement with the estimated experimental value. Residue motions along a reaction profile of loop conformational reorganization are proposed where Arg-289 is critical in determining electrostatic effects of solvent interaction versus Coulombic charging.
NASA Astrophysics Data System (ADS)
He, Ling-Yun; Qian, Wen-Bin
2012-07-01
A correct or precise estimation of the Hurst exponent is one of the fundamentally important problems in the financial economics literature. There are three widely used tools to estimate the Hurst exponent: the canonical rescaled range (R/S), the variance rescaled statistic (V/S), and the modified rescaled range (Modified R/S). To clarify their performance, we compare them by Monte Carlo simulations; we generate many time series of a fractal Brownian motion, of a Weierstrass-Mandelbrot cosine fractal function, and of a fractionally integrated process, whose theoretical Hurst exponents are known, and compare the Hurst exponents estimated by the three methods. To better understand their pragmatic performance, we further apply all of these methods empirically in real-world applications. Our results imply that it is not appropriate to conclude simply that one method is better than the others: V/S performs better when the analyzed market is anti-persistent, while R/S seems to be a reliable tool in a persistent market.
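A compact Python sketch of the canonical R/S estimator discussed above, applied here to Gaussian white noise (whose theoretical Hurst exponent is 0.5; the window sizes are arbitrary choices, and the classical statistic has a known upward small-sample bias):

```python
import math
import random

def rescaled_range(series):
    """Classical R/S statistic of one window: range of the cumulative
    mean-adjusted sum divided by the (population) standard deviation."""
    n = len(series)
    mean = sum(series) / n
    dev, cum, cums = 0.0, 0.0, []
    for x in series:
        cum += x - mean
        cums.append(cum)
        dev += (x - mean) ** 2
    s = math.sqrt(dev / n)
    return (max(cums) - min(cums)) / s

def hurst_rs(series, window_sizes):
    """Estimate H as the slope of log(mean R/S) vs log(window size)."""
    xs, ys = [], []
    for n in window_sizes:
        rs_vals = [rescaled_range(series[i:i + n])
                   for i in range(0, len(series) - n + 1, n)]
        xs.append(math.log(n))
        ys.append(math.log(sum(rs_vals) / len(rs_vals)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

rng = random.Random(0)
white = [rng.normalvariate(0, 1) for _ in range(4096)]
h = hurst_rs(white, [16, 32, 64, 128, 256])   # expect H near 0.5
```

V/S and Modified R/S follow the same log-log regression template but replace the range-over-deviation statistic, which is why their relative merits depend on whether the series is persistent or anti-persistent.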
Monte Carlo simulations for 20 MV X-ray spectrum reconstruction of a linear induction accelerator
NASA Astrophysics Data System (ADS)
Wang, Yi; Li, Qin; Jiang, Xiao-Guo
2012-09-01
To study the spectrum reconstruction of the 20 MV X-ray generated by the Dragon-I linear induction accelerator, the Monte Carlo method is applied to simulate the attenuation of the X-ray in attenuators of different thicknesses and thus provide the transmission data. As is well known, spectrum estimation from transmission data is an ill-conditioned problem. A method based on iterative perturbations is employed to derive the X-ray spectra, with initial guesses used to start the process. This algorithm takes into account not only the minimization of the differences between the measured and the calculated transmissions but also the smoothness of the spectrum function. In this work, various filter materials are used as attenuators, and the conditions for an accurate and robust solution of the X-ray spectrum calculation are demonstrated. The influence of scattered photons within different intervals of emergence angle on the X-ray spectrum reconstruction is also analyzed.
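The structure of transmission-based unfolding can be shown with a toy problem. The sketch below is not the authors' iterative-perturbation algorithm: it uses a simple multiplicative (MLEM-style) update, a hypothetical three-bin spectrum, and an idealized exponential forward model, just to illustrate how transmission measurements through attenuators of increasing thickness constrain the spectrum.

```python
import numpy as np

# Hypothetical 3-bin spectrum problem (illustration only).
mu = np.array([0.2, 0.1, 0.05])          # attenuation coeff. per energy bin (1/cm)
t = np.linspace(0.0, 30.0, 12)           # attenuator thicknesses (cm)
A = np.exp(-np.outer(t, mu))             # forward model: transmission T = A @ s

s_true = np.array([0.2, 0.5, 0.3])       # "unknown" normalized spectrum
T_meas = A @ s_true                      # noiseless simulated transmissions

# Multiplicative (MLEM-style) iterative update from a flat initial guess.
s = np.full(3, 1.0 / 3.0)
for _ in range(5000):
    s = s * (A.T @ (T_meas / (A @ s))) / A.sum(axis=0)
```

In the real problem the system is far more ill-conditioned (many energy bins, noisy data), which is why the paper's algorithm also penalizes non-smooth spectra.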
Efficient 3D kinetic Monte Carlo method for modeling of molecular structure and dynamics.
Panshenskov, Mikhail; Solov'yov, Ilia A; Solov'yov, Andrey V
2014-06-30
Self-assembly of molecular systems is an important and general problem that intertwines physics, chemistry, biology, and material sciences. Through understanding of the physical principles of self-organization, it often becomes feasible to control the process and to obtain complex structures with tailored properties, for example, bacteria colonies of cells or nanodevices with desired properties. Theoretical studies and simulations provide an important tool for unraveling the principles of self-organization and, therefore, have recently gained an increasing interest. The present article features an extension of a popular code MBN EXPLORER (MesoBioNano Explorer) aiming to provide a universal approach to study self-assembly phenomena in biology and nanoscience. In particular, this extension involves a highly parallelized module of MBN EXPLORER that allows simulating stochastic processes using the kinetic Monte Carlo approach in a three-dimensional space. We describe the computational side of the developed code, discuss its efficiency, and apply it for studying an exemplary system.
Mondal, Nagendra Nath
2009-01-01
This study presents Monte Carlo simulation (MCS) results for the detection efficiencies, spatial resolutions, and resolving powers of time-of-flight (TOF) PET detector systems. Cerium-activated lutetium oxyorthosilicate (Lu2SiO5:Ce, in short LSO), barium fluoride (BaF2), and BriLanCe 380 (cerium-doped lanthanum tri-bromide, in short LaBr3) scintillation crystals are studied in view of their good time and energy resolutions and short decay times. The results of MCS based on GEANT show that the spatial resolution, detection efficiency, and resolving power of LSO are better than those of BaF2 and LaBr3, although it possesses inferior time and energy resolutions. Instead of the conventional position reconstruction method, the image reconstruction method established in previous work is applied to produce the images. Validation is an important step in ensuring that this imaging method fulfills its purposes, and is carried out here by reconstructing images of two tumors in a brain phantom.
Kinetic Monte Carlo and cellular particle dynamics simulations of multicellular systems
NASA Astrophysics Data System (ADS)
Flenner, Elijah; Janosi, Lorant; Barz, Bogdan; Neagu, Adrian; Forgacs, Gabor; Kosztin, Ioan
2012-03-01
Computer modeling of multicellular systems has been a valuable tool for interpreting and guiding in vitro experiments relevant to embryonic morphogenesis, tumor growth, angiogenesis and, lately, structure formation following the printing of cell aggregates as bioink particles. Here we formulate two computer simulation methods: (1) a kinetic Monte Carlo (KMC) and (2) a cellular particle dynamics (CPD) method, which are capable of describing and predicting the shape evolution in time of three-dimensional multicellular systems during their biomechanical relaxation. Our work is motivated by the need of developing quantitative methods for optimizing postprinting structure formation in bioprinting-assisted tissue engineering. The KMC and CPD model parameters are determined and calibrated by using an original computational-theoretical-experimental framework applied to the fusion of two spherical cell aggregates. The two methods are used to predict the (1) formation of a toroidal structure through fusion of spherical aggregates and (2) cell sorting within an aggregate formed by two types of cells with different adhesivities.
Critical Analysis of Dual-Probe Heat-Pulse Technique Applied to Measuring Thermal Diffusivity
NASA Astrophysics Data System (ADS)
Bovesecchi, G.; Coppa, P.; Corasaniti, S.; Potenza, M.
2018-07-01
The paper presents an analysis of the experimental parameters involved in the application of the dual-probe heat-pulse technique, followed by a critical review of methods for processing thermal response data (e.g., maximum detection and nonlinear least-squares regression) and the obtainable uncertainty. Glycerol was selected as the test liquid, and its thermal diffusivity was evaluated over the temperature range from -20 °C to 60 °C. In addition, Monte Carlo simulation was used to assess the uncertainty propagation for maximum detection. It was concluded that the maximum-detection approach to processing thermal response data gives the results closest to the reference data, since nonlinear regression results are affected by larger uncertainties due to partial correlation between the evaluated parameters. Moreover, the interpolation of temperature data with a polynomial to find the maximum leads to a systematic difference between measured and reference data, as evidenced by the Monte Carlo simulations; with its correction, this systematic error can be reduced to a negligible value, about 0.8 %.
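The Monte Carlo propagation step can be sketched as follows. We assume the idealized instantaneous line-source relation for the dual-probe method, alpha = r^2 / (4 t_m), neglecting the finite pulse-duration correction, and use purely illustrative values for the probe spacing and peak-time uncertainties (not the paper's measured figures).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Hypothetical measurement distributions (illustration only):
r  = rng.normal(6.0e-3, 0.05e-3, N)   # probe spacing: 6 mm +/- 0.05 mm
tm = rng.normal(25.0, 0.5, N)         # time of temperature maximum: 25 s +/- 0.5 s

# Instantaneous line-source relation, pulse width neglected.
alpha = r**2 / (4.0 * tm)             # thermal diffusivity samples (m^2/s)

mean = alpha.mean()
rel_unc = alpha.std(ddof=1) / mean    # propagated relative standard uncertainty
```

The sampled relative uncertainty agrees with first-order propagation, sqrt((2*sigma_r/r)^2 + (sigma_t/t_m)^2), about 2.6 % for these inputs; the same machinery exposes the polynomial-interpolation bias once that step is included in each trial.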
Radiotherapy Monte Carlo simulation using cloud computing technology.
Poole, C M; Cornelius, I; Trapp, J V; Langton, C M
2012-12-01
Cloud computing allows vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate, as a proof of principle, the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware.
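The cost observation follows from per-started-hour billing: with perfect 1/n scaling, each of n machines runs for ceil(T/n) billed hours, so no partial hours are wasted exactly when n divides the total single-machine time T. A minimal sketch of that billing model (our assumption for illustration, not a formula from the paper):

```python
import math

def relative_cost(total_hours, n):
    """Billed instance-hours relative to one machine, assuming perfect
    1/n scaling and billing per started instance-hour."""
    return n * math.ceil(total_hours / n) / total_hours

T = 12  # hypothetical single-machine simulation time, in hours
costs = {n: relative_cost(T, n) for n in range(1, 17)}
# Divisors of 12 (1, 2, 3, 4, 6, 12) incur no wasted partial hours;
# other machine counts pay for idle fractions of the final hour.
```

For example, with T = 12 and n = 5, each machine is billed 3 hours for 2.4 hours of work, a 25 % cost overhead, while n = 6 finishes in exactly 2 billed hours at no overhead.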
Multiple-time-stepping generalized hybrid Monte Carlo methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escribano, Bruno, E-mail: bescribano@bcamath.org; Akhmatskaya, Elena; IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods; the use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
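For orientation, plain hybrid Monte Carlo, the baseline these methods extend, combines a leapfrog trajectory with a Metropolis accept/reject test on the energy error. Below is a minimal single-time-scale sketch on a 1D standard Gaussian; it omits everything the paper adds (MTS force splitting, mollification, shadow Hamiltonians, partial momentum refresh) and is only meant to show the basic scheme.

```python
import numpy as np

def hmc_gaussian(n_samples, eps=0.2, n_leap=10, seed=0):
    """Minimal HMC sampling N(0,1): U(x) = x^2/2, grad U(x) = x."""
    u = lambda x: 0.5 * x * x
    grad_u = lambda x: x
    rng = np.random.default_rng(seed)
    x, out = 0.0, []
    for _ in range(n_samples):
        p = rng.standard_normal()        # full momentum refresh
        x_new, p_new = x, p
        # Leapfrog integration of the Hamiltonian dynamics.
        p_new -= 0.5 * eps * grad_u(x_new)
        for step in range(n_leap):
            x_new += eps * p_new
            if step < n_leap - 1:
                p_new -= eps * grad_u(x_new)
        p_new -= 0.5 * eps * grad_u(x_new)
        # Metropolis test on the total-energy change of the trajectory.
        dh = (u(x_new) + 0.5 * p_new**2) - (u(x) + 0.5 * p**2)
        if rng.random() < np.exp(-dh):
            x = x_new
        out.append(x)
    return np.array(out)

samples = hmc_gaussian(20_000)
```

GSHMC's shadow-Hamiltonian filter replaces dh above with the change in a modified Hamiltonian that the leapfrog integrator conserves more accurately, which is what raises the acceptance rate at large step sizes.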
Gray: a ray tracing-based Monte Carlo simulator for PET.
Freese, David L; Olcott, Peter D; Buss, Samuel R; Levin, Craig S
2018-05-21
Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a [Formula: see text] speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within [Formula: see text]% when accounting for differences in peak NECR. We also estimate the peak NECR to be [Formula: see text] kcps, or within [Formula: see text]% of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
Wong, Un-Hong; Wu, Yunzhao; Wong, Hon-Cheng; Liang, Yanyan; Tang, Zesheng
2014-01-01
In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. Existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the data measured by an Interference Imaging spectrometer (IIM) on an orbiter are affected not only by the composition of minerals but also by environmental factors, and these factors cannot be well addressed by a single model alone. Our method uses Monte Carlo ray tracing to simulate large-scale effects, such as reflection from the topography of the lunar soil, and Hapke's model to calculate the reflection intensity of the internal scattering effects of lunar soil particles. Both the large-scale and microscale effects are therefore considered, providing a more accurate model of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and the Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data to remove the influence of lunar topography on the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface.
Analytical Applications of Monte Carlo Techniques.
ERIC Educational Resources Information Center
Guell, Oscar A.; Holcombe, James A.
1990-01-01
Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
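A one-screen example of the "integration" application mentioned above: estimating the definite integral of exp(-x^2) on [0, 1] (about 0.7468) as the sample mean of the integrand at uniform random points, together with its Monte Carlo standard error. This is a generic classroom illustration, not code from the cited article.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
x = rng.random(n)                      # uniform samples on [0, 1]
f = np.exp(-x**2)                      # integrand evaluated at the samples

estimate = f.mean()                    # E[f(U)] equals the integral over [0, 1]
std_err = f.std(ddof=1) / np.sqrt(n)   # Monte Carlo standard error ~ 1/sqrt(n)
```

The 1/sqrt(n) error decay is what makes the same recipe attractive for the high-dimensional sampling and ensemble problems the article surveys, where deterministic quadrature is impractical.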
Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition
Fraley, Chris; Percival, Daniel
2014-01-01
Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics.
NASA Astrophysics Data System (ADS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2017-12-01
The Monte Carlo simulation method is applied to study the relaxation of excited electrons in monolayer graphene. A spin-polarized background electron population, with a density corresponding to highly degenerate conditions, is assumed. Formulas for the electron-electron scattering rates, which properly account for the presence of electrons in the two energetically degenerate, inequivalent valleys of this material, are presented. The electron relaxation process can be divided into two phases, thermalization and cooling, which can be clearly distinguished by examining the standard deviation of the electron energy distribution. The influence of the exchange effect in interactions between electrons with parallel spins is shown to be important only in transient conditions, especially during the thermalization phase.
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, which allow complicated geometrical structures to be analyzed. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The computational cost is mainly associated with the calculation of logarithmic and trigonometric functions and with the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations are based on polynomial and rational functions, and their errors are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results obtained with the approximated algorithm were compared with those of Monte Carlo simulations using exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of photon times of flight (total number of photons, mean time of flight, and variance) are less than 2% over a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing further acceleration.
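The flavor of the approximation can be reproduced in a few lines. In photon-transport codes the logarithm appears in free-path sampling, s = -ln(xi)/mu_t; the sketch below splits the argument into mantissa and exponent with frexp and replaces ln on the mantissa interval with a fitted polynomial. The degree-4 least-squares fit is our own illustrative choice, not the paper's approximation, and its error is far below the 1% level the authors report.

```python
import math
import numpy as np

# Fit a degree-4 polynomial to ln(m) on the mantissa range [0.5, 1).
m_grid = np.linspace(0.5, 1.0, 1001)
coeffs = np.polyfit(m_grid, np.log(m_grid), 4)
LN2 = math.log(2.0)

def fast_log(x):
    """Approximate ln(x): write x = m * 2**e, then ln(x) ~ poly(m) + e*ln 2."""
    m, e = math.frexp(x)               # m in [0.5, 1), exact and cheap
    return np.polyval(coeffs, m) + e * LN2

def free_path(xi, mu_t):
    """Photon free path length sampled from a uniform deviate xi in (0, 1]."""
    return -fast_log(xi) / mu_t
```

In a production implementation the polynomial would be evaluated with hard-coded coefficients (no NumPy call per photon); the point here is only that a low-degree fit on the reduced range already reproduces ln to high accuracy.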
Souris, Kevin; Lee, John Aldo; Sterpin, Edmond
2016-04-01
Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the latest generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suited to MC methods. The class-II condensed history algorithm of MCsquare provides a fast yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/Geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with GATE/Geant4 for various geometries show deviations within 2%/1 mm. In spite of the limited memory bandwidth of the coprocessor, the simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. The optimized code enables accurate MC calculation within a computation time adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
An Atmospheric Guidance Algorithm Testbed for the Mars Surveyor Program 2001 Orbiter and Lander
NASA Technical Reports Server (NTRS)
Striepe, Scott A.; Queen, Eric M.; Powell, Richard W.; Braun, Robert D.; Cheatwood, F. McNeil; Aguirre, John T.; Sachi, Laura A.; Lyons, Daniel T.
1998-01-01
An Atmospheric Flight Team was formed by the Mars Surveyor Program '01 mission office to develop aerocapture and precision landing testbed simulations and candidate guidance algorithms. Three- and six-degree-of-freedom Mars atmospheric flight simulations have been developed for testing, evaluation, and analysis of candidate guidance algorithms for the Mars Surveyor Program 2001 Orbiter and Lander. These simulations are built around the Program to Optimize Simulated Trajectories. Subroutines were supplied by Atmospheric Flight Team members for modeling the Mars atmosphere, spacecraft control system, aeroshell aerodynamic characteristics, and other Mars 2001 mission specific models. This paper describes these models and their perturbations applied during Monte Carlo analyses to develop, test, and characterize candidate guidance algorithms.
2009-07-01
simulation. The pilot described in this paper used this two-step approach within a Define, Measure, Analyze, Improve, and Control (DMAIC) framework to... Keywords: Bayesian belief networks (BBN), Monte Carlo simulation, DMAIC, Six Sigma, business case.
Pan, Yuxi; Qiu, Rui; Gao, Linfeng; Ge, Chaoyong; Zheng, Junzheng; Xie, Wenzhang; Li, Junli
2014-09-21
With the rapidly growing number of CT examinations, the consequential radiation risk has attracted more and more attention. The average dose in each organ during CT scans can only be obtained by using Monte Carlo simulation with computational phantoms. Since children tend to have higher radiation sensitivity than adults, the radiation dose of pediatric CT examinations requires special attention and needs to be assessed accurately. So far, studies on organ doses from CT exposures for pediatric patients are still limited. In this work, a 1-year-old computational phantom was constructed. The body contour was obtained from the CT images of a 1-year-old physical phantom and the internal organs were deformed from an existing Chinese reference adult phantom. To ensure the organ locations in the 1-year-old computational phantom were consistent with those of the physical phantom, the organ locations in the computational phantom were manually adjusted one by one, and the organ masses were adjusted to the corresponding Chinese reference values. Moreover, a CT scanner model was developed using the Monte Carlo technique and the 1-year-old computational phantom was applied to estimate organ doses derived from simulated CT exposures. As a result, a database including doses to 36 organs and tissues from 47 single axial scans was built. It has been verified by calculation that doses of axial scans are close to those of helical scans; therefore, this database can be applied to helical scans as well. Organ doses were calculated using the database and compared with those obtained from the measurements made in the physical phantom for helical scans. The differences between simulation and measurement were less than 25% for all organs. The result shows that the 1-year-old phantom developed in this work can be used to calculate organ doses in CT exposures, and the dose database provides a method for the estimation of 1-year-old patient doses in a variety of CT examinations.
Yoo, Brian; Marin-Rimoldi, Eliseo; Mullen, Ryan Gotchy; Jusufi, Arben; Maginn, Edward J
2017-09-26
We present a newly developed Monte Carlo scheme to predict bulk surfactant concentrations and surface tensions at the air-water interface for various surfactant interfacial coverages. Since the concentration regimes of these systems of interest are typically very dilute (≪10^-5 mole fraction), Monte Carlo simulations with insertion/deletion moves provide the ability to overcome the finite-system-size limitations that often prohibit the use of modern molecular simulation techniques. In performing these simulations, we use the discrete fractional component Monte Carlo (DFCMC) method in the Gibbs ensemble framework, which allows us to separate the bulk and the air-water interface into two separate boxes and efficiently swap tetraethylene glycol surfactants (C10E4) between boxes. Combining this move with preferential translations, volume-biased insertions, and Wang-Landau biasing vastly enhances sampling and helps overcome the classical "insertion problem" often encountered in non-lattice Monte Carlo simulations. We demonstrate that this methodology is consistent with both the original molecular thermodynamic theory (MTT) of Blankschtein and co-workers and their recently modified theory (MD/MTT), which incorporates surfactant infinite-dilution transfer free energies and surface tension calculations obtained from molecular dynamics simulations.
NASA Astrophysics Data System (ADS)
Orkoulas, Gerassimos; Panagiotopoulos, Athanassios Z.
1994-07-01
In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are T*_c = 0.053, ρ*_c = 0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids.
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
Pratx, Guillem; Xing, Lei
2011-01-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258× speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes.
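The map/reduce decomposition can be mimicked in miniature with Python built-ins (no Hadoop, and the scattering "physics" below is a deliberately trivial albedo toy, not MC321): each map task simulates an independent, separately seeded batch of photon histories, and the reduce step aggregates the partial tallies exactly as the Reduce task scores absorption in the paper.

```python
import random
from functools import reduce

ALBEDO = 0.9                             # scattering probability per interaction

def map_task(task):
    """One map task: simulate a batch of photon histories and return the
    local tally of scattering events (a stand-in for an absorption score)."""
    seed, n_photons = task
    rng = random.Random(seed)            # independent stream per task
    events = 0
    for _ in range(n_photons):
        while rng.random() < ALBEDO:     # photon scatters until absorbed
            events += 1
    return events

# "Map" phase: independent tasks (in the paper, these run on separate nodes).
tasks = [(seed, 10_000) for seed in range(8)]
partials = list(map(map_task, tasks))

# "Reduce" phase: aggregate the partial tallies into a single score.
total_events = reduce(lambda a, b: a + b, partials)
mean_scatters = total_events / 80_000    # theory: ALBEDO/(1-ALBEDO) = 9
```

Because the tasks share no state, any subset of map outputs can be re-run or recomputed after a node failure, which is the fault-tolerance property the Hadoop implementation inherits for free.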
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples, including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedups we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design.
Tennant, Marc; Kruger, Estie
2013-02-01
This study developed a Monte Carlo simulation approach to examining the prevalence and incidence of dental decay, using Australian children as a test environment. Monte Carlo simulation has been used for half a century in particle physics (and elsewhere); put simply, population-level outcome probabilities are randomly seeded to drive the production of individual-level data. A total of five runs of the simulation model for all 275,000 12-year-olds in Australia were completed based on 2005-2006 data. Measured on average decayed/missing/filled teeth (DMFT) and the DMFT of the highest 10% of the sample (SiC10), the runs did not differ from each other by more than 2%, and the outcome was within 5% of the reported sampled population data. The simulations rested on the population probabilities that are known to be strongly linked to dental decay, namely socio-economic status and Indigenous heritage. Testing the simulated population found that the DMFT of all cases with DMFT > 0 was 2.3 (n = 128,609) and the DMFT for Indigenous cases only was 1.9 (n = 13,749). In the simulated population the SiC25 was 3.3 (n = 68,750). Monte Carlo simulation was created in particle physics as a computational mathematical approach to unknown individual-level effects by resting a simulation on known population-level probabilities. In this study a Monte Carlo simulation approach to childhood dental decay was built, tested and validated.
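The seeding mechanism can be sketched in a few lines. Every number below is a hypothetical placeholder (the probability of any decay experience, the conditional mean, and the shifted-Poisson choice are ours, not the study's calibrated, stratified probabilities); the point is only how individual DMFT scores are drawn from population-level parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 275_000                               # simulated 12-year-olds, as in the study

# Hypothetical population-level parameters (illustration only):
p_any_decay = 0.45                        # probability of any decay experience
mean_dmft_affected = 2.3                  # mean DMFT among affected children

affected = rng.random(N) < p_any_decay
dmft = np.zeros(N, dtype=int)
# Individual DMFT for affected children from a shifted Poisson (>= 1 by
# construction), matching the target conditional mean.
dmft[affected] = 1 + rng.poisson(mean_dmft_affected - 1.0, affected.sum())

mean_affected = dmft[affected].mean()
sic10 = np.sort(dmft)[-N // 10:].mean()   # mean DMFT of the worst 10% (SiC10)
```

The study layers stratum-specific probabilities (socio-economic status, Indigenous heritage) on top of this mechanism, so each child's draw is conditioned on group membership rather than on a single population-wide rate.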
CloudMC: a cloud computing application for Monte Carlo simulation.
Miras, H; Jiménez, R; Miras, C; Gomà, C
2013-04-21
This work presents CloudMC, a cloud computing application, developed in Windows Azure® (the Microsoft® cloud platform), for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based: the simulations just need to be of the form input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with an increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability, and pay-per-usage. It is therefore strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
Teaching Ionic Solvation Structure with a Monte Carlo Liquid Simulation Program
ERIC Educational Resources Information Center
Serrano, Agostinho; Santos, Flavia M. T.; Greca, Ileana M.
2004-01-01
The use of molecular dynamics and Monte Carlo methods has provided efficient means to simulate the behavior of molecular liquids and solutions. A Monte Carlo simulation program is used to compute the structure of liquid water and of water as a solvent to Na⁺, Cl⁻, and Ar on a personal computer to show that it is easily feasible to…
NASA Astrophysics Data System (ADS)
Honda, Norihiro; Hazama, Hisanao; Awazu, Kunio
2017-02-01
Interstitial photodynamic therapy (iPDT) with 5-aminolevulinic acid (5-ALA) is a safe and feasible treatment modality for malignant glioblastoma. In order to cover the tumour volume, the exact positions of the light diffusers within the lesion need to be decided precisely. The aim of this study is the development of an evaluation method for the treatment volume of iPDT with 5-ALA, based on 3D Monte Carlo simulation. Monte Carlo simulations of the fluence rate were performed using the optical properties of normal brain tissue and of brain tissue infiltrated by tumour cells. The 3D Monte Carlo simulation was used to calculate the positions of the light diffusers within the lesion and the light transport. The fluence rate was maximal near the diffuser and decreased exponentially with distance. The simulation can also calculate the amount of singlet oxygen generated by PDT. To increase the accuracy of the simulation results, the simulation parameters include the quantum yield of singlet oxygen generation, the accumulated concentration of photosensitizer within the tissue, the fluence rate, and the molar extinction coefficient at the wavelength of the excitation light. The simulation is useful for evaluating the treatment region of iPDT with 5-ALA.
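The reported fall-off of fluence rate with distance from a diffuser is what the standard diffusion approximation for an isotropic point source predicts. A sketch with illustrative placeholder optical properties (not the paper's values):

```python
import math

def fluence_rate(r_cm, mu_a, mu_s_prime, power_w=1.0):
    """Fluence rate of an isotropic point source in the diffusion
    approximation: phi(r) = P * exp(-mu_eff * r) / (4 * pi * D * r)."""
    d = 1.0 / (3.0 * (mu_a + mu_s_prime))   # diffusion coefficient (cm)
    mu_eff = math.sqrt(mu_a / d)            # effective attenuation (1/cm)
    return power_w * math.exp(-mu_eff * r_cm) / (4.0 * math.pi * d * r_cm)

# Illustrative placeholder optical properties, not values from the paper
radii = [0.1, 0.2, 0.5, 1.0, 2.0]                       # cm from the diffuser
phis = [fluence_rate(r, mu_a=0.2, mu_s_prime=20.0) for r in radii]
# Maximal near the diffuser, falling off with distance (faster than a pure
# exponential because of the additional 1/r factor)
print(phis[0] > phis[1] > phis[2] > phis[3] > phis[4])  # True
```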
The ensemble switch method for computing interfacial tensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitz, Fabian; Virnau, Peter
2015-04-14
We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The cases of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.
Modeling Lidar Multiple Scattering
NASA Astrophysics Data System (ADS)
Sato, Kaori; Okamoto, Hajime; Ishimoto, Hiroshi
2016-06-01
A practical model to simulate multiply scattered lidar returns from inhomogeneous cloud layers is developed based on backward Monte Carlo (BMC) simulations. The time delay of the backscattered intensities returning from different vertical grids, as estimated by the developed model, agreed well with that obtained directly from BMC calculations. The method was applied to Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite data to improve the synergetic retrieval of cloud microphysics with CloudSat radar data at optically thick cloud grids. Preliminary results for retrieving the mass fractions of co-existing cloud particles and drizzle-size particles within low-level clouds are demonstrated.
Direct Simulation of Reentry Flows with Ionization
NASA Technical Reports Server (NTRS)
Carlson, Ann B.; Hassan, H. A.
1989-01-01
The Direct Simulation Monte Carlo (DSMC) method is applied in this paper to the study of rarefied, hypersonic, reentry flows. The assumptions and simplifications involved with the treatment of ionization, free electrons and the electric field are investigated. A new method is presented for the calculation of the electric field and handling of charged particles with DSMC. In addition, a two-step model for electron impact ionization is implemented. The flow field representing a 10 km/sec shock at an altitude of 65 km is calculated. The effects of the new modeling techniques on the calculation results are presented and discussed.
The evaluation of 6 and 18 MeV electron beams for small animal irradiation
NASA Astrophysics Data System (ADS)
Chao, T. C.; Chen, A. M.; Tu, S. J.; Tung, C. J.; Hong, J. H.; Lee, C. C.
2009-10-01
A small animal irradiator is critical for providing optimal radiation dose distributions for pre-clinical animal studies. This paper focuses on the evaluation of 6 and 18 MeV electron beams as small animal irradiators. Compared with other prototypes, which use photons to irradiate small animals, an electron irradiator has the advantage of a shallow dose distribution. Two major approaches, simulation and measurement, were used to evaluate the feasibility of applying electron beams to animal irradiation. The simulations and measurements were taken in three different fields (a 6 cm × 6 cm square field, and 4 mm and 30 mm diameter circular fields) and at two different energies (6 MeV and 18 MeV). A PTW Semiflex chamber in a PTW-MP3 water tank, a PTW Markus chamber type 23343, a PTW diamond detector type 60003 and KODAK XV films were used to measure PDDs, lateral beam profiles and output factors, either to optimize parameters of the Monte Carlo simulation or to verify the Monte Carlo simulation in small fields. Results show good agreement in the comparisons of percentage depth doses (<=2.5% for 6 MeV; <=1.8% for 18 MeV) and profiles (FWHM <= 0.5 mm) between simulations and measurements in the 6 cm field. Greater deviation is observed in the 4 mm field, mainly caused by the partial volume effects of the detectors. The FWHM of the profiles for the 18 MeV electron beam is 32.6 mm in the 30 mm field and 4.7 mm in the 4 mm field at d90. It takes 1-13 min to complete one irradiation of 5-10 Gy. In addition, two digital phantoms, a homogeneous cylindrical water phantom and a CT-based heterogeneous mouse phantom, were constructed and implemented into the Monte Carlo code to simulate dose distributions for different electron irradiations.
Magnetization switching process in a torus nanoring with easy-plane surface anisotropy
NASA Astrophysics Data System (ADS)
Alzate-Cardona, J. D.; Sabogal-Suárez, D.; Restrepo-Parra, E.
2017-11-01
We have studied the effects of surface shape anisotropy on the magnetization behavior of a torus nanoring by means of Monte Carlo simulations. Stable states (vortex and reverse vortex states) and metastable states (onion and asymmetric onion states) were found in the torus nanoring. The probability of occurrence of the metastable states (stable states) tends to decrease (increase) as the number of Monte Carlo steps per spin, the temperature steps and the negative values of the anisotropy constant increase. We evaluated under which conditions it is possible to switch the magnetic state of the torus nanoring from a vortex to a reverse vortex state by applying a circular magnetic field in a certain temperature interval. The switching probability (from a vortex to a reverse vortex state) depends on the value of the current intensity, which generates the circular magnetic field, and on the temperature interval over which the magnetic field is applied. There is a linear relationship between the current intensity and the minimum temperature interval above which the vortex state can be switched.
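A minimal illustration of "Monte Carlo steps per spin" via Metropolis sweeps, using a 1D Ising ring as a stand-in for the paper's Heisenberg torus with surface anisotropy (which is considerably more elaborate than this sketch):

```python
import math, random

def metropolis_ring(n_spins, temperature, mc_steps_per_spin, seed=1):
    """Metropolis Monte Carlo on a 1D Ising ring (J = 1, k_B = 1).

    One 'Monte Carlo step per spin' is a sweep of n_spins attempted flips,
    each accepted with the Metropolis probability min(1, exp(-dE/T)).
    """
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n_spins)]
    for _ in range(mc_steps_per_spin):
        for _ in range(n_spins):
            i = rng.randrange(n_spins)
            nb = spins[(i - 1) % n_spins] + spins[(i + 1) % n_spins]
            delta_e = 2.0 * spins[i] * nb   # energy cost of flipping spin i
            if delta_e <= 0 or rng.random() < math.exp(-delta_e / temperature):
                spins[i] = -spins[i]
    return spins

spins = metropolis_ring(200, temperature=0.1, mc_steps_per_spin=500)
e = -sum(spins[i] * spins[(i + 1) % 200] for i in range(200)) / 200
print(e)  # approaches the ground-state energy per spin (-1) at low temperature
```

As in the paper, increasing the number of Monte Carlo steps per spin gives metastable configurations (here, residual domain walls) more opportunity to relax away.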
Improved importance sampling technique for efficient simulation of digital communication systems
NASA Technical Reports Server (NTRS)
Lu, Dingqing; Yao, Kung
1988-01-01
A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evaluations of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evaluations are applied to the specific, previously known conventional importance sampling (CIS) technique and to the new IIS technique. The derivation for a linear system with no memory and no signals is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and of IIS over CIS for simulations of digital communication systems.
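The translation (mean-shift) idea behind the IIS technique can be illustrated on a Gaussian tail probability; this is a generic sketch of importance sampling by translation, not the paper's communication-system setup:

```python
import math, random

def gaussian_tail_mc(threshold, n, rng):
    """Plain Monte Carlo estimate of P(X > threshold) for X ~ N(0, 1)."""
    return sum(rng.gauss(0, 1) > threshold for _ in range(n)) / n

def gaussian_tail_is(threshold, n, rng):
    """Importance sampling with a translated density: draw from
    N(threshold, 1) and reweight each hit by the likelihood ratio
    w(x) = exp(-threshold*x + threshold**2/2)."""
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1)
        if x > threshold:
            total += math.exp(-threshold * x + threshold * threshold / 2.0)
    return total / n

rng = random.Random(0)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # P(X > 4), ~3.17e-5
print(gaussian_tail_mc(4.0, 100_000, rng))      # crude: few or no tail hits
print(gaussian_tail_is(4.0, 100_000, rng))      # close to the exact value
```

Shifting the sampling density so the rare region is hit routinely, then correcting with the likelihood-ratio weight, is what drives the variance reduction reported for IS over plain MC.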
Monte Carlo Simulation of a Segmented Detector for Low-Energy Electron Antineutrinos
NASA Astrophysics Data System (ADS)
Qomi, H. Akhtari; Safari, M. J.; Davani, F. Abbasi
2017-11-01
Detection of low-energy electron antineutrinos is of importance for several purposes, such as ex-vessel reactor monitoring, neutrino oscillation studies, etc. Inverse beta decay (IBD) is the interaction responsible for the detection mechanism in (organic) plastic scintillation detectors. Here, a detailed study is presented of the radiation and optical transport simulation of a typical segmented antineutrino detector with the Monte Carlo method, using the MCNPX and FLUKA codes. This study shows different aspects of the detector, benefiting from the inherent capabilities of the Monte Carlo simulation codes.
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment, based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of a proton with the silicon of the chip is modeled, and the nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Piburn, Jesse O; McManamay, Ryan A
2017-01-01
Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
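One way to realize the proposed idea — a dasymetric model supplying the population-count distribution that a Monte Carlo simulation then samples — is a repeated multinomial allocation of a known regional total across grid cells. The weights and totals below are made up for illustration:

```python
import random
from collections import Counter

def dasymetric_draw(total_pop, weights, rng):
    """One multinomial realization of cell-level population counts:
    a known regional total is allocated to grid cells in proportion to
    dasymetric weights (e.g. derived from land cover). The weights and
    totals here are illustrative, not taken from the paper."""
    cells = rng.choices(range(len(weights)), weights=weights, k=total_pop)
    counts = Counter(cells)
    return [counts.get(i, 0) for i in range(len(weights))]

rng = random.Random(7)
weights = [0.6, 0.3, 0.1]   # hypothetical developed-land fractions per cell
realizations = [dasymetric_draw(1000, weights, rng) for _ in range(100)]
# Every realization conserves the regional total while cell counts vary,
# yielding an input probability distribution over population counts that
# a downstream Monte Carlo simulation can sample from.
assert all(sum(r) == 1000 for r in realizations)
```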
Coulomb interactions in charged fluids.
Vernizzi, Graziano; Guerrero-García, Guillermo Iván; de la Cruz, Monica Olvera
2011-07-01
The use of Ewald summation schemes for calculating long-range Coulomb interactions, originally applied to ionic crystalline solids, is at present a very common practice in molecular simulations of charged fluids. Such a choice imposes an artificial periodicity which is generally absent in the liquid state. In this paper we propose a simple analytical O(N²) method, based on Gauss's law, for computing exactly the Coulomb interaction between charged particles in a simulation box when it is averaged over all possible orientations of a surrounding infinite lattice. This method mitigates the periodicity typical of crystalline systems and is suitable for numerical studies of ionic liquids, charged molecular fluids, and colloidal systems with Monte Carlo and molecular dynamics simulations.
Real-time, ray casting-based scatter dose estimation for c-arm x-ray system.
Alnewaini, Zaid; Langer, Eric; Schaber, Philipp; David, Matthias; Kretz, Dominik; Steil, Volker; Hesser, Jürgen
2017-03-01
Dosimetric control of staff exposure during interventional procedures under fluoroscopy is of high relevance. In this paper, a novel ray casting approximation of radiation transport is presented, and its potential and limitations versus full Monte Carlo transport and dose measurements are discussed. The x-ray source of a Siemens Axiom Artis C-arm is modeled by a virtual source model with a single Gaussian-shaped source. A Geant4-based Monte Carlo simulation determines the radiation transport from the source to compute scatter from the patient, the table, the ceiling and the floor. A phase space around these scatterers stores all photon information. Only those photons that hit the surface of a phantom representing medical staff in the treatment room are traced; no indirect scattering is considered, and the complete dose deposition on the surface is calculated. To evaluate the accuracy of the approximation, both experimental measurements using thermoluminescent dosimeters (TLDs) and a Geant4-based Monte Carlo simulation of dose deposition were realized for different tube angulations of the C-arm, from a cranial-caudal angle of 0° and from LAO (Left Anterior Oblique) 0°-90°. Since the measurements were performed on both sides of the table, by the symmetry of the setup RAO (Right Anterior Oblique) measurements were not necessary. The Geant4 Monte Carlo simulation agreed within 3% with the measured data, which is within the accuracy of measurement and simulation. The ray casting approximation was compared to the TLD measurements; the percentage difference was -7% for tube angulations of 45°-90° and -29% for tube angulations of 0°-45° on the side of the x-ray source, whereas on the opposite side the differences were -83.8% and -75%, respectively.
The ray casting approximation for LAO 90° only was also compared to a Monte Carlo simulation: percentage differences were 0.5-3% on the side of the x-ray source, where the highest doses were detected and arose mainly from primary scattered photons, and 2.8-20% on the side opposite the x-ray source, where the lowest doses were detected. The dose calculation time of our approach was 0.85 s. The proposed approach yields a fast scatter dose estimate: the Monte Carlo simulation is run only once per x-ray tube angulation to generate the phase space files (PSF), which the ray casting approach then uses to calculate the dose from only those photons that hit a movable elliptical-cylinder-shaped phantom, and to output the hit positions for visualizing the scatter dose propagation on the phantom surface. With dose calculation times of less than one second, considerable time is saved compared to running a Monte Carlo simulation instead. With our approach, larger deviations occur only in regions with very low doses, whereas it provides high precision in high-dose regions. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.
2013-10-15
We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain replicated and domain decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility, at the cost of lost accuracy, by rounding double precision numbers to fewer significant digits. This integer approach, and other extended and reduced precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time step.
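Both the non-associativity problem and the integer-tally remedy the abstract describes can be demonstrated in a few lines (the scale factor here is illustrative):

```python
import math

# Double-precision addition is not associative, so summation order matters:
vals = [1.0, 1e100, 1.0, -1e100]
print(sum(vals))            # 0.0: both 1.0 terms absorbed before subtracting
print(sum(reversed(vals)))  # 1.0: a different rounding path
print(math.fsum(vals))      # 2.0: the exact sum

# The integer-tally remedy: round each contribution to a fixed number of
# digits and accumulate in integers. Integer addition is associative, so
# any summation order yields the same tally, at the cost of the discarded
# low-order digits.
SCALE = 10**12
tallies = [0.1, 0.25, 1.0 / 3.0, 0.07]
tally_fwd = sum(round(t * SCALE) for t in tallies)
tally_rev = sum(round(t * SCALE) for t in reversed(tallies))
print(tally_fwd == tally_rev)   # True, by construction
print(tally_fwd / SCALE)
```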
Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment
NASA Astrophysics Data System (ADS)
Ritsch, E.; Atlas Collaboration
2014-06-01
The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently takes the largest share of the computing resources in use by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large-scale Monte Carlo production in the ATLAS experiment for Run 2 and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To benefit optimally from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.
NASA Astrophysics Data System (ADS)
Sang, Chaofeng; Sun, Jizhong; Wang, Dezhen
2010-02-01
A particle-in-cell (PIC) plus Monte Carlo collision simulation is employed to investigate how a sustainable atmospheric pressure single dielectric-barrier discharge responds to a high-voltage nanosecond pulse (HVNP) further applied to the metal electrode. The results show that the HVNP can significantly increase the plasma density in the pulse-on period. The ion-induced secondary electrons can give rise to avalanche ionization in the positive sheath, which widens the discharge region and enhances the plasma density drastically. However, the plasma density stops increasing once the applied pulse lasts beyond a certain time; therefore, lengthening the pulse duration alone cannot further improve the discharge efficiency. Physical reasons for these phenomena are then discussed.
Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians
NASA Astrophysics Data System (ADS)
Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan
2018-02-01
Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and for which destructive interference is thus not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.
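For context on the method being benchmarked, a toy diffusion Monte Carlo run on the 1D harmonic oscillator shows the basic walker-diffusion/branching machinery; the parameters and the population-control scheme are illustrative, and this is far simpler than the Hamiltonians studied in the paper:

```python
import math, random

def dmc_harmonic(n_walkers=500, dt=0.01, n_steps=3000, n_equil=1000, seed=3):
    """Toy diffusion Monte Carlo for the harmonic oscillator V(x) = x^2/2
    (hbar = m = 1): walkers diffuse, then branch with weight
    exp(-(V - E_ref)*dt); E_ref is set to <V> plus a population-control
    feedback term, and its post-equilibration average estimates the
    ground-state energy (exact value 1/2)."""
    rng = random.Random(seed)
    walkers = [rng.gauss(0, 1) for _ in range(n_walkers)]
    e_ref, samples = 0.5, []
    for step in range(n_steps):
        new_walkers = []
        for x in walkers:
            x += rng.gauss(0, math.sqrt(dt))            # free diffusion
            w = math.exp(-(0.5 * x * x - e_ref) * dt)   # birth/death weight
            for _ in range(int(w + rng.random())):      # stochastic rounding
                new_walkers.append(x)
        walkers = new_walkers
        v_mean = sum(0.5 * x * x for x in walkers) / len(walkers)
        e_ref = v_mean + 5.0 * (1.0 - len(walkers) / n_walkers)  # feedback
        if step >= n_equil:
            samples.append(e_ref)
    return sum(samples) / len(samples)

e0 = dmc_harmonic()
print(e0)  # close to the exact ground-state energy 0.5
```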
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphics processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
On the modeling of the 2010 Gulf of Mexico Oil Spill
NASA Astrophysics Data System (ADS)
Mariano, A. J.; Kourafalou, V. H.; Srinivasan, A.; Kang, H.; Halliwell, G. R.; Ryan, E. H.; Roffer, M.
2011-09-01
Two oil particle trajectory forecasting systems were developed and applied to the 2010 Deepwater Horizon Oil Spill in the Gulf of Mexico. Both systems use ocean current fields from high-resolution numerical ocean circulation model simulations, Lagrangian stochastic models to represent unresolved sub-grid scale variability in advecting oil particles, and Monte Carlo-based schemes for representing uncertain biochemical and physical processes. The first system assumes two-dimensional particle motion at the ocean surface, oil in a single state, and particle removal modeled as a Monte Carlo process parameterized by a single removal rate. Oil particles are seeded using both initial conditions based on observations and particles released at the location of the Macondo well. The initial conditions (ICs) of oil particle location for the two-dimensional surface oil trajectory forecasts are based on a fusion of all available information, including satellite-based analyses. The resulting oil map is digitized into a shape file within which polygon-filling software generates longitudes and latitudes, with variable particle density depending on the amount of oil present in the observations for the IC. The more complex system assumes three states for the oil (light, medium, heavy), each with a different removal rate in the Monte Carlo process, three-dimensional particle motion, and a particle size-dependent oil mixing model. Simulations from the two-dimensional forecast system produced results that qualitatively agreed with the uncertain "truth" fields. These simulations validated the use of our Monte Carlo scheme for representing oil removal by evaporation and other weathering processes. Eulerian velocity fields from data-assimilative models produced better particle trajectory distributions than a free-running model with no data assimilation.
Monte Carlo simulations of the three-dimensional oil particle trajectories were performed, with ensembles generated by perturbing the size of the oil particles and the fraction in a given size range released at depth, the two largest unknowns in this problem. Thirty-six realizations of the model were run with only subsurface oil releases. An average of these results yields that after three months, about 25% of the oil remains in the water column and that most of this oil is below 800 m.
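The Monte Carlo removal process described for the two-dimensional system amounts to a per-step Bernoulli survival draw for each particle; a sketch with a made-up removal rate:

```python
import math, random

def weathering_step(particles, removal_rate_per_day, dt_days, rng):
    """One Monte Carlo weathering step: each particle survives with
    probability exp(-r * dt), i.e. removal parameterized by a single
    rate as in the two-dimensional system (the rate value is made up)."""
    p_survive = math.exp(-removal_rate_per_day * dt_days)
    return [p for p in particles if rng.random() < p_survive]

rng = random.Random(5)
particles = list(range(100_000))
r, dt = 0.05, 1.0            # hypothetical 5%-per-day removal, daily steps
for _ in range(30):          # one month of weathering
    particles = weathering_step(particles, r, dt, rng)
frac = len(particles) / 100_000
print(frac)  # ~exp(-0.05 * 30) = 0.223
```

In the three-state system, each particle would carry a state label (light, medium, heavy) selecting its own removal rate.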
An Overview of Importance Splitting for Rare Event Simulation
ERIC Educational Resources Information Center
Morio, Jerome; Pastel, Rudy; Le Gland, Francois
2010-01-01
Monte Carlo simulations are a classical tool to analyse physical systems. When unlikely events are to be simulated, the importance sampling technique is often used instead of Monte Carlo. Importance sampling has some drawbacks when the problem dimensionality is high or when the optimal importance sampling density is complex to obtain. In this…
Badal, Andreu; Badano, Aldo
2009-11-01
It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA™ programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
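The innermost kernel such photon-transport codes evaluate, whether on CPU or GPU, is sampling a free path length from the exponential attenuation law by inverse transform sampling; a sketch with an illustrative attenuation coefficient (not PENELOPE data):

```python
import math, random

def sample_free_paths(mu_per_cm, n, rng):
    """Sample photon free path lengths from the exponential attenuation
    law p(s) = mu * exp(-mu * s) via inverse transform sampling:
    s = -ln(1 - U) / mu for U uniform on [0, 1)."""
    return [-math.log(1.0 - rng.random()) / mu_per_cm for _ in range(n)]

rng = random.Random(11)
mu = 0.2   # illustrative attenuation coefficient (1/cm)
paths = sample_free_paths(mu, 200_000, rng)
print(sum(paths) / len(paths))  # sample mean free path, ~1/mu = 5 cm
```

Because each photon history is independent, this sampling maps naturally onto one-thread-per-history GPU execution, which is the source of the reported speed-up.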
First-principles Monte Carlo simulations of reaction equilibria in compressed vapors
Fetisov, Evgenii O.; Kuo, I-Feng William; Knight, Chris; ...
2016-06-13
Predictive modeling of reaction equilibria presents one of the grand challenges in the field of molecular simulation. Difficulties in the study of such systems arise from the need (i) to accurately model both strong, short-ranged interactions leading to the formation of chemical bonds and weak interactions arising from the environment, and (ii) to sample the range of time scales involving frequent molecular collisions, slow diffusion, and infrequent reactive events. Here we present a novel reactive first-principles Monte Carlo (RxFPMC) approach that allows for investigation of reaction equilibria without the need to prespecify a set of chemical reactions and their ideal-gas equilibrium constants. We apply RxFPMC to investigate a nitrogen/oxygen mixture at T = 3000 K and p = 30 GPa, i.e., conditions that are present in atmospheric lightning strikes and explosions. The RxFPMC simulations show that the solvation environment leads to a significantly enhanced NO concentration that reaches a maximum when oxygen is present in slight excess. In addition, the RxFPMC simulations indicate the formation of NO₂ and N₂O in mole fractions approaching 1%, whereas N₃ and O₃ are not observed. Lastly, the equilibrium distributions obtained from the RxFPMC simulations agree well with those from a thermochemical computer code parametrized to experimental data.
2015-01-01
Many commonly used coarse-grained models for proteins are based on simplified interaction sites and consequently may suffer from significant limitations, such as the inability to properly model protein secondary structure without the addition of restraints. Recent work on a benzene fluid (Lettieri, S.; Zuckerman, D. M. J. Comput. Chem. 2012, 33, 268−275) suggested an alternative strategy of tabulating and smoothing fully atomistic orientation-dependent interactions among rigid molecules or fragments. Here we report our initial efforts to apply this approach to the polar and covalent interactions intrinsic to polypeptides. We divide proteins into nearly rigid fragments, construct distance and orientation-dependent tables of the atomistic interaction energies between those fragments, and apply potential energy smoothing techniques to those tables. The amount of smoothing can be adjusted to give coarse-grained models that range from the underlying atomistic force field all the way to a bead-like coarse-grained model. For a moderate amount of smoothing, the method is able to preserve about 70–90% of the α-helical structure while providing a factor of 3–10 improvement in sampling per unit computation time (depending on how sampling is measured). For a greater amount of smoothing, multiple folding–unfolding transitions of the peptide were observed, along with a factor of 10–100 improvement in sampling per unit computation time, although the time spent in the unfolded state was increased compared with less smoothed simulations. For a β hairpin, secondary structure is also preserved, albeit for a narrower range of the smoothing parameter and, consequently, for a more modest improvement in sampling. We have also applied the new method in a "resolution exchange" setting, in which each replica runs a Monte Carlo simulation with a different degree of smoothing.
We obtain exchange rates that compare favorably to our previous efforts at resolution exchange (Lyman, E.; Zuckerman, D. M. J. Chem. Theory Comput. 2006, 2, 656−666). PMID: 25400525
Multiscale modeling of a rectifying bipolar nanopore: Comparing Poisson-Nernst-Planck to Monte Carlo
NASA Astrophysics Data System (ADS)
Matejczyk, Bartłomiej; Valiskó, Mónika; Wolfram, Marie-Therese; Pietschmann, Jan-Frederik; Boda, Dezső
2017-03-01
In the framework of a multiscale modeling approach, we present a systematic study of a bipolar rectifying nanopore using a continuum and a particle simulation method. The common ground in the two methods is the application of the Nernst-Planck (NP) equation to compute ion transport in the framework of the implicit-water electrolyte model. The difference is that the Poisson-Boltzmann theory is used in the Poisson-Nernst-Planck (PNP) approach, while the Local Equilibrium Monte Carlo (LEMC) method is used in the particle simulation approach (NP+LEMC) to relate the concentration profile to the electrochemical potential profile. Since we consider a bipolar pore which is short and narrow, we perform simulations using two-dimensional PNP. In addition, results of a non-linear version of PNP that takes crowding of ions into account are shown. We observe that the mean field approximation applied in PNP is appropriate to reproduce the basic behavior of the bipolar nanopore (e.g., rectification) for varying parameters of the system (voltage, surface charge, electrolyte concentration, and pore radius). We present current data that characterize the nanopore's behavior as a device, as well as concentration, electrical potential, and electrochemical potential profiles.
Monte Carlo simulation of electrothermal atomization on a desktop personal computer
NASA Astrophysics Data System (ADS)
Histen, Timothy E.; Güell, Oscar A.; Chavez, Iris A.; Holcombe, James A.
1996-07-01
Monte Carlo simulations have been applied to electrothermal atomization (ETA) using a tubular atomizer (e.g. graphite furnace) because of the complexity of the geometry, heating, molecular interactions, etc. The intense computation time needed to accurately model ETA often limited its effective implementation to the use of supercomputers. However, with the advent of more powerful desktop processors, this is no longer the case. A C-based program has been developed that can be used under Windows™ or DOS. With this program, basic parameters such as furnace dimensions, sample placement, and furnace heating, and kinetic parameters such as activation energies for desorption and adsorption, can be varied to show the absorbance profile dependence on these parameters. Even data such as the time-dependent spatial distribution of analyte inside the furnace can be collected. The DOS version also permits input of external temperature-time data to permit comparison of simulated profiles with experimentally obtained absorbance data. The run-time versions are provided along with the source code. This article is an electronic publication in Spectrochimica Acta Electronica (SAE), the electronic section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by a diskette with a program (PC format), data files and text files.
Henderson, Douglas; Silvestre-Alcantara, Whasington; Kaja, Monika; ...
2016-08-18
Here, the density functional theory is applied to a study of the structure and differential capacitance of a planar electric double layer formed by a valency-asymmetric mixture of charged dimers and monomers. The dimer consists of two tangentially tethered hard spheres of equal diameters, of which one is charged and the other is neutral, while the monomer is a charged hard sphere of the same size. The dimer electrolyte is next to a uniformly charged, smooth planar electrode. The electrode-particle singlet distributions, the mean electrostatic potential, and the differential capacitance for the model double layer are evaluated for a 2:1/1:2 valency electrolyte at a given concentration. Important consequences of asymmetry in charges and in ion shapes are (i) a finite, non-zero potential of zero charge, and (ii) asymmetric shaped 2:1 and 1:2 capacitance curves which are not mirror images of each other. Comparisons of the density functional results with the corresponding Monte Carlo simulations show the theoretical predictions to be in good agreement with the simulations overall, except near zero surface charge.
NASA Astrophysics Data System (ADS)
dos Santos, G. J.; Linares, D. H.; Ramirez-Pastor, A. J.
2018-04-01
The phase behaviour of aligned rigid rods of length k (k-mers) adsorbed on two-dimensional square lattices has been studied by Monte Carlo (MC) simulations and the histogram reweighting technique. The k-mers, containing k identical units (each one occupying a lattice site), were deposited along one of the directions of the lattice. In addition, attractive lateral interactions were considered. The methodology was applied, particularly, to the study of the critical point of the condensation transition occurring in the system. The process was monitored by following the fourth-order Binder cumulant as a function of temperature for different lattice sizes. The results, obtained for k ranging from 2 to 7, show that: (i) the transition coverage exhibits a decreasing behaviour when plotted as a function of the k-mer size, and (ii) the transition temperature, Tc, exhibits a power-law dependence on k, Tc ∼ k^0.4, shifting to higher values as k increases. Comparisons with an analytical model based on a generalization of the Bragg-Williams approximation (BWA) were performed in order to support the simulation technique. A significant qualitative agreement was obtained between BWA and MC results.
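The fourth-order Binder cumulant used to monitor the transition, U4 = 1 − ⟨m⁴⟩/(3⟨m²⟩²), tends to 2/3 in the ordered phase and to 0 in the disordered phase, and the crossing of U4(T) curves for different lattice sizes locates the critical point. A minimal sketch (not the authors' code; the sampled magnetizations below are synthetic stand-ins for MC data):

```python
import numpy as np

def binder_cumulant(m):
    """Fourth-order Binder cumulant U4 = 1 - <m^4> / (3 <m^2>^2)."""
    m = np.asarray(m, dtype=float)
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)

rng = np.random.default_rng(0)
ordered = rng.choice([-1.0, 1.0], size=100_000)   # sharply peaked at +/-1 -> U4 -> 2/3
disordered = rng.normal(0.0, 1.0, size=100_000)   # Gaussian fluctuations   -> U4 -> 0
```

In a finite-size scaling study, U4(T) is computed this way from the sampled order parameter for several lattice sizes and the common crossing estimates Tc.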
Peter, Emanuel K; Shea, Joan-Emma; Pivkin, Igor V
2016-05-14
In this paper, we present a coarse replica exchange molecular dynamics (REMD) approach based on kinetic Monte Carlo (kMC). The new development can significantly reduce the number of replicas and the computational cost needed to enhance sampling in protein simulations. We introduce two different methods which differ primarily in the exchange scheme between the parallel ensembles. We apply this approach to the folding of two different β-stranded peptides: the C-terminal β-hairpin fragment of GB1 and TrpZip4. Additionally, we use the new simulation technique to study the folding of TrpCage, a small fast-folding α-helical peptide. Subsequently, we apply the new methodology to conformational changes in signaling of the light-oxygen-voltage (LOV) sensitive domain from Avena sativa (AsLOV2). Our results agree well with data reported in the literature. In simulations of dialanine, we compare the statistical sampling of the two techniques with conventional REMD and analyze their performance. The new techniques can reduce the computational cost of REMD significantly and can be used in enhanced sampling simulations of biomolecules.
Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu
2005-01-01
Geostatistical stochastic simulation is often combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running times of spatially explicit forest models, which result from their complexity, it is often infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
Evaluating the performance of distributed approaches for modal identification
NASA Astrophysics Data System (ADS)
Krishnan, Sriram S.; Sun, Zhuoxiong; Irfanoglu, Ayhan; Dyke, Shirley J.; Yan, Guirong
2011-04-01
In this paper two modal identification approaches appropriate for use in a distributed computing environment are applied to a full-scale, complex structure. The natural excitation technique (NExT) used in conjunction with a condensed eigensystem realization algorithm (ERA), and frequency domain decomposition with peak-picking (FDD-PP), are both applied to sensor data acquired from a 57.5-ft, 10-bay highway sign truss structure. Monte-Carlo simulations are performed on a numerical example to investigate the statistical properties and noise sensitivity of the two distributed algorithms. Experimental results are provided and discussed.
Full System Model of Magnetron Sputter Chamber - Proof-of-Principle Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walton, C; Gilmer, G; Zepeda-Ruiz, L
2007-05-04
The lack of detailed knowledge of internal process conditions remains a key challenge in magnetron sputtering, both for chamber design and for process development. Fundamental information such as the pressure and temperature distribution of the sputter gas, and the energies and arrival angles of the sputtered atoms and other energetic species, is often missing, or is only estimated from general formulas. However, open-source or low-cost tools are available for modeling most steps of the sputter process, which can give more accurate and complete data than textbook estimates, using only desktop computations. To get a better understanding of magnetron sputtering, we have collected existing models for the 5 major process steps: the input and distribution of the neutral background gas using Direct Simulation Monte Carlo (DSMC), dynamics of the plasma using Particle In Cell-Monte Carlo Collision (PIC-MCC), impact of ions on the target using molecular dynamics (MD), transport of sputtered atoms to the substrate using DSMC, and growth of the film using hybrid Kinetic Monte Carlo (KMC) and MD methods. Models have been tested against experimental measurements. For example, gas rarefaction as observed by Rossnagel and others has been reproduced, and it is associated with a local pressure increase of ≈50% which may strongly influence film properties such as stress. Results on energies and arrival angles of sputtered atoms and reflected gas neutrals are applied to the Kinetic Monte Carlo simulation of film growth. Model results and applications to growth of dense Cu and Be films are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudhyadhom, A; McGuinness, C; Descovich, M
Purpose: To develop a methodology for validation of a Monte-Carlo dose calculation model for robotic small-field SRS/SBRT deliveries. Methods: In a robotic treatment planning system, a Monte-Carlo model was iteratively optimized to match beam data. A two-part analysis was developed to verify this model. 1) The Monte-Carlo model was validated in a simulated water phantom against a Ray-Tracing calculation on a single-beam, collimator-by-collimator basis. 2) The Monte-Carlo model was validated to be accurate in the most challenging situation, lung, by acquiring in-phantom measurements. A plan was created and delivered in a CIRS lung phantom with a film insert. Separately, plans were delivered in an in-house lung phantom with a PinPoint chamber insert within a lung-simulating material. For medium to large collimator sizes, a single beam was delivered to the phantom. For small collimators (10, 12.5, and 15 mm), a robotically delivered plan was created to generate a uniform dose field of irradiation over a 2×2 cm² area. Results: Dose differences in simulated water between Ray-Tracing and Monte-Carlo were all within 1% at dmax and deeper. Maximum dose differences occurred prior to dmax but were all within 3%. Film measurements in a lung phantom showed high correspondence, over 95% gamma passing at the 2%/2 mm level, for Monte-Carlo. Ion chamber measurements for collimator sizes of 12.5 mm and above were within 3% of Monte-Carlo calculated values. Uniform irradiation involving the 10 mm collimator resulted in a dose difference of ∼8% for both Monte-Carlo and Ray-Tracing, indicating that there may be limitations with the dose calculation. Conclusion: We have developed a methodology to validate a Monte-Carlo model by verifying that it matches in water and, separately, that it corresponds well in lung-simulating materials. The Monte-Carlo model and algorithm tested may have more limited accuracy for 10 mm fields and smaller.
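The film comparison above is quantified with a gamma analysis at the 2%/2 mm level. A minimal 1-D global gamma-index sketch (illustrative only: the dose profile, grid, and 1 mm shift are invented, and clinical tools evaluate 2-D/3-D dose grids):

```python
import numpy as np

def gamma_index(x, dose_eval, dose_ref, dd=0.02, dta=2.0):
    """1-D global gamma: for each reference point, minimise the combined
    dose-difference / distance-to-agreement metric over all evaluated points."""
    norm = dose_ref.max()
    gammas = []
    for xr, dr in zip(x, dose_ref):
        g2 = ((x - xr) / dta) ** 2 + ((dose_eval - dr) / (dd * norm)) ** 2
        gammas.append(np.sqrt(g2.min()))
    return np.array(gammas)

x = np.linspace(0.0, 50.0, 201)             # position in mm
ref = np.exp(-((x - 25.0) / 8.0) ** 2)      # toy reference dose profile
shifted = np.exp(-((x - 26.0) / 8.0) ** 2)  # evaluated profile, shifted by 1 mm
pass_rate = np.mean(gamma_index(x, shifted, ref) <= 1.0)
```

A point passes when γ ≤ 1; here the 1 mm shift stays well inside the 2 mm distance-to-agreement criterion, so every point passes.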
Nanothermodynamics Applied to Thermal Processes in Heterogeneous Materials
2012-08-03
...models agree favorably with a wide range of measurements of local thermal and dynamic properties. Progress in understanding basic thermodynamic... Monte-Carlo (MC) simulations of the Ising model. The solid black lines in Fig. 4 show results using the uncorrected (Metropolis) algorithm on the... parameter g=0.5 (green, dash-dot), g=1 (black, solid), and g=2 (blue, dash-dot-dot). Note the failure of the standard Ising model (g=0) to match...
Phase transitions in Nowak Sznajd opinion dynamics
NASA Astrophysics Data System (ADS)
Wołoszyn, Maciej; Stauffer, Dietrich; Kułakowski, Krzysztof
2007-05-01
The Nowak modification of the Sznajd opinion dynamics model on the square lattice assumes that with probability β the opinions flip due to mass-media advertising from down to up, and vice versa. Besides, with probability α the Sznajd rule applies that a neighbour pair agreeing in its two opinions convinces all its six neighbours of that opinion. Our Monte Carlo simulations and mean-field theory find sharp phase transitions in the parameter space.
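A toy implementation of these rules (a sketch under stated assumptions: the convincing pair is horizontal, advertising flips opinions only upward, and the parameter values and lattice size are arbitrary rather than taken from the paper):

```python
import random

def nowak_sznajd(L, alpha, beta, steps, seed=1):
    """Nowak-Sznajd-style opinion dynamics on an L x L periodic square lattice."""
    rng = random.Random(seed)
    spin = [[-1] * L for _ in range(L)]
    for _ in range(steps):
        # mass-media advertising: with probability beta a random site flips up
        if rng.random() < beta:
            i, j = rng.randrange(L), rng.randrange(L)
            spin[i][j] = 1
        # Sznajd rule: with probability alpha, an agreeing horizontal pair
        # convinces the six sites surrounding it
        if rng.random() < alpha:
            i, j = rng.randrange(L), rng.randrange(L)
            jn = (j + 1) % L
            if spin[i][j] == spin[i][jn]:
                s = spin[i][j]
                for di, dj in ((-1, 0), (1, 0), (0, -1)):
                    spin[(i + di) % L][(j + dj) % L] = s
                for di, dj in ((-1, 0), (1, 0), (0, 1)):
                    spin[(i + di) % L][(jn + dj) % L] = s
    return spin

lattice = nowak_sznajd(L=10, alpha=0.5, beta=0.5, steps=100_000)
```

Scanning α and β and recording the final magnetization over many independent runs is how a phase diagram of this kind is mapped out.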
Monte Carlo simulation of biomolecular systems with BIOMCSIM
NASA Astrophysics Data System (ADS)
Kamberaj, H.; Helms, V.
2001-12-01
A new Monte Carlo simulation program, BIOMCSIM, is presented that has been developed in particular to simulate the behaviour of biomolecular systems, leading to insights and understanding of their functions. The computational complexity in Monte Carlo simulations of high density systems, with large molecules like proteins immersed in a solvent medium, or when simulating the dynamics of water molecules in a protein cavity, is enormous. The program presented in this paper seeks to provide these desirable features, putting special emphasis on simulations in grand canonical ensembles. It uses different biasing techniques to increase the convergence of simulations, and periodic load balancing in its parallel version, to maximally utilize the available computer power. In periodic systems, the long-ranged electrostatic interactions can be treated by Ewald summation. The program is modularly organized, and implemented using an ANSI C dialect, so as to enhance its modifiability. Its performance is demonstrated in benchmark applications for the proteins BPTI and Cytochrome c Oxidase.
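Grand-canonical ensembles, emphasized above, add particle insertion and deletion moves to the usual displacement moves. A minimal sketch for the simplest possible case, an ideal gas with activity z = exp(βμ)/Λ³ (not BIOMCSIM's algorithm; a real biomolecular GCMC move also includes the energy change ΔU and the biasing techniques the abstract mentions):

```python
import random

def gcmc_ideal_gas(z, V, steps, seed=3):
    """Grand-canonical MC for an ideal gas (no interactions, so Delta U = 0).
    Insertion acceptance: min(1, z V / (N + 1)); deletion: min(1, N / (z V))."""
    rng = random.Random(seed)
    N, total = 0, 0
    for _ in range(steps):
        if rng.random() < 0.5:                       # attempt an insertion
            if rng.random() < min(1.0, z * V / (N + 1)):
                N += 1
        elif N > 0:                                  # attempt a deletion
            if rng.random() < min(1.0, N / (z * V)):
                N -= 1
        total += N
    return total / steps

mean_N = gcmc_ideal_gas(z=0.5, V=40.0, steps=200_000)
# exact grand-canonical average for an ideal gas: <N> = z V = 20
```

The run should fluctuate around the exact ⟨N⟩ = zV, which is the standard sanity check for a GCMC implementation before interactions are switched on.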
Self-Learning Monte Carlo Method
NASA Astrophysics Data System (ADS)
Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large systems close to a phase transition or with strong frustration, for which local updates perform badly. In this work, we propose a new general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup. This work is supported by the DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-SC0010526.
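The two-stage idea, learning an effective model from trial data and then using it to drive updates, can be illustrated on a toy 1-D Ising chain (an illustration of the idea, not the authors' implementation; here the true model has an extra next-nearest-neighbour coupling J2 that the learned nearest-neighbour model absorbs into an effective J):

```python
import numpy as np

rng = np.random.default_rng(2)
L, J, J2, beta = 32, 1.0, 0.25, 0.5   # true model: H = -J sum s_i s_{i+1} - J2 sum s_i s_{i+2}

def energy(s):
    return -J * np.sum(s * np.roll(s, 1)) - J2 * np.sum(s * np.roll(s, 2))

# stage 1: trial simulation with local Metropolis updates on the true Hamiltonian
s = rng.choice([-1, 1], size=L)
configs = []
for step in range(20_000):
    i = rng.integers(L)
    dE = 2 * s[i] * (J * (s[i - 1] + s[(i + 1) % L]) + J2 * (s[i - 2] + s[(i + 2) % L]))
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        s[i] = -s[i]
    if step % 50 == 0:
        configs.append(s.copy())

# stage 2: learn a nearest-neighbour effective model H_eff = -J_eff * C,
# with C = sum_i s_i s_{i+1}, by least squares on the trial data
C = np.array([np.sum(c * np.roll(c, 1)) for c in configs])
E = np.array([energy(c) for c in configs])
J_eff = -np.sum(C * E) / np.sum(C * C)
# J_eff comes out above J because the fit absorbs part of the neglected J2 term
```

In the full SLMC scheme the learned H_eff then drives global (e.g. cluster) proposals, which are corrected by a Metropolis acceptance on H − H_eff so the sampling remains exact.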
NASA Astrophysics Data System (ADS)
Schuler, A.; Kostro, A.; Huriet, B.; Galande, C.; Scartezzini, J.-L.
2008-08-01
One promising application of semiconductor nanostructures in the field of photovoltaics might be quantum dot solar concentrators. Quantum-dot-containing nanocomposite thin films are synthesized at EPFL-LESO by a low-cost sol-gel process. In order to study the potential of the novel planar photoluminescent concentrators, reliable computer simulations are needed. A computer code for ray tracing simulations of quantum dot solar concentrators has been developed at EPFL-LESO on the basis of Monte Carlo methods that are applied to polarization-dependent reflection/transmission at interfaces, photon absorption by the semiconductor nanocrystals, and photoluminescent reemission. The software allows importing measured or theoretical absorption/reemission spectra describing the photoluminescent properties of the quantum dots. The properties of photoluminescent reemission are hereby described by a set of emission spectra depending on the energy of the incoming photon, allowing the photoluminescent emission to be simulated using the inverse-function method. Our simulations reveal the importance of two main factors: an emission spectrum matched to the spectral efficiency curve of the photovoltaic cell, and a large Stokes shift, which is advantageous for the lateral energy transport. No significant energy losses are implied when the quantum dots are contained within a nanocomposite coating instead of being dispersed in the entire volume of the pane. Together with knowledge of the optoelectronic properties of suitable photovoltaic cells, the simulations allow prediction of the total efficiency of the envisaged concentrating PV systems, and optimization of photoluminescent emission frequencies, optical densities, and pane dimensions.
Reconstructing in-vivo reflectance spectrum of pigmented skin lesion by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Wang, Shuang; He, Qingli; Zhao, Jianhua; Lui, Harvey; Zeng, Haishan
2012-03-01
In dermatology applications, diffuse reflectance spectroscopy has been extensively investigated as a promising noninvasive tool to distinguish melanoma from benign pigmented skin lesions (nevi), both of which are concentrated with skin chromophores such as melanin and hemoglobin. We carried out a theoretical study to examine melanin distribution in human skin tissue and to establish a practical optical model for further pigmented-skin investigation, using junctional nevus as an example. A multiple-layer skin optical model was developed from established skin anatomy and the published optical parameters of the different skin layers, blood, and melanin. Monte Carlo simulation was used to model the interaction between excitation light and skin tissue and to rebuild the diffuse reflectance from skin tissue. A validated methodology was adopted to determine melanin content in human skin from in vivo diffuse reflectance spectra. The rebuilt diffuse reflectance spectra were investigated by adding melanin into different layers of the theoretical model. An in vivo reflectance spectrum from a junctional nevus and its surrounding normal skin was studied by comparing the nevus-to-normal-skin ratio in both the experimental and simulated diffuse reflectance spectra. The simulation results showed good agreement with our clinical measurements, indicating that our research method, including the spectral ratio method, the skin optical model, and modification of the melanin content in the model, can be applied in further theoretical simulation of pigmented skin lesions.
Renner, Franziska
2016-09-01
Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide. Copyright © 2015. Published by Elsevier GmbH.
Monte Carlo Simulation Using HyperCard and Lotus 1-2-3.
ERIC Educational Resources Information Center
Oulman, Charles S.; Lee, Motoko Y.
Monte Carlo simulation is a computer modeling procedure for mimicking observations on a random variable. A random number generator is used in generating the outcome for the events that are being modeled. The simulation can be used to obtain results that otherwise require extensive testing or complicated computations. This paper describes how Monte…
Monte Carlo Particle Lists: MCPL
NASA Astrophysics Data System (ADS)
Kittelmann, T.; Klinkby, E.; Knudsen, E. B.; Willendrup, P.; Cai, X. X.; Kanaki, K.
2017-09-01
A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
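The idea of a compact binary list of particle states can be sketched with Python's struct module (a simplified stand-in format, not the actual MCPL layout, which defines its own header fields, per-particle options, and unit conventions):

```python
import os, struct, tempfile

HEADER = struct.Struct("<4si")           # magic string + particle count
RECORD = struct.Struct("<idddddddd")     # pdg code, ekin, x, y, z, ux, uy, uz, weight

def write_particles(path, particles):
    with open(path, "wb") as f:
        f.write(HEADER.pack(b"PLST", len(particles)))
        for p in particles:
            f.write(RECORD.pack(*p))

def read_particles(path):
    with open(path, "rb") as f:
        magic, count = HEADER.unpack(f.read(HEADER.size))
        assert magic == b"PLST", "not a particle-list file"
        return [RECORD.unpack(f.read(RECORD.size)) for _ in range(count)]

# round trip: one thermal-neutron-like record (PDG code 2112)
neutron = (2112, 2.5e-8, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0)
path = os.path.join(tempfile.mkdtemp(), "particles.bin")
write_particles(path, [neutron])
back = read_particles(path)
```

Fixed-size little-endian records are what make such files easy to stream between independently written simulation codes; converters then only need the record layout, not the producing application.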
ERIC Educational Resources Information Center
Houser, Larry L.
1981-01-01
Monte Carlo methods are used to simulate activities in baseball such as a team's "hot streak" and a hitter's "batting slump." Student participation in such simulations is viewed as a useful method of giving pupils a better understanding of the probability concepts involved. (MP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl
Standard computational methods used to take the Pauli exclusion principle into account in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
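The rejection step that enforces the exclusion principle, accepting a scattering event with probability 1 − f(final state), can be illustrated with a toy bin model (invented parameters; not the authors' algorithm, which treats realistic phonon, impurity, and e–e scattering in k-space):

```python
import random

rng = random.Random(5)
bins, cap = 8, 100                      # energy bins, states per bin
n = [cap] * 4 + [0] * 4                 # degenerate start: lowest bins completely full
electrons = [b for b in range(4) for _ in range(cap)]

for step in range(200_000):
    k = rng.randrange(len(electrons))   # pick an electron
    b2 = rng.randrange(bins)            # candidate final bin
    if b2 == electrons[k]:
        continue
    # Pauli blocking: accept the transfer with probability 1 - f(final)
    if rng.random() < 1.0 - n[b2] / cap:
        n[electrons[k]] -= 1
        n[b2] += 1
        electrons[k] = b2

occupations = [x / cap for x in n]      # distribution function f per bin
```

Because a move into a full bin is always rejected, the occupation f can never exceed unity, which is exactly the pathology of the uncorrected low-field algorithms described above.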
Simulation of Radiation Damage to Neural Cells with the Geant4-DNA Toolkit
NASA Astrophysics Data System (ADS)
Bayarchimeg, Lkhagvaa; Batmunkh, Munkhbaatar; Belov, Oleg; Lkhagva, Oidov
2018-02-01
To help in understanding the physical and biological mechanisms underlying the effects of cosmic and therapeutic types of radiation on the central nervous system (CNS), we have developed an original neuron application based on the Geant4 Monte Carlo simulation toolkit, in particular on its biophysical extension Geant4-DNA. The applied simulation technique provides a tool for the simulation of physical, physico-chemical and chemical processes (e.g. production of water radiolysis species in the vicinity of neurons) in a realistic geometrical model of neural cells exposed to ionizing radiation. The present study evaluates the microscopic energy depositions and water radiolysis species yields within a detailed structure of a selected neuron, taking into account its soma, dendrites, axon and spines, following irradiation with carbon and iron ions.
Instrumental resolution of the chopper spectrometer 4SEASONS evaluated by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Kajimoto, Ryoichi; Sato, Kentaro; Inamura, Yasuhiro; Fujita, Masaki
2018-05-01
We performed simulations of the resolution function of the 4SEASONS spectrometer at J-PARC by using the Monte Carlo simulation package McStas. The simulations showed reasonably good agreement with analytical calculations of energy and momentum resolutions by using a simplified description. We implemented new functionalities in Utsusemi, the standard data analysis tool used in 4SEASONS, to enable visualization of the simulated resolution function and predict its shape for specific experimental configurations.
OBJECT KINETIC MONTE CARLO SIMULATIONS OF MICROSTRUCTURE EVOLUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.
2013-09-30
The objective is to report the development of the flexible object kinetic Monte Carlo (OKMC) simulation code KSOME (kinetic simulation of microstructure evolution) which can be used to simulate microstructure evolution of complex systems under irradiation. In this report we briefly describe the capabilities of KSOME and present preliminary results for short term annealing of single cascades in tungsten at various primary-knock-on atom (PKA) energies and temperatures.
NASA Astrophysics Data System (ADS)
Lu, D.; Ricciuto, D. M.; Evans, K. J.
2017-12-01
Data-worth analysis plays an essential role in improving the understanding of the subsurface system, in developing and refining subsurface models, and in supporting rational water resources management. However, data-worth analysis is computationally expensive, as it requires quantifying parameter uncertainty, prediction uncertainty, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface simulations using standard Monte Carlo (MC) sampling or advanced surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian analysis of data-worth using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce the computational cost through the use of multifidelity approximations. As the data-worth analysis involves a large number of expectation estimates, the resulting cost savings can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we apply it to a highly heterogeneous oil reservoir simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data, and are consistent with the estimation obtained from standard MC. Compared to standard MC, however, MLMC greatly reduces the computational cost of the uncertainty reduction estimation, with up to 600 days of cost savings when one processor is used.
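The telescoping-sum structure of MLMC can be sketched on a toy stochastic model (not the subsurface simulator above): coupled fine and coarse Euler–Maruyama paths of dS = S dW share the same Brownian increments, so the level corrections have small variance and most samples can be spent on the cheap coarse level. For this martingale the exact answer is E[S(1)] = 1.

```python
import numpy as np

rng = np.random.default_rng(4)

def coupled_pair(level, n_samples):
    """Fine and coarse Euler-Maruyama solutions of dS = S dW on [0, 1],
    driven by the same Brownian increments (the MLMC coupling)."""
    nf = 2 ** level
    dW = rng.normal(0.0, np.sqrt(1.0 / nf), size=(n_samples, nf))
    S_fine = np.prod(1.0 + dW, axis=1)
    if level == 0:
        return S_fine, None
    dW_coarse = dW[:, 0::2] + dW[:, 1::2]          # pairwise-summed increments
    S_coarse = np.prod(1.0 + dW_coarse, axis=1)
    return S_fine, S_coarse

# telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
# with many cheap samples at level 0 and progressively fewer at fine levels
estimate = 0.0
for level, n in enumerate([40_000, 10_000, 2_500, 600]):
    fine, coarse = coupled_pair(level, n)
    estimate += fine.mean() if coarse is None else (fine - coarse).mean()
```

The per-level sample counts here are hand-picked; a production MLMC driver chooses them from the estimated variance of each correction to meet a target accuracy at minimum cost.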
CGRO Guest Investigator Program
NASA Technical Reports Server (NTRS)
Begelman, Mitchell C.
1997-01-01
The following are highlights from the research supported by this grant: (1) Theory of gamma-ray blazars: We studied the theory of gamma-ray blazars, being among the first investigators to propose that the GeV emission arises from Comptonization of diffuse radiation surrounding the jet, rather than from the synchrotron-self-Compton mechanism. In related work, we uncovered possible connections between the mechanisms of gamma-ray blazars and those of intraday radio variability, and have conducted a general study of the role of Compton radiation drag on the dynamics of relativistic jets. (2) A Nonlinear Monte Carlo code for gamma-ray spectrum formation: We developed, tested, and applied the first Nonlinear Monte Carlo (NLMC) code for simulating gamma-ray production and transfer under much more general (and realistic) conditions than are accessible with other techniques. The present version of the code is designed to simulate conditions thought to be present in active galactic nuclei and certain types of X-ray binaries, and includes the physics needed to model thermal and nonthermal electron-positron pair cascades. Unlike traditional Monte-Carlo techniques, our method can accurately handle highly non-linear systems in which the radiation and particle backgrounds must be determined self-consistently and in which the particle energies span many orders of magnitude. Unlike models based on kinetic equations, our code can handle arbitrary source geometries and relativistic kinematic effects. In its first important application following testing, we showed that popular semi-analytic accretion disk corona models for Seyfert spectra are seriously in error, and demonstrated how the spectra can be simulated if the disk is sparsely covered by localized 'flares'.
Influences of 3D PET scanner components on increased scatter evaluated by a Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Hirano, Yoshiyuki; Koshino, Kazuhiro; Iida, Hidehiro
2017-05-01
Monte Carlo simulation is widely applied to evaluate the performance of three-dimensional positron emission tomography (3D-PET). For accurate scatter simulations, all components that generate scatter need to be taken into account. The aim of this work was to identify the components that influence scatter. The simulated geometries of a PET scanner were: a precisely reproduced configuration including all of the components; a configuration with the bed, the tunnel and shields; a configuration with the bed and shields; and the simplest geometry with only the bed. We measured and simulated the scatter fraction using two different set-ups: (1) as prescribed by NEMA-NU 2007 and (2) a similar set-up but with a shorter line source, so that all activity was contained only inside the field-of-view (FOV), in order to reduce the influence of components outside the FOV. The scatter fractions for the two experimental set-ups were, respectively, 45% and 38%. Regarding the geometrical configurations, the former two configurations gave simulation results in good agreement with the experimental results, but the simulation results of the simplest geometry were significantly different at the edge of the FOV. From the simulation of the precise configuration, the object (scatter phantom) was the source of more than 90% of the scatter. This was also confirmed by visualization of photon trajectories. The bed and the tunnel were the main sources of the remaining scatter. From the simulation results, we concluded that the precise construction was not needed; the shields, the tunnel, the bed and the object were sufficient for accurate scatter simulations.
Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.
Yuan, J; Moses, G A; McKenty, P W
2005-10-01
A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight line approximation is used to follow propagation of "Monte Carlo particles" which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal, severely distorted mesh cells, particle relocation on the moving mesh, and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show that the Monte Carlo transport method predicts earlier ignition than the diffusion method, and generates a higher hot-spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.
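The continuous-slowing-down deposition along a straight track can be sketched in one dimension (a toy with a constant stopping power on a uniform mesh; the paper's scheme additionally handles distorted cylindrical Lagrangian cells and particle relocation after rezoning, which this omits):

```python
def track_particle(E0, i0, mu, cells, dx, stopping_power):
    """Straight-line transport across a uniform 1-D mesh, depositing energy
    via the continuous-slowing-down approximation (CSDA)."""
    edep = [0.0] * cells
    E, i = E0, i0
    chord = dx / abs(mu)                # path length through one cell
    step = 1 if mu > 0 else -1
    while E > 0.0 and 0 <= i < cells:
        dE = min(E, stopping_power(E) * chord)   # cannot deposit more than it carries
        edep[i] += dE
        E -= dE
        i += step
    return edep, E                      # E > 0 only if the particle escaped the mesh

# 3.5 MeV alpha-like particle, constant stopping power of 1 MeV/cm on 0.1 cm cells
edep, E_escaped = track_particle(E0=3.5, i0=0, mu=1.0, cells=50, dx=0.1,
                                 stopping_power=lambda E: 1.0)
```

Energy is conserved by construction: the deposited total plus any escaping energy equals the birth energy, and here the particle ranges out at 3.5 cm, well inside the mesh.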
A Monte Carlo method using octree structure in photon and electron transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogawa, K.; Maeda, S.
Most of the early Monte Carlo calculations in medical physics were used to calculate absorbed dose distributions, and detector responses and efficiencies. Recently, data acquisition in Single Photon Emission CT (SPECT) has been simulated by a Monte Carlo method to evaluate scatter photons generated in a human body and a collimator. Monte Carlo simulations of SPECT data acquisition are generally based on the transport of photons only, because the photons being simulated are low energy and the bremsstrahlung production by the generated electrons is therefore negligible. Since the transport calculation of photons without electrons is much simpler than that with electrons, it is possible to accomplish high-speed simulation in a simple object with one medium. Here, object description is important for performing photon and/or electron transport with a Monte Carlo method efficiently. The authors propose a new description method using an octree representation of an object. Even if the boundaries of each medium are represented accurately, high-speed calculation of photon transport can be accomplished because the number of octree nodes is much smaller than the number of voxels in a voxel-based approach, which represents an object as a union of voxels of the same size. This Monte Carlo code using the octree representation of an object first establishes the simulation geometry by reading an octree string, which is produced by forming an octree structure from a set of serial sections of the object before the simulation; it then transports photons in this geometry. Using the code, if the user simply prepares a set of serial sections of the object in which photon trajectories are to be simulated, the simulation can be performed automatically using the suboptimal geometry simplified by the octree representation, without forming the optimal geometry by hand.
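A minimal sketch of octree point-location, the operation such a code performs when deciding which medium a photon is in, follows. The node layout and media names are invented for illustration; the actual code builds its octree from serial sections of the object.

```python
class OctreeNode:
    """A leaf stores a medium id; an internal node has 8 children splitting its cube."""
    def __init__(self, medium=None, children=None):
        self.medium = medium
        self.children = children  # list of 8 OctreeNode, or None for a leaf

def locate_medium(root, x, y, z):
    """Descend the octree to find the medium at point (x, y, z) in the unit cube.
    Large homogeneous regions are single leaves, so a lookup touches far fewer
    nodes than a uniform voxel grid of the same resolution would."""
    node, cx, cy, cz, half = root, 0.5, 0.5, 0.5, 0.5
    while node.children is not None:
        half *= 0.5
        ix, iy, iz = int(x >= cx), int(y >= cy), int(z >= cz)
        node = node.children[ix + 2 * iy + 4 * iz]
        cx += half if ix else -half
        cy += half if iy else -half
        cz += half if iz else -half
    return node.medium
```

A two-media geometry, for example, needs only one subdivided node: one octant holds a "water" leaf and the remaining seven share an "air" leaf.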
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo; Sterpin, Edmond
2016-04-15
Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the latest generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, J
Purpose: This study evaluated the efficiency of 4D lung radiation treatment planning using Monte Carlo simulation on the cloud. The EGSnrc Monte Carlo code was used in dose calculation on the 4D-CT image set. Methods: The 4D lung radiation treatment plan was created with DOSCTP linked to the cloud, based on the Amazon Elastic Compute Cloud platform. Dose calculation was carried out by Monte Carlo simulation on the 4D-CT image set on the cloud, and results were sent to the FFD4D image deformation program for dose reconstruction. The dependence of computing time for the treatment plan on the number of compute nodes was optimized with variations of the number of CT image sets in the breathing cycle and the dose reconstruction time of the FFD4D. Results: It is found that the dependence of computing time on the number of compute nodes was affected by the diminishing return of the number of nodes used in Monte Carlo simulation. Moreover, the performance of the 4D treatment planning could be optimized by using fewer than 10 compute nodes on the cloud. The effects of the number of image sets and dose reconstruction time on the dependence of computing time on the number of nodes were not significant when more than 15 compute nodes were used in Monte Carlo simulations. Conclusion: The issue of long computing time in 4D treatment planning, which requires Monte Carlo dose calculations in all CT image sets in the breathing cycle, can be solved using cloud computing technology. It is concluded that the optimized number of compute nodes selected in simulation should be between 5 and 15, as the dependence of computing time on the number of nodes is significant.
NASA Astrophysics Data System (ADS)
Caporali, E.; Chiarello, V.; Galeati, G.
2014-12-01
Peak discharge estimates for a given return period are of primary importance in engineering practice for risk assessment and hydraulic structure design. Different statistical methods are chosen here for the assessment of the flood frequency curve: an indirect technique based on extreme rainfall event analysis, and the Peak Over Threshold (POT) model and the Annual Maxima approach as direct techniques using river discharge data. In the framework of the indirect method, a Monte Carlo simulation approach is adopted to determine a derived frequency distribution of peak runoff using a probabilistic formulation of the SCS-CN method as the stochastic rainfall-runoff model. A Monte Carlo simulation is used to generate a sample of runoff events from different stochastic combinations of rainfall depth, storm duration, and initial loss inputs. The distribution of rainfall storm events is assumed to follow the GP law, whose parameters are estimated from the GEV parameters of the annual maximum data. The evaluation of the initial abstraction ratio is investigated, since it is one of the most questionable assumptions in the SCS-CN model and plays a key role in river basins characterized by high-permeability soils, mainly governed by the infiltration excess mechanism. In order to take into account the uncertainty of the model parameters, a modified approach that is able to revise and re-evaluate the original value of the initial abstraction ratio is implemented. In the POT model, the choice of the threshold is an essential issue, mainly based on a compromise between bias and variance. The Generalized Extreme Value (GEV) distribution fitted to the annual maxima discharges is therefore compared with the Pareto distributed peaks to check the suitability of the frequency of occurrence representation. The methodology is applied to a large dam in the Serchio river basin, located in the Tuscany Region.
The application has shown that the Monte Carlo simulation technique can be a useful tool, providing a more robust estimation than the results obtained by direct statistical methods.
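The indirect chain described above (stochastic rainfall in, derived peak-runoff distribution out) can be sketched as follows. The SCS-CN relation Q = (P - Ia)^2 / (P - Ia + S), with Ia = lambda * S and S = 25400/CN - 254 (mm), is the standard form; the GP parameters, threshold, and curve number below are illustrative, not the study's calibrated values.

```python
import math
import random

def scs_cn_runoff(p, cn, lam=0.2):
    """SCS-CN direct runoff (mm) for rainfall depth p (mm)."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = lam * s               # initial abstraction
    if p <= ia:
        return 0.0
    return (p - ia) ** 2 / (p - ia + s)

def sample_gp(xi, sigma, u, rng):
    """Inverse-CDF sample of a Generalized Pareto rainfall depth above threshold u."""
    r = rng.random()
    if abs(xi) < 1e-12:  # xi -> 0 reduces to the exponential case
        return u - sigma * math.log(1.0 - r)
    return u + sigma * ((1.0 - r) ** (-xi) - 1.0) / xi

# Monte Carlo: an empirical distribution of runoff from stochastic rainfall
rng = random.Random(0)
runoffs = sorted(scs_cn_runoff(sample_gp(0.1, 30.0, 20.0, rng), cn=75)
                 for _ in range(10000))
q99 = runoffs[int(0.99 * len(runoffs))]  # runoff depth exceeded in 1% of events
```

Quantiles of the sorted `runoffs` sample then play the role of the derived flood frequency curve.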
Rotational Brownian Dynamics simulations of clathrin cage formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilie, Ioana M.; Briels, Wim J.; MESA+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede
2014-08-14
The self-assembly of nearly rigid proteins into ordered aggregates is well suited for modeling by the patchy particle approach. Patchy particles are traditionally simulated using Monte Carlo methods, to study the phase diagram, while Brownian Dynamics simulations would reveal insights into the assembly dynamics. However, Brownian Dynamics of rotating anisotropic particles gives rise to a number of complications not encountered in translational Brownian Dynamics. We thoroughly test the Rotational Brownian Dynamics scheme proposed by Naess and Elsgaeter [Macromol. Theory Simul. 13, 419 (2004); Macromol. Theory Simul. 14, 300 (2005)], confirming its validity. We then apply the algorithm to simulate a patchy particle model of clathrin, a three-legged protein involved in vesicle production from lipid membranes during endocytosis. Using this algorithm we recover time scales for cage assembly comparable to those from experiments. We also briefly discuss the undulatory dynamics of the polyhedral cage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, Jae-ik; Yoo, SeungHoon; Cho, Sungho
Purpose: A significant issue in particle therapy with protons and carbon ions is accurate dose delivery from the beam line to the patient. For designing a complex delivery system, Monte Carlo simulation can be used to model the various physical interactions in scatterers and filters. In this report, we present the development of a Monte Carlo simulation platform to help design the prototype particle therapy nozzle, with simulations performed using Geant4. We also show the prototype design of the particle therapy beam nozzle for the Korea Heavy Ion Medical Accelerator (KHIMA) project at the Korea Institute of Radiological and Medical Sciences (KIRAMS) in the Republic of Korea. Methods: We developed a simulation platform for the particle therapy beam nozzle using Geant4. In this platform, a prototype nozzle design of the scanning system for carbon ions was modeled. For comparison with theoretical beam optics, the lateral beam profile at the isocenter is compared with the Monte Carlo simulation result. From this analysis, we can predict the beam spot properties of the KHIMA system and implement spot size optimization for our spot scanning system. Results: For the characterization of the scanning system, various combinations of the spot size from the accelerator with the ridge filter and beam monitor were tested as a simple design for the KHIMA dose delivery system. Conclusion: In this report, we presented part of the simulation platform and the characterization study. This work is ongoing, with the aim of developing a simulation platform that includes the beam nozzle and a dose verification tool integrated with the treatment planning system; results will be presented as they become available.
Different methodologies to quantify uncertainties of air emissions.
Romano, Daniela; Bernetti, Antonella; De Lauretis, Riccardo
2004-10-01
Characterization of the uncertainty associated with air emission estimates is of critical importance, especially in the compilation of air emission inventories. In this paper, two different theories are discussed and applied to evaluate air emissions uncertainty. In addition to numerical analysis, which is also recommended in the framework of the United Nations Framework Convention on Climate Change guidelines with reference to Monte Carlo and Bootstrap simulation models, fuzzy analysis is also proposed. The methodologies are discussed and applied to an Italian example case study. Air concentration values are measured from two electric power plants: a coal plant consisting of two boilers, and a fuel oil plant consisting of four boilers; the pollutants considered are sulphur dioxide (SO2), nitrogen oxides (NOx), carbon monoxide (CO) and particulate matter (PM). Monte Carlo, Bootstrap and fuzzy methods have been applied to estimate the uncertainty of these data. Regarding Monte Carlo, the most accurate results apply to Gaussian distributions; a good approximation is also observed for other distributions with almost regular features, either positively or negatively asymmetrical. Bootstrap, on the other hand, gives a good uncertainty estimation for irregular and asymmetrical distributions. Fuzzy analysis follows a different logic: data are represented as vague and indefinite, in opposition to the traditional conception of neatness, certain classification and exactness of the data. Whereas numerical methods address only randomness (stochastic variability), fuzzy theory also deals with imprecision (vagueness) of the data. The fuzzy variance of the data set was calculated; the results cannot be directly compared with empirical data, but the overall performance of the theory is analysed.
Fuzzy theory may appear more suitable for qualitative reasoning than for quantitative estimation of uncertainty, but it is well suited when little information and few measurements are available and when the distributions of the data are not properly known.
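The Bootstrap procedure the authors apply can be sketched as a percentile interval on resampled means; the sample data and confidence level in the example are illustrative only.

```python
import random

def bootstrap_ci(data, n_boot=5000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the mean of a sample.
    Resampling with replacement makes no distributional assumption, which is
    why the method handles irregular and asymmetric distributions well."""
    rng = random.Random(seed)
    n = len(data)
    means = sorted(sum(rng.choice(data) for _ in range(n)) / n
                   for _ in range(n_boot))
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot)]
    return lo, hi
```

Applied to a vector of measured concentrations, the interval width is a direct, nonparametric uncertainty estimate for the emission mean.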
Proposal of a method for evaluating tsunami risk using response-surface methodology
NASA Astrophysics Data System (ADS)
Fukutani, Y.
2017-12-01
Information on probabilistic tsunami inundation hazards is needed to define and evaluate tsunami risk. Several methods for calculating these hazards have been proposed (e.g. Løvholt et al. (2012), Thio (2012), Fukutani et al. (2014), Goda et al. (2015)). However, these methods are computationally expensive, since they require multiple tsunami numerical simulations, and they therefore lack versatility. In this study, we propose a simpler method for tsunami risk evaluation using response-surface methodology. Kotani et al. (2016) proposed an evaluation method for the probabilistic distribution of tsunami wave height using response-surface methodology. We expanded their study and derived a probabilistic distribution of tsunami inundation depth. We set the depth (x1) and the slip (x2) of an earthquake fault as explanatory variables and tsunami inundation depth (y) as the response variable. Tsunami risk could then be evaluated by conducting a Monte Carlo simulation, assuming that the occurrence of an earthquake follows a Poisson distribution, the probability distribution of tsunami inundation depth follows the distribution derived from the response surface, and the damage probability of a target follows a log-normal distribution. We applied the proposed method to a wood building located on the coast of Tokyo Bay. We implemented a regression analysis based on the results of 25 tsunami numerical calculations and developed a response surface, defined as y = a*x1 + b*x2 + c (a = 0.2615, b = 3.1763, c = -1.1802). We assumed appropriate probabilistic distributions for earthquake generation, inundation height, and vulnerability. Based on these probabilistic distributions, we conducted Monte Carlo simulations spanning 1,000,000 years. We found that the expected damage probability of the studied wood building is 22.5%, given that an earthquake occurs.
The proposed method is therefore a useful and simple way to evaluate tsunami risk using a response-surface and Monte Carlo simulation without conducting multiple tsunami numerical simulations.
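The risk-evaluation loop described above can be sketched as follows. The response-surface coefficients are those reported in the abstract, but the fragility-curve parameters, fault-parameter ranges, and annual occurrence rate are illustrative assumptions, so the resulting probability is not expected to reproduce the reported 22.5%.

```python
import math
import random

A, B, C = 0.2615, 3.1763, -1.1802  # response-surface coefficients from the study

def inundation_depth(fault_depth, slip):
    """Response surface y = a*x1 + b*x2 + c (inundation depth, m)."""
    return A * fault_depth + B * slip + C

def damage_probability(depth, median=2.0, beta=0.5):
    """Log-normal fragility curve (median and beta are illustrative values)."""
    if depth <= 0.0:
        return 0.0
    z = (math.log(depth) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_damage_given_event(years=100000, annual_rate=0.01, seed=0):
    """Monte Carlo over years: Poisson-like occurrence (at most one event per
    year), random fault depth/slip, average damage probability per event."""
    rng = random.Random(seed)
    events, total = 0, 0.0
    for _ in range(years):
        if rng.random() < annual_rate:
            events += 1
            d = inundation_depth(rng.uniform(5.0, 20.0), rng.uniform(0.5, 3.0))
            total += damage_probability(d)
    return total / events if events else 0.0
```

The per-event average returned here corresponds to the conditional damage probability the study reports, under the illustrative distributions above.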
Monte Carlo Methodology Serves Up a Software Success
NASA Technical Reports Server (NTRS)
2003-01-01
Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreement between the theoretical predictions and the Monte Carlo results is observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
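The indirect route described above ends in a pixel-by-pixel Patlak fit on the TACs. A minimal sketch of that final step (trapezoidal integration of the plasma input, then a least-squares line whose slope is the net influx rate Ki and whose intercept is the volume term V) might look like this, with synthetic inputs rather than reconstructed PET data:

```python
def patlak_fit(tissue, plasma, times):
    """Fit the Patlak plot C_T(t)/C_p(t) = Ki * (int_0^t C_p dtau)/C_p(t) + V
    for one pixel's time-activity curve; plasma values must be nonzero."""
    integral, cum = [0.0], 0.0
    for i in range(1, len(times)):  # cumulative trapezoidal integral of C_p
        cum += 0.5 * (plasma[i] + plasma[i - 1]) * (times[i] - times[i - 1])
        integral.append(cum)
    xs = [integral[i] / plasma[i] for i in range(len(times))]
    ys = [tissue[i] / plasma[i] for i in range(len(times))]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    ki = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return ki, my - ki * mx  # slope Ki, intercept V
```

With a constant plasma input and a tissue curve built from known Ki and V, the fit recovers both parameters exactly, which makes the routine easy to sanity-check before applying it to noisy TACs.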
NASA Astrophysics Data System (ADS)
Long, Ruiqi; McShane, Mike
2010-03-01
Dermally implanted luminescent sensors have been proposed for monitoring of tissue biochemistry, which has the potential to improve treatments for conditions such as diabetes and kidney failure. Effective in vivo monitoring via noninvasive transdermal measurement of emission from injected microparticles requires a matched optoelectronic system for excitation and collection of luminescence. We applied Monte Carlo modeling to predict the characteristics of output luminescence from microparticles in skin to facilitate hardware design. Three-dimensional, multiwavelength Monte Carlo simulations were used to determine the spatial and spectral distribution of the escaping luminescence for different implantation depths, excitation light source properties, particle characteristics, and particle packing density. Results indicate that the ratio of output emission to input excitation power ranged from 10^-3 to 10^-6 for sensors at the upper and lower dermal boundaries, respectively, and 95% of the escaping emission photons induced by a 10-mm-diam excitation beam were confined within an 18-mm circle. Tightly packed sensor configurations yielded higher output intensity with fewer particles, even after luminophore concentration effects were removed. Most importantly, for the visible wavelengths studied, the ability to measure spectral changes in emission due to glucose changes was not significantly affected by absorption and scattering of tissue, which supports the potential to accurately track changes in luminescence of sensor implants that respond to the biochemistry of the skin.
Wu, Yunzhao; Tang, Zesheng
2014-01-01
In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. The existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from the Interference Imaging spectrometer (IIM) on an orbiter were affected not only by the composition of minerals but also by environmental factors. These factors cannot be well addressed by a single model alone. Our method implements Monte Carlo ray tracing to simulate large-scale effects, such as the reflection due to the topography of the lunar soil, and Hapke's model to calculate the reflection intensity arising from the internal scattering effects of lunar soil particles. Therefore, both the large-scale and microscale effects are considered in our method, providing a more accurate modeling of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and the Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data to remove the influence of lunar topography on the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface. PMID:24526892
MO-FG-BRA-01: 4D Monte Carlo Simulations for Verification of Dose Delivered to a Moving Anatomy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gholampourkashi, S; Cygler, J E.; The Ottawa Hospital Cancer Centre, Ottawa, ON
Purpose: To validate 4D Monte Carlo (MC) simulations of dose delivery by an Elekta Agility linear accelerator to a moving phantom. Methods: Monte Carlo simulations were performed using the 4DdefDOSXYZnrc/EGSnrc user code, which samples a new geometry for each incident particle and calculates the dose in a continuously moving anatomy. A Quasar respiratory motion phantom with a lung insert containing a 3 cm diameter tumor was used for dose measurements on an Elekta Agility linac with the phantom in stationary and moving states. Dose to the center of the tumor was measured using calibrated EBT3 film and the RADPOS 4D dosimetry system. A VMAT plan covering the tumor was created on the static CT scan of the phantom using Monaco V.5.10.02. A validated BEAMnrc model of our Elekta Agility linac was used for Monte Carlo simulations on stationary and moving anatomies. To compare the planned and delivered doses, linac log files recorded during measurements were used for the simulations. For 4D simulations, deformation vectors that modeled the rigid translation of the lung insert were generated as input to the 4DdefDOSXYZnrc code, as well as the phantom motion trace recorded with RADPOS during the measurements. Results: Monte Carlo simulations and film measurements were found to agree within 2 mm/2% for 97.7% of points in the film in the static phantom and 95.5% in the moving phantom. Dose values based on film and RADPOS measurements are within 2% of each other and within 2σ of experimental uncertainties with respect to simulations. Conclusion: Our 4D Monte Carlo simulation using the defDOSXYZnrc code accurately calculates dose delivered to a moving anatomy. Future work will focus on further investigation of VMAT delivery on a moving phantom to improve the agreement between simulation and measurements, as well as establishing the accuracy of our method in a deforming anatomy.
This work was supported by the Ontario Consortium of Adaptive Interventions in Radiation Oncology (OCAIRO), funded by the Ontario Research Fund Research Excellence program.
Experimental validation of the TOPAS Monte Carlo system for passive scattering proton therapy
Testa, M.; Schümann, J.; Lu, H.-M.; Shin, J.; Faddegon, B.; Perl, J.; Paganetti, H.
2013-01-01
Purpose: TOPAS (TOol for PArticle Simulation) is a particle simulation code recently developed with the specific aim of making Monte Carlo simulations user-friendly for research and clinical physicists in the particle therapy community. The authors present a thorough and extensive experimental validation of Monte Carlo simulations performed with TOPAS in a variety of setups relevant for proton therapy applications. The set of validation measurements performed in this work represents an overall end-to-end testing strategy recommended for all clinical centers planning to rely on TOPAS for quality assurance or patient dose calculation and, more generally, for all the institutions using passive-scattering proton therapy systems. Methods: The authors systematically compared TOPAS simulations with measurements that are performed routinely within the quality assurance (QA) program in our institution as well as experiments specifically designed for this validation study. First, the authors compared TOPAS simulations with measurements of depth-dose curves for spread-out Bragg peak (SOBP) fields. Second, absolute dosimetry simulations were benchmarked against measured machine output factors (OFs). Third, the authors simulated and measured 2D dose profiles and analyzed the differences in terms of field flatness and symmetry and usable field size. Fourth, the authors designed a simple experiment using a half-beam shifter to assess the effects of multiple Coulomb scattering, beam divergence, and inverse square attenuation on lateral and longitudinal dose profiles measured and simulated in a water phantom. Fifth, TOPAS' capability to simulate time-dependent beam delivery was benchmarked against dose rate functions (i.e., dose per unit time vs time) measured at different depths inside an SOBP field.
Sixth, simulations of the charge deposited by protons fully stopping in two different types of multilayer Faraday cups (MLFCs) were compared with measurements to benchmark the nuclear interaction models used in the simulations. Results: SOBP range and modulation width were reproduced, on average, with an accuracy of +1, −2 and ±3 mm, respectively. OF simulations reproduced measured data within ±3%. Simulated 2D dose profiles show field flatness and average field radius within ±3% of measured profiles. The field symmetry resulted, on average, in ±3% agreement with commissioned profiles. TOPAS accuracy in reproducing measured dose profiles downstream of the half-beam shifter is better than 2%. Dose rate function simulations reproduced the measurements within ∼2%, showing that the four-dimensional modeling of the passive modulation system was implemented correctly and that millimeter accuracy can be achieved in reproducing measured data. For MLFC simulations, 2% agreement was found between TOPAS and both sets of experimental measurements. The overall results show that TOPAS simulations are within the clinically accepted tolerances for all QA measurements performed at our institution. Conclusions: Our Monte Carlo simulations accurately reproduced the experimental data acquired through all the measurements performed in this study. Thus, TOPAS can reliably be applied to quality assurance for proton therapy and also as an input for commissioning of commercial treatment planning systems. This work also provides the basis for routine clinical dose calculations in patients for all passive scattering proton therapy centers using TOPAS. PMID:24320505
Effect of the multiple scattering of electrons in Monte Carlo simulation of LINACS.
Vilches, Manuel; García-Pareja, Salvador; Guerrero, Rafael; Anguiano, Marta; Lallena, Antonio M
2008-01-01
Results obtained from Monte Carlo simulations of the transport of electrons in thin slabs of dense material media and air slabs with different widths are analyzed. Various general purpose Monte Carlo codes have been used: PENELOPE, GEANT3, GEANT4, EGSNRC, MCNPX. Non-negligible differences between the angular and radial distributions after the slabs have been found. The effects of these differences on the depth doses measured in water are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, Andreu; Badano, Aldo
Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Ellis; Derek Gaston; Benoit Forget
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to the unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
Prytkova, Vera; Heyden, Matthias; Khago, Domarin; Freites, J Alfredo; Butts, Carter T; Martin, Rachel W; Tobias, Douglas J
2016-08-25
We present a novel multi-conformation Monte Carlo simulation method that enables the modeling of protein-protein interactions and aggregation in crowded protein solutions. This approach is relevant to a molecular-scale description of realistic biological environments, including the cytoplasm and the extracellular matrix, which are characterized by high concentrations of biomolecular solutes (e.g., 300-400 mg/mL for proteins and nucleic acids in the cytoplasm of Escherichia coli). Simulation of such environments necessitates the inclusion of a large number of protein molecules. Therefore, computationally inexpensive methods, such as rigid-body Brownian dynamics (BD) or Monte Carlo simulations, can be particularly useful. However, as we demonstrate herein, the rigid-body representation typically employed in simulations of many-protein systems gives rise to certain artifacts in protein-protein interactions. Our approach allows us to incorporate molecular flexibility in Monte Carlo simulations at low computational cost, thereby eliminating ambiguities arising from structure selection in rigid-body simulations. We benchmark and validate the methodology using simulations of hen egg white lysozyme in solution, a well-studied system for which extensive experimental data, including osmotic second virial coefficients, small-angle scattering structure factors, and multiple structures determined by X-ray and neutron crystallography and solution NMR, as well as rigid-body BD simulation results, are available for comparison.
Relaxation mode analysis of a peptide system: comparison with principal component analysis.
Mitsutake, Ayori; Iijima, Hiromitsu; Takano, Hiroshi
2011-10-28
This article reports the first attempt to apply the relaxation mode analysis method to a simulation of a biomolecular system. In biomolecular systems, principal component analysis is a well-known method for analyzing the static properties of structural fluctuations obtained from a simulation and for classifying the structures into groups. Relaxation mode analysis, on the other hand, has been used to analyze the dynamic properties of homopolymer systems. In this article, a long Monte Carlo simulation of Met-enkephalin in the gas phase has been performed. The results are analyzed by the principal component analysis and relaxation mode analysis methods. We compare the results of both methods and show the effectiveness of the relaxation mode analysis.
Simulation of the Simbol-X Telescope
NASA Astrophysics Data System (ADS)
Chauvin, M.; Roques, J. P.
2009-05-01
We have developed a simulation tool for a Wolter I telescope operating in formation flight. The aim is to understand and predict the behavior of the Simbol-X instrument. As the geometry is variable, formation flight introduces new challenges and complex implications. Our code, based on Monte Carlo ray tracing, computes the full photon trajectories up to the detector plane, along with the relative drifts of the two spacecraft. It takes into account angle- and energy-dependent interactions of the photons with the mirrors and applies to any grazing-incidence telescope. The resulting images of simulated sources from 0.1 keV to 100 keV allow us to optimize the configuration of the instrument and to assess the performance of the Simbol-X telescope.
DQE simulation of a-Se x-ray detectors using ARTEMIS
NASA Astrophysics Data System (ADS)
Fang, Yuan; Badano, Aldo
2016-03-01
Detective Quantum Efficiency (DQE) is one of the most important image quality metrics for evaluating the spatial resolution performance of flat-panel x-ray detectors. In this work, we simulate the DQE of amorphous selenium (a-Se) x-ray detectors with a detailed Monte Carlo transport code (ARTEMIS) for modeling semiconductor-based direct x-ray detectors. The transport of electron-hole pairs is achieved with a spatiotemporal model that accounts for recombination and trapping of carriers and Coulombic effects of space charge and the external applied electric field. A range of x-ray energies from 10 to 100 keV has been simulated. The DQE results can be used to study the spatial resolution characteristics of detectors at different energies.
NASA Technical Reports Server (NTRS)
Queen, Eric M.; Omara, Thomas M.
1990-01-01
A realization of a stochastic atmosphere model for use in simulations is presented. The model provides pressure, density, temperature, and wind velocity as functions of latitude, longitude, and altitude, and is implemented in a three-degree-of-freedom simulation package. This implementation is used in a Monte Carlo simulation of an aeroassisted orbital transfer maneuver, and the results are compared to those of a more traditional approach.
Gorshkov, Anton V; Kirillin, Mikhail Yu
2015-08-01
Over the past two decades, the Monte Carlo technique has become a gold standard for simulating light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of an MC simulation, with a speed-up comparable to that of a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
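The kernel of such a code is an independent per-photon random walk, which is why it parallelizes so naturally onto accelerators. A minimal single-threaded sketch of the idea, assuming a 1-D slab with isotropic scattering and invented optical coefficients (not the authors' tissue model):

```python
import math
import random

def simulate_photons(n_photons, mu_a=0.1, mu_s=10.0, thickness=1.0, seed=1):
    """Toy Monte Carlo photon transport in a 1-D absorbing/scattering slab.

    mu_a, mu_s: absorption and scattering coefficients (1/cm); scattering
    is isotropic. Returns the fraction of launched photons absorbed.
    Illustrative only: real tissue codes use anisotropic phase functions
    and 3-D geometry.
    """
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    absorbed = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0                       # depth and direction cosine
        while 0.0 <= z <= thickness:
            step = -math.log(rng.random()) / mu_t   # sampled free path
            z += uz * step
            if not (0.0 <= z <= thickness):
                break                          # photon escaped the slab
            if rng.random() < mu_a / mu_t:
                absorbed += 1                  # interaction was absorption
                break
            uz = 2.0 * rng.random() - 1.0      # isotropic rescatter
    return absorbed / n_photons

frac_absorbed = simulate_photons(20000)
```

Each photon history is independent, so distributing the outer loop over accelerator threads (with per-thread RNG streams) is the essence of the porting exercise described above.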
ME(SSY)**2: Monte Carlo Code for Star Cluster Simulations
NASA Astrophysics Data System (ADS)
Freitag, Marc Dewi
2013-02-01
ME(SSY)**2 stands for "Monte-carlo Experiments with Spherically SYmmetric Stellar SYstems." This code simulates the long-term evolution of spherical clusters of stars; it was devised specifically to treat dense galactic nuclei. It is based on the pioneering Monte Carlo scheme proposed by Hénon in the 1970s and includes all relevant physical ingredients (two-body relaxation, stellar mass spectrum, collisions, tidal disruption, etc.). It is basically a Monte Carlo resolution of the Fokker-Planck equation. It can cope with any stellar mass spectrum or velocity distribution. Being a particle-based method, it also allows one to take stellar collisions into account in a very realistic way. This unique code, featuring most important physical processes, allows million-particle simulations spanning a Hubble time in a few CPU days on standard personal computers, and provides a wealth of data rivaled only by N-body simulations. The current version of the software requires the use of routines from "Numerical Recipes in Fortran 77" (http://www.nrbook.com/a/bookfpdf.php).
Data decomposition of Monte Carlo particle transport simulations via tally servers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
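The viability of the scheme hinges on the ratio of per-particle communication time to per-particle compute time. A back-of-the-envelope version of such a performance model (the functional form, parameter names, and sample numbers below are assumptions for illustration, not the paper's calibrated model):

```python
def tally_overhead(mu, d, c, bandwidth, latency):
    """Estimated relative overhead of shipping tally scores to a server.

    mu: mean scoring events per particle
    d:  bytes sent per scoring event
    c:  mean compute (tracking) time per particle, in seconds
    bandwidth (bytes/s), latency (s): network parameters

    Overhead = per-particle communication time / compute time.
    """
    t_comm = latency + mu * d / bandwidth
    return t_comm / c

# Hypothetical LWR-analysis-scale numbers: 100 scores per particle,
# 24 bytes each, 0.2 ms tracking time, 5 GB/s links, 1 us latency.
ovh = tally_overhead(mu=100, d=24, c=2e-4, bandwidth=5e9, latency=1e-6)
```

With numbers of this order the overhead comes out well under a percent, which is the qualitative sense in which a tally-server decomposition can be "minimal overhead" on modern interconnects.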
Hypothesis testing of scientific Monte Carlo calculations.
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
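As a concrete illustration of the idea (a sketch, not code from the paper): estimate a quantity with a known reference value, form a z-statistic from the Monte Carlo standard error, and fail the test only when the p-value drops below a preset threshold, accepting a known false-positive rate instead of a brittle fixed tolerance:

```python
import math
import random

def mc_pi(n, seed=0):
    """Monte Carlo estimate of pi with its binomial standard error."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n))
    p_hit = hits / n
    est = 4.0 * p_hit
    stderr = 4.0 * math.sqrt(p_hit * (1.0 - p_hit) / n)
    return est, stderr

def two_sided_p(estimate, stderr, reference):
    """p-value under H0: the estimator is unbiased for the reference."""
    z = abs(estimate - reference) / stderr
    return math.erfc(z / math.sqrt(2.0))   # two-sided normal tail

est, se = mc_pi(100_000)
p_value = two_sided_p(est, se, math.pi)
```

A test harness would then assert, say, `p_value > 1e-3`, so that a correct simulation fails only about once in a thousand runs while genuine bugs (bias, wrong variance) are caught reliably.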
Random number generators for large-scale parallel Monte Carlo simulations on FPGA
NASA Astrophysics Data System (ADS)
Lin, Y.; Wang, F.; Liu, B.
2018-05-01
Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, along with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
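For reference, the serial recurrence behind an ALFG is only a few lines. A hedged Python sketch with small lags (5, 17) chosen for brevity (production FPGA versions use much larger lags and per-stream seeding to get independent parallel streams):

```python
import random
from collections import deque

class ALFG:
    """Additive lagged Fibonacci generator:
        x[n] = (x[n-j] + x[n-k]) mod 2^m,  with lags j < k.
    At least one word of the seed state must be odd to reach the
    generator's full period.
    """
    def __init__(self, seed_state, j=5, k=17, m=32):
        assert len(seed_state) == k and j < k
        self.j, self.k, self.mask = j, k, (1 << m) - 1
        self.state = deque(seed_state, maxlen=k)

    def next(self):
        x = (self.state[-self.j] + self.state[-self.k]) & self.mask
        self.state.append(x)   # maxlen drops the oldest lag word
        return x

seeder = random.Random(42)
gen = ALFG([seeder.getrandbits(32) | 1 for _ in range(17)])  # all-odd seed
vals = [gen.next() for _ in range(1000)]
```

On an FPGA the attraction is that the recurrence needs only one integer add per output and a small shift-register state, both of which map directly onto hardware; the parallel variant splits the lagged state so that several outputs advance per clock.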
Split Orthogonal Group: A Guiding Principle for Sign-Problem-Free Fermionic Simulations
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Ye-Hua; Iazzi, Mauro; Troyer, Matthias; Harcos, Gergely
2015-12-01
We present a guiding principle for designing fermionic Hamiltonians and quantum Monte Carlo (QMC) methods that are free from the infamous sign problem by exploiting the Lie groups and Lie algebras that appear naturally in the Monte Carlo weight of fermionic QMC simulations. Specifically, rigorous mathematical constraints on the determinants involving matrices that lie in the split orthogonal group provide a guideline for sign-free simulations of fermionic models on bipartite lattices. This guiding principle not only unifies the recent solutions of the sign problem based on the continuous-time quantum Monte Carlo methods and the Majorana representation, but also suggests new efficient algorithms to simulate physical systems that were previously prohibitive because of the sign problem.
Microdome-grooved Gd(2)O(2)S:Tb scintillator for flexible and high resolution digital radiography.
Jung, Phill Gu; Lee, Chi Hoon; Bae, Kong Myeong; Lee, Jae Min; Lee, Sang Min; Lim, Chang Hwy; Yun, Seungman; Kim, Ho Kyung; Ko, Jong Soo
2010-07-05
A flexible microdome-grooved Gd(2)O(2)S:Tb scintillator is simulated, fabricated, and characterized for digital radiography applications. According to Monte Carlo simulation results, the dome-grooved structure has a high spatial resolution, which is verified by X-ray image performance of the scintillator. The proposed scintillator has lower X-ray sensitivity than a nonstructured scintillator but almost two times higher spatial resolution at high spatial frequency. Through evaluation of the X-ray performance of the fabricated scintillators, we confirm that the microdome-grooved scintillator can be applied to next-generation flexible digital radiography systems requiring high spatial resolution.
N2 Temperature of Vibration instrument for sounding rocket observation in the lower thermosphere
NASA Astrophysics Data System (ADS)
Kurihara, J.; Iwagami, N.; Oyama, K.-I.
2013-11-01
The N2 Temperature of Vibration (NTV) instrument was developed to study the energetics and structure of the lower thermosphere by applying the Electron Beam Fluorescence (EBF) technique to measurements of the vibrational temperature, rotational temperature and number density of atmospheric N2. Sounding rocket experiments using this instrument have been conducted four times, including one failure of the electron gun. Aerodynamic effects on the measurement caused by the supersonic motion of the rocket were analyzed quantitatively using a three-dimensional Direct Simulation Monte Carlo (DSMC) simulation, and the absolute density profile was obtained through correction of the spin modulation.
NASA Astrophysics Data System (ADS)
Zakhnini, Abdelhamid; Kulenkampff, Johannes; Sauerzapf, Sophie; Pietrzyk, Uwe; Lippmann-Pipke, Johanna
2013-08-01
Understanding conservative fluid flow and reactive tracer transport in soils and rock formations requires quantitative transport visualization methods in 3D+t. After a decade of research and development, we established GeoPET as a non-destructive method with unrivalled sensitivity and selectivity and with adequate spatial and temporal resolution, by applying Positron Emission Tomography (PET), a nuclear medicine imaging method, to dense rock material. Requirements for reaching the physical limit of image resolution of nearly 1 mm are (a) a high-resolution PET camera, like our ClearPET scanner (Raytest), and (b) appropriate correction methods for scatter and attenuation of 511 keV photons in the dense geological material. The latter are far more significant in dense geological material than in human and small-animal body tissue (water). Here we present data from Monte Carlo simulations (MCS) reflecting selected GeoPET experiments. The MCS consider all nuclear physical processes involved in the measurement with the ClearPET system and allow us to quantify the sensitivity of the method and the scatter fractions in geological media as a function of material (quartz, Opalinus clay and anhydrite, compared to water), PET isotope (18F, 58Co and 124I), and geometric system parameters. The synthetic data sets obtained by MCS are the basis for detailed performance assessment studies allowing for image quality improvements. A scatter correction method is applied exemplarily by subtracting projections of simulated scattered coincidences from experimental data sets prior to image reconstruction with an iterative reconstruction process.
Deterministic theory of Monte Carlo variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueki, T.; Larsen, E.W.
1996-12-31
The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.
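The flavor of the problem can be seen in a toy version of the exponential transform (a sketch only, without Dwivedi's angular biasing): stretch the sampled free path, attach the compensating weight, and compare the empirical variance against analog Monte Carlo:

```python
import math
import random

def transmission(n, sigma_t=1.0, thickness=5.0, p=0.0, seed=2):
    """Estimate transmission through a purely absorbing slab.

    Free paths are drawn from a stretched exponential with biased
    cross-section sigma_t*(1-p); p = 0 recovers analog Monte Carlo.
    Each transmitted particle scores the constant weight that keeps the
    estimator unbiased:
        w = exp(-sigma_t*T) / exp(-sigma_b*T).
    Returns (mean score, empirical per-history variance).
    """
    rng = random.Random(seed)
    sigma_b = sigma_t * (1.0 - p)                    # biased cross-section
    w = math.exp(-(sigma_t - sigma_b) * thickness)   # weight if transmitted
    scores = []
    for _ in range(n):
        s = -math.log(rng.random()) / sigma_b        # biased free path
        scores.append(w if s > thickness else 0.0)
    mean = sum(scores) / n
    var = sum((x - mean) ** 2 for x in scores) / n
    return mean, var

m_analog, v_analog = transmission(20000, p=0.0)
m_biased, v_biased = transmission(20000, p=0.5)
```

Both estimators agree with the exact transmission exp(-5), but the biased one trades many low-weight transmissions for the rare unit-weight ones, cutting the per-history variance by roughly an order of magnitude here. Predicting that variance deterministically, rather than measuring it, is what the theory in the abstract addresses.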
Efficient Monte Carlo Methods for Biomolecular Simulations.
NASA Astrophysics Data System (ADS)
Bouzida, Djamal
A new approach to efficient Monte Carlo simulations of biological molecules is presented. By relaxing the usual restriction to Markov processes, we are able to optimize performance while dealing directly with the inhomogeneity and anisotropy inherent in these systems. The advantage of this approach is that we can introduce a wide variety of Monte Carlo moves to deal with complicated motions of the molecule, while maintaining full optimization at every step. This enables the use of a variety of collective rotational moves that relax long-wavelength modes. We were able to show by explicit simulations that the resulting algorithms substantially increase the speed of the simulation while reproducing the correct equilibrium behavior. This approach is particularly intended for simulations of macromolecules, although we expect it to be useful in other situations. The dynamic optimization of the new Monte Carlo methods makes them very suitable for simulated annealing experiments on systems whose state space is continuous in general, and on the protein folding problem in particular. We introduce an efficient annealing schedule using preferential bias moves. Our simulated annealing experiments yield structures whose free energies are lower than that of the equilibrated X-ray structure, which leads us to believe that the empirical energy function used does not fully represent the interatomic interactions. Furthermore, we believe that the largest discrepancies involve the solvent effects in particular.
Experimental benchmarking of a Monte Carlo dose simulation code for pediatric CT
NASA Astrophysics Data System (ADS)
Li, Xiang; Samei, Ehsan; Yoshizumi, Terry; Colsher, James G.; Jones, Robert P.; Frush, Donald P.
2007-03-01
In recent years, there has been a desire to reduce CT radiation dose to children because of their susceptibility and prolonged risk for cancer induction. Concerns arise, however, as to the impact of dose reduction on image quality and thus potentially on diagnostic accuracy. To study the dose and image quality relationship, we are developing a simulation code to calculate organ dose in pediatric CT patients. To benchmark this code, a cylindrical phantom was built to represent a pediatric torso, which allows measurements of dose distributions from its center to its periphery. Dose distributions for axial CT scans were measured on a 64-slice multidetector CT (MDCT) scanner (GE Healthcare, Chalfont St. Giles, UK). The same measurements were simulated using a Monte Carlo code (PENELOPE, Universitat de Barcelona) with the applicable CT geometry including bowtie filter. The deviations between simulated and measured dose values were generally within 5%. To our knowledge, this work is one of the first attempts to compare measured radial dose distributions on a cylindrical phantom with Monte Carlo simulated results. It provides a simple and effective method for benchmarking organ dose simulation codes and demonstrates the potential of Monte Carlo simulation for investigating the relationship between dose and image quality for pediatric CT patients.
Local normalization: Uncovering correlations in non-stationary financial time series
NASA Astrophysics Data System (ADS)
Schäfer, Rudi; Guhr, Thomas
2010-09-01
The measurement of correlations between financial time series is of vital importance for risk management. In this paper we address an estimation error that stems from the non-stationarity of the time series. We put forward a method to rid the time series of local trends and variable volatility, while preserving cross-correlations. We test this method in a Monte Carlo simulation, and apply it to empirical data for the S&P 500 stocks.
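In outline, local normalization subtracts a running mean and divides by a running standard deviation at each point, so that slow trends and volatility changes drop out while short-range structure survives. A minimal sketch assuming a simple trailing window (the paper's exact windowing choices may differ):

```python
import statistics

def locally_normalize(series, window=13):
    """Locally normalize a time series.

    At each point, subtract the mean and divide by the standard
    deviation of the trailing `window` samples. This removes local
    trends and variable volatility while leaving short-range
    cross-correlations between series largely intact. The window
    length is an assumption; in practice it is tuned to the data.
    """
    out = []
    for t in range(len(series)):
        lo = max(0, t - window + 1)
        w = series[lo:t + 1]
        mu = statistics.fmean(w)
        sd = statistics.pstdev(w) or 1.0   # guard against constant windows
        out.append((series[t] - mu) / sd)
    return out

normed = locally_normalize(list(range(100)))
```

On the linear trend above, every point after the window fills maps to the same constant, showing that a drifting mean no longer masquerades as correlation; applied to two price series, one would locally normalize each before computing their cross-correlation.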
Andrew D. Richardson; David Y. Hollinger; David Y. Hollinger
2005-01-01
Whether the goal is to fill gaps in the flux record, or to extract physiological parameters from eddy covariance data, researchers are frequently interested in fitting simple models of ecosystem physiology to measured data. Presently, there is no consensus on the best models to use, or the ideal optimization criteria. We demonstrate that, given our estimates of the...
Topics in computational physics
NASA Astrophysics Data System (ADS)
Monville, Maura Edelweiss
Computational physics spans a broad range of applied fields extending beyond the borders of traditional physics tracks. Demonstrated flexibility, and the capability to switch to a new project and quickly pick up the basics of the new field, are among the essential requirements for a computational physicist. In line with the above-mentioned prerequisites, this thesis describes the development and results of two computational projects belonging to two different applied science areas. The first project is a materials science application: a prescription for an innovative nano-fabrication technique built out of two other known techniques. The preliminary results of the simulation of this novel nano-patterning fabrication method show an average improvement of roughly 18% with respect to the single techniques it draws on. The second project is a homeland security application aimed at preventing smuggling of nuclear material at ports of entry. It is concerned with a simulation of an active material interrogation system based on the analysis of induced photo-nuclear reactions. This project consists of a preliminary evaluation of the photo-fission implementation in the more robust radiation transport Monte Carlo codes, followed by the customization and extension of MCNPX, a Monte Carlo code developed at Los Alamos National Laboratory, and MCNP-PoliMi. The final stage of the project consists of testing the interrogation system against some real-world scenarios, for the purpose of determining the system's reliability, material discrimination power, and limitations.
Accelerated Monte Carlo Methods for Coulomb Collisions
NASA Astrophysics Data System (ADS)
Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce
2014-03-01
We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ɛ in the numerical solution of collisional plasma problems from O(ɛ^-3), for the standard state-of-the-art Langevin and binary collision algorithms, to a theoretically optimal O(ɛ^-2) scaling, when used in conjunction with an underlying Milstein discretization of the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
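The telescoping-sum structure of MLMC is easy to sketch on a toy SDE standing in for the Langevin collision model (Euler rather than Milstein stepping, invented parameters, not the authors' scheme): levels use geometrically refined time steps, and each correction couples a fine path to a coarse path driven by the same Brownian increments:

```python
import math
import random

def euler_path(dW):
    """Euler-Maruyama path of the toy SDE dX = X dW, X(0) = 1."""
    x = 1.0
    for d in dW:
        x *= 1.0 + d
    return x

def mlmc_estimate(n_per_level=4000, levels=4, seed=7):
    """Multilevel Monte Carlo estimate of E[X(1)] (exactly 1 here).

    E[P_L] is decomposed as E[P_0] + sum_l E[P_l - P_{l-1}].  The
    coarse path in each correction reuses pairwise sums of the fine
    Brownian increments; this coupling is what makes the correction
    terms low-variance and the overall cost near-optimal.
    """
    rng = random.Random(seed)
    total = 0.0
    for l in range(levels + 1):
        nf = 2 ** l                                  # fine steps at level l
        acc = 0.0
        for _ in range(n_per_level):
            dW = [rng.gauss(0.0, math.sqrt(1.0 / nf)) for _ in range(nf)]
            fine = euler_path(dW)
            if l == 0:
                acc += fine
            else:
                coarse = euler_path([dW[i] + dW[i + 1]
                                     for i in range(0, nf, 2)])
                acc += fine - coarse
        total += acc / n_per_level
    return total

est = mlmc_estimate()
```

In a production MLMC code the per-level sample counts would be chosen from the measured level variances to hit a target RMS error at minimal cost, which is where the O(ɛ^-2) scaling comes from.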
Von Dreele, Robert
2017-08-29
One of the goals in developing GSAS-II was to expand on the capabilities of the original General Structure Analysis System (GSAS), which largely encompassed just structure refinement and post-refinement analysis. GSAS-II has been written almost entirely in Python, using graphics, GUI and mathematical packages (matplotlib, pyOpenGL, wxpython, numpy and scipy). Thus, GSAS-II has a fully developed modern GUI as well as extensive graphical display of data and results. However, the structure and operation of Python have required new approaches to many of the algorithms used in crystal structure analysis. The extensions beyond GSAS include image calibration/integration as well as peak fitting and unit cell indexing for powder data, which are precursors for structure solution. Structure solution within GSAS-II begins with either Pawley- or LeBail-extracted structure factors from powder data or those measured in a single crystal experiment. Both charge flipping and Monte Carlo simulated annealing techniques are available; the former can be applied to (3+1) incommensurate structures as well as conventional 3D structures.
Satake, S; Park, J-K; Sugama, H; Kanno, R
2011-07-29
Neoclassical toroidal viscosities (NTVs) in tokamaks are investigated using a δf Monte Carlo simulation and are successfully verified against a combined analytic theory over a wide range of collisionality. A Monte Carlo simulation has been required in the study of NTV because the complexities in the guiding-center orbits of particles and their collisions cannot be fully captured by analytic theories alone. The results yield the details of the complex NTV dependence on particle precessions and collisions, which were predicted only roughly by the combined analytic theory. Both numerical and analytic methods can be utilized and extended based on these successful verifications.
Duggan, Dennis M
2004-12-01
Improved cross-sections in a new version of the Monte Carlo N-Particle (MCNP) code may eliminate discrepancies between radial dose functions (as defined by the American Association of Physicists in Medicine Task Group 43) derived from Monte Carlo simulations of low-energy photon-emitting brachytherapy sources and those from measurements on the same sources with thermoluminescent dosimeters. This is demonstrated for two 125I brachytherapy seed models, the Implant Sciences Model ISC3500 (I-Plant) and the Amersham Health Model 6711, by simulating their radial dose functions with two versions of MCNP, 4c2 and 5.
Modelling and analysis of solar cell efficiency distributions
NASA Astrophysics Data System (ADS)
Wasmer, Sven; Greulich, Johannes
2017-08-01
We present an approach to model the distribution of solar cell efficiencies achieved in production lines, based on numerical simulations, metamodeling and Monte Carlo simulations. We validate our methodology using the example of an industrially feasible p-type multicrystalline silicon “passivated emitter and rear cell” process. Applying the metamodel, we investigate the impact of each input parameter on the distribution of cell efficiencies in a variance-based sensitivity analysis, identifying the parameters and processes that need to be improved and controlled most accurately. We show that if these could be optimized, the mean cell efficiency of our examined cell process would increase from 17.62% ± 0.41% to 18.48% ± 0.09%. As the method relies on advanced characterization and simulation techniques, we furthermore introduce a simplification that enhances applicability by requiring only two common measurements of finished cells. The presented approaches can be especially helpful for ramping up production, but can also be applied to enhance established manufacturing.
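The metamodel-plus-Monte-Carlo step amounts to drawing input parameters from their production distributions, pushing them through a cheap surrogate, and reading off the resulting efficiency distribution. Everything below (functional form, coefficients, parameter spreads) is invented for illustration; the paper fits its metamodel to device simulations of the actual process:

```python
import math
import random
import statistics

def metamodel_eta(tau_us, r_sheet, rho_c):
    """Hypothetical surrogate: efficiency (% absolute) as a function of
    bulk lifetime (us), emitter sheet resistance (ohm/sq), and contact
    resistivity (arb. units). Coefficients are illustrative only."""
    eta = 18.6
    eta -= 0.8 * math.exp(-tau_us / 50.0)          # bulk recombination loss
    eta -= 0.004 * (r_sheet - 80.0) ** 2 / 80.0    # emitter optimum near 80
    eta -= 0.15 * rho_c                            # series-resistance loss
    return eta

# Monte Carlo over assumed production spreads of the input parameters.
rng = random.Random(3)
etas = [metamodel_eta(max(rng.gauss(120.0, 30.0), 1.0),
                      rng.gauss(80.0, 8.0),
                      abs(rng.gauss(0.5, 0.15)))
        for _ in range(20000)]
mean_eta = statistics.fmean(etas)
std_eta = statistics.pstdev(etas)
```

A variance-based sensitivity analysis then repeats such runs with one input frozen at a time, attributing the spread of `etas` to individual process parameters.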
Objective fitting of hemoglobin dynamics in traumatic bruises based on temperature depth profiling
NASA Astrophysics Data System (ADS)
Vidovič, Luka; Milanič, Matija; Majaron, Boris
2014-02-01
Pulsed photothermal radiometry (PPTR) allows noninvasive measurement of laser-induced temperature depth profiles. The obtained profiles provide information on the depth distribution of absorbing chromophores, such as melanin and hemoglobin. We apply this technique to objectively characterize the mass diffusion and decomposition rate of extravasated hemoglobin during the bruise healing process. In the present study, we introduce objective fitting of PPTR data obtained over the course of bruise healing. By applying a Monte Carlo simulation of laser energy deposition and simulating the corresponding PPTR signal, quantitative analysis of the underlying healing processes is possible. Objective fitting enables a direct comparison between the simulated and experimental PPTR signals. In this manner, we avoid reconstruction of laser-induced depth profiles and thus the inherent loss of information in that process. This approach enables us to determine the value of hemoglobin mass diffusivity, which is controversial in the existing literature. Such information will be a valuable addition to existing bruise age determination techniques.
A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses
Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria
2013-01-01
Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. 
Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is therefore an excellent tool for multi-scale simulations. PMID:23894367
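The pipeline (sample a corpus of simulations, learn a predictor, validate against fresh runs) can be caricatured in a few lines of stdlib Python. Here the "Monte Carlo simulator" is an invented closed-form curve with noise standing in for a real stochastic simulator, and the learner is a nearest-neighbour lookup rather than the ensemble methods a production version would use:

```python
import math
import random

def mc_like_curve(distance, ts, rng):
    """Stand-in for one expensive Monte Carlo run: fraction of open
    receptors vs time after neurotransmitter release. The kinetics are
    invented; the small noise mimics Monte Carlo variance."""
    tau = 0.2 + 0.05 * distance
    peak = 0.8 * math.exp(-distance / 2.0)
    return [max(0.0, peak * math.exp(-t / tau) + rng.gauss(0.0, 0.005))
            for t in ts]

# Stage 1, data sampling: a corpus of simulated synapses, indexed by a
# single structural parameter (e.g., release-site distance).
rng = random.Random(0)
ts = [0.05 * i for i in range(20)]
corpus = [(d, mc_like_curve(d, ts, rng))
          for d in (rng.uniform(0.1, 3.0) for _ in range(200))]

# Stage 3, machine learning: nearest neighbour on the structural parameter.
def predict_curve(distance):
    return min(corpus, key=lambda rec: abs(rec[0] - distance))[1]

# Stage 4, validation: compare a prediction with a fresh "simulation".
truth = mc_like_curve(1.5, ts, random.Random(1))
err = max(abs(a - b) for a, b in zip(predict_curve(1.5), truth))
```

Once trained, each prediction is a table lookup, which is the source of the computational saving over rerunning the simulator for every synapse in a large-scale network model.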
NASA Technical Reports Server (NTRS)
Gastellu-Etchegorry, Jean-Philippe; Yin, Tiangang; Lauret, Nicolas; Grau, Eloi; Rubio, Jeremy; Cook, Bruce D.; Morton, Douglas C.; Sun, Guoqing
2016-01-01
Light Detection And Ranging (LiDAR) provides unique data on the 3-D structure of atmosphere constituents and the Earth's surface. Simulating LiDAR returns for different laser technologies and Earth scenes is fundamental for evaluating and interpreting signal and noise in LiDAR data. Different types of models are capable of simulating LiDAR waveforms of Earth surfaces. Semi-empirical and geometric models can be imprecise because they rely on simplified simulations of Earth surfaces and light interaction mechanisms. On the other hand, Monte Carlo ray tracing (MCRT) models are potentially accurate but require long computational times. Here, we present a new LiDAR waveform simulation tool that is based on the introduction of a quasi-Monte Carlo ray tracing approach in the Discrete Anisotropic Radiative Transfer (DART) model. Two new approaches, the so-called "box method" and "Ray Carlo method", are implemented to provide robust and accurate simulations of LiDAR waveforms for any landscape, atmosphere and LiDAR sensor configuration (view direction, footprint size, pulse characteristics, etc.). The box method accelerates the selection of the scattering direction of a photon in the presence of scatterers with non-invertible phase functions. The Ray Carlo method brings traditional ray tracing into the MCRT simulation, which makes computational time independent of the LiDAR field of view (FOV) and reception solid angle. Both methods are fast enough for simulating multi-pulse acquisitions. Sensitivity studies with various landscapes and atmosphere constituents are presented, and the simulated LiDAR signals compare favorably with their associated reflectance images and Laser Vegetation Imaging Sensor (LVIS) waveforms. The LiDAR module is fully integrated into DART, enabling more detailed simulations of LiDAR sensitivity to specific scene elements (e.g., atmospheric aerosols, leaf area, branches, or topography) and sensor configurations for airborne or satellite LiDAR sensors.
Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan
2013-01-01
In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate parameters of interest; namely, sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates both obtained using the fast MCEM algorithm through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard error similarly with the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493
Fast animation of lightning using an adaptive mesh.
Kim, Theodore; Lin, Ming C
2007-01-01
We present a fast method for simulating, animating, and rendering lightning using adaptive grids. The "dielectric breakdown model" is an elegant algorithm for electrical pattern formation that we extend to enable animation of lightning. The simulation can be slow, particularly in 3D, because it involves solving a large Poisson problem. Losasso et al. recently proposed an octree data structure for simulating water and smoke, and we show that this discretization can be applied to the problem of lightning simulation as well. However, implementing the incomplete Cholesky conjugate gradient (ICCG) solver for this problem can be daunting, so we provide an extensive discussion of implementation issues. ICCG solvers can usually be accelerated using "Eisenstat's trick," but the trick cannot be directly applied to the adaptive case. Fortunately, we show that an "almost incomplete Cholesky" factorization can be computed so that Eisenstat's trick can still be used. We then present a fast rendering method based on convolution that is competitive with Monte Carlo ray tracing but orders of magnitude faster, and we also show how to further improve the visual results using jittering.
A Monte Carlo simulation study of associated liquid crystals
NASA Astrophysics Data System (ADS)
Berardi, R.; Fehervari, M.; Zannoni, C.
We have performed a Monte Carlo simulation study of a system of ellipsoidal particles with donor-acceptor sites modelling complementary hydrogen-bonding groups in real molecules. We have considered elongated Gay-Berne particles with terminal interaction sites allowing particles to associate and form dimers. The changes in the phase transitions and in the molecular organization and the interplay between orientational ordering and dimer formation are discussed. Particle flip and dimer moves have been used to increase the convergence rate of the Monte Carlo (MC) Markov chain.
Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories
NASA Technical Reports Server (NTRS)
Olds, John; Way, David
2001-01-01
Recently, strong evidence of liquid water under the surface of Mars and a meteorite that might contain ancient microbes have renewed interest in Mars exploration. With this renewed interest, NASA plans to send spacecraft to Mars approximately every 26 months. These future spacecraft will return higher-resolution images, make precision landings, engage in longer-ranging surface maneuvers, and even return Martian soil and rock samples to Earth. Future robotic missions and any human missions to Mars will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Potential sources of water and other interesting geographic features are often located near hazards, such as within craters or along canyon walls. In order for more accurate landings to be made, spacecraft entering the Martian atmosphere need to use lift to actively control the entry. This active guidance results in much smaller landing footprints. Planning for these missions will depend heavily on Monte Carlo analysis. Monte Carlo trajectory simulations have been used with a high degree of success in recent planetary exploration missions. These analyses ascertain the impact of off-nominal conditions during a flight and account for uncertainty. Uncertainties generally stem from limitations in manufacturing tolerances, measurement capabilities, analysis accuracies, and environmental unknowns. Thousands of off-nominal trajectories are simulated by randomly dispersing uncertainty variables and collecting statistics on forecast variables. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem: it begins with the input uncertainties and proceeds to the forecast outputs. It lacks a mechanism to affect or alter the uncertainties based on the forecast results.
If the results are unacceptable, the current practice is to use an iterative, trial-and-error approach to reconcile discrepancies. Therefore, an improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively.
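The forward-driven dispersion process described above (randomly disperse uncertain inputs, collect statistics on forecast variables) can be sketched with a toy surrogate in place of a full entry-trajectory integration. The model and every coefficient below are invented for illustration, not taken from the paper:

```python
import random
import statistics

def downrange_km(v_kms, gamma_deg, rho_scale):
    """Toy linear surrogate for landed downrange distance; a stand-in
    for a full 3-DOF entry trajectory integration (coefficients invented)."""
    return (800.0 + 40.0 * (v_kms - 5.6)
            - 60.0 * (gamma_deg + 14.5)
            + 90.0 * (1.0 - rho_scale))

rng = random.Random(1)
samples = []
for _ in range(5000):
    v = rng.gauss(5.6, 0.01)        # entry velocity, km/s
    gamma = rng.gauss(-14.5, 0.25)  # flight-path angle, deg
    rho = rng.gauss(1.0, 0.05)      # atmospheric density multiplier
    samples.append(downrange_km(v, gamma, rho))

mean_km = statistics.fmean(samples)
stdev_km = statistics.stdev(samples)  # footprint spread driven by inputs
```

The reverse problem the authors motivate is then: given a required footprint (a bound on `stdev_km`), find the largest allowable input dispersions, which the plain forward loop above cannot answer directly.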
Paganetti, H; Jiang, H; Lee, S Y; Kooy, H M
2004-07-01
Monte Carlo dosimetry calculations are essential methods in radiation therapy. To take full advantage of this tool, the beam delivery system has to be simulated in detail and the initial beam parameters have to be known accurately. The modeling of the beam delivery system itself opens various areas where Monte Carlo calculations prove extremely helpful, such as for design and commissioning of a therapy facility as well as for quality assurance verification. The gantry treatment nozzles at the Northeast Proton Therapy Center (NPTC) at Massachusetts General Hospital (MGH) were modeled in detail using the GEANT4.5.2 Monte Carlo code. For this purpose, various novel solutions for simulating irregular shaped objects in the beam path, like contoured scatterers, patient apertures or patient compensators, were found. The four-dimensional, in time and space, simulation of moving parts, such as the modulator wheel, was implemented. Further, the appropriate physics models and cross sections for proton therapy applications were defined. We present comparisons between measured data and simulations. These show that by modeling the treatment nozzle with millimeter accuracy, it is possible to reproduce measured dose distributions with an accuracy in range and modulation width, in the case of a spread-out Bragg peak (SOBP), of better than 1 mm. The excellent agreement demonstrates that the simulations can even be used to generate beam data for commissioning treatment planning systems. The Monte Carlo nozzle model was used to study mechanical optimization in terms of scattered radiation and secondary radiation in the design of the nozzles. We present simulations on the neutron background. Further, the Monte Carlo calculations supported commissioning efforts in understanding the sensitivity of beam characteristics and how these influence the dose delivered. We present the sensitivity of dose distributions in water with respect to various beam parameters and geometrical misalignments. 
This allows the definition of tolerances for quality assurance and the design of quality assurance procedures.
Light fluence dosimetry in lung-simulating cavities
NASA Astrophysics Data System (ADS)
Zhu, Timothy C.; Kim, Michele M.; Padawer, Jonah; Dimofte, Andreea; Potasek, Mary; Beeson, Karl; Parilov, Evgueni
2018-02-01
Accurate light dosimetry is critical to ensure consistent outcomes for pleural photodynamic therapy (pPDT). Ellipsoid-shaped cavities of different sizes surrounded by turbid medium are used to simulate the intracavity lung geometry. An isotropic light source is introduced and surrounded by turbid media. Direct measurements of light fluence rate were compared to Monte Carlo simulated values on the surface of the cavities for various optical properties. The primary component of the light was determined by measurements performed in air in the same geometry. The scattered component was found by submerging the air-filled cavity in scattering media (Intralipid) and absorbing media (ink). The light source was centered azimuthally, but placed at two vertical locations (centered and 2 cm below center) for measurements. Light fluence rate was measured using isotropic detectors placed at various angles on the ellipsoid surface. The measurements and simulations show that the scattered dose is uniform along the surface of the intracavity ellipsoid geometries in turbid media. One can express the light fluence rate empirically as φ = (4S/As)·Rd/(1 − Rd), where Rd is the diffuse reflectance, As is the surface area, and S is the source power. The measurements agree with this empirical formula to within an uncertainty of 10% for the range of optical properties studied. A GPU voxel-based Monte Carlo simulation was performed to compare with measured results. This empirical formula can be applied to arbitrary geometries, such as the pleural or intraperitoneal cavity.
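The empirical formula quoted in the abstract, φ = (4S/As)·Rd/(1 − Rd), is simple enough to encode directly. A minimal sketch follows; the unit convention (S in mW, As in cm², φ in mW/cm²) is an assumption for illustration:

```python
def scattered_fluence_rate(source_mW, area_cm2, r_d):
    """Empirical scattered-light fluence rate on a cavity wall:
    phi = (4*S/A_s) * R_d / (1 - R_d).
    Units assumed: S in mW, A_s in cm^2, result in mW/cm^2."""
    if not 0.0 <= r_d < 1.0:
        raise ValueError("diffuse reflectance must lie in [0, 1)")
    return (4.0 * source_mW / area_cm2) * r_d / (1.0 - r_d)

# Example: 1 W source, 400 cm^2 cavity wall, Rd = 0.5
phi = scattered_fluence_rate(1000.0, 400.0, 0.5)
```

Note the divergence as Rd approaches 1: in a highly reflective (weakly absorbing) cavity the scattered component dominates, which is consistent with the uniform scattered dose the authors report.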
Raman Monte Carlo simulation for light propagation for tissue with embedded objects
NASA Astrophysics Data System (ADS)
Periyasamy, Vijitha; Jaafar, Humaira Bte; Pramanik, Manojit
2018-02-01
Monte Carlo (MC) simulation is one of the prominent simulation techniques and is rapidly becoming the model of choice to study light-tissue interaction. Monte Carlo simulation for light transport in multi-layered tissue (MCML) is adapted and modelled with different geometries by integrating embedded objects of various shapes (i.e., sphere, cylinder, cuboid and ellipsoid) into the multi-layered structure. These geometries are useful in providing realistic tissue structures, such as models of lymph nodes, tumors, blood vessels, the head and other simulation media. MC simulations were performed on various geometric media. The simulation of MCML with embedded objects (MCML-EO) was implemented for propagation of photons in the defined medium with Raman scattering. The location of Raman photon generation is recorded. Simulations were performed on a modelled breast tissue with tumors (spherical and ellipsoidal) and blood vessels (cylindrical). Results are presented as both A-line and B-line scans for embedded objects to determine the spatial locations where Raman photons were generated. Studies were done for different Raman probabilities.
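Embedding objects in a layered MC grid requires, at minimum, a point-membership test for each shape so that a photon's current position can be assigned the object's optical properties. A sketch of such a helper for the four shapes named in the abstract (the dictionary layout is an invented convention, not MCML-EO's data structure):

```python
def inside_embedded_object(p, obj):
    """Return True if point p = (x, y, z) lies inside the embedded object.
    Supported kinds: sphere, ellipsoid, axis-aligned cylinder (axis along z),
    and axis-aligned cuboid.  The dict layout is illustrative only."""
    x, y, z = p
    cx, cy, cz = obj["center"]
    kind = obj["kind"]
    if kind == "sphere":
        r = obj["radius"]
        return (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r
    if kind == "ellipsoid":
        a, b, c = obj["semi_axes"]
        return ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 + ((z - cz) / c) ** 2 <= 1.0
    if kind == "cylinder":
        r, h = obj["radius"], obj["height"]
        return (x - cx) ** 2 + (y - cy) ** 2 <= r * r and abs(z - cz) <= h / 2.0
    if kind == "cuboid":
        hx, hy, hz = obj["half_sizes"]
        return abs(x - cx) <= hx and abs(y - cy) <= hy and abs(z - cz) <= hz
    raise ValueError("unknown object kind: " + kind)
```

In a photon-stepping loop this test selects the scattering/absorption coefficients (and, here, the Raman probability) to apply at each step.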
Statistical analysis of flight times for space shuttle ferry flights
NASA Technical Reports Server (NTRS)
Graves, M. E.; Perlmutter, M.
1974-01-01
Markov chain and Monte Carlo analysis techniques are applied to the simulated Space Shuttle Orbiter Ferry flights to obtain statistical distributions of flight time duration between Edwards Air Force Base and Kennedy Space Center. The two methods are compared, and are found to be in excellent agreement. The flights are subjected to certain operational and meteorological requirements, or constraints, which cause eastbound and westbound trips to yield different results. Persistence of events theory is applied to the occurrence of inclement conditions to find their effect upon the statistical flight time distribution. In a sensitivity test, some of the constraints are varied to observe the corresponding changes in the results.
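The abstract above combines a persistence (Markov) model of weather with Monte Carlo sampling of flight-time distributions. A toy version of that idea, with invented transition probabilities and leg durations (not the study's values), looks like this:

```python
import random
import statistics

def ferry_flight_time(legs_h, rng, p_bad_to_bad=0.6, p_good_to_bad=0.2):
    """Toy Monte Carlo of total ferry time in hours: each leg is flown
    only on a good-weather day, and weather follows a two-state
    persistence (Markov) chain.  All parameters are illustrative."""
    total_h, bad = 0.0, False
    for leg in legs_h:
        while True:
            bad = rng.random() < (p_bad_to_bad if bad else p_good_to_bad)
            if not bad:
                break
            total_h += 24.0  # wait out a bad-weather day
        total_h += leg       # fly the leg; arrival resets state to good
    return total_h

rng = random.Random(7)
times = [ferry_flight_time([4.0, 3.5, 4.5], rng) for _ in range(4000)]
mean_time = statistics.fmean(times)
```

Because bad weather persists (p_bad_to_bad > p_good_to_bad), delays cluster, which is the "persistence of events" effect the paper examines; an eastbound versus westbound asymmetry would enter through direction-dependent constraints.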
Play It Again: Teaching Statistics with Monte Carlo Simulation
ERIC Educational Resources Information Center
Sigal, Matthew J.; Chalmers, R. Philip
2016-01-01
Monte Carlo simulations (MCSs) provide important information about statistical phenomena that would be impossible to assess otherwise. This article introduces MCS methods and their applications to research and statistical pedagogy using a novel software package for the R Project for Statistical Computing constructed to lessen the often steep…
Monte Carlo simulation of MOSFET dosimeter for electron backscatter using the GEANT4 code.
Chow, James C L; Leung, Michael K K
2008-06-01
The aim of this study is to investigate the influence of the body of the metal-oxide-semiconductor field effect transistor (MOSFET) dosimeter in measuring the electron backscatter from lead. The electron backscatter factor (EBF), which is defined as the ratio of dose at the tissue-lead interface to the dose at the same point without the presence of backscatter, was calculated by the Monte Carlo simulation using the GEANT4 code. Electron beams with energies of 4, 6, 9, and 12 MeV were used in the simulation. It was found that in the presence of the MOSFET body, the EBFs were underestimated by about 2%-0.9% for electron beam energies of 4-12 MeV, respectively. The trend of the decrease of EBF with an increase of electron energy can be explained by the small MOSFET dosimeter, mainly made of epoxy and silicon, not only attenuated the electron fluence of the electron beam from upstream, but also the electron backscatter generated by the lead underneath the dosimeter. However, this variation of the EBF underestimation is within the same order of the statistical uncertainties as the Monte Carlo simulations, which ranged from 1.3% to 0.8% for the electron energies of 4-12 MeV, due to the small dosimetric volume. Such small EBF deviation is therefore insignificant when the uncertainty of the Monte Carlo simulation is taken into account. Corresponding measurements were carried out and uncertainties compared to Monte Carlo results were within +/- 2%. Spectra of energy deposited by the backscattered electrons in dosimetric volumes with and without the lead and MOSFET were determined by Monte Carlo simulations. It was found that in both cases, when the MOSFET body is either present or absent in the simulation, deviations of electron energy spectra with and without the lead decrease with an increase of the electron beam energy. 
Moreover, the softer spectrum of the backscattered electron when lead is present can result in a reduction of the MOSFET response due to stronger recombination in the SiO2 gate. It is concluded that the MOSFET dosimeter performed well for measuring the electron backscatter from lead using electron beams. The uncertainty of EBF determined by comparing the results of Monte Carlo simulations and measurements is well within the accuracy of the MOSFET dosimeter (< +/- 4.2%) provided by the manufacturer.
Discrete range clustering using Monte Carlo methods
NASA Technical Reports Server (NTRS)
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
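One of the approaches the paper compares is simulated annealing for grouping sparse range points. A compact, self-contained sketch of annealed clustering of scalar range values (an illustration of the technique, not the paper's algorithms):

```python
import math
import random

def anneal_cluster(points, k, steps=20000, t0=1.0, cooling=0.9995, seed=0):
    """Simulated-annealing assignment of scalar range values to k clusters,
    minimizing within-cluster squared deviation."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in points]

    def cost(lab):
        total = 0.0
        for c in range(k):
            members = [p for p, l in zip(points, lab) if l == c]
            if members:
                mu = sum(members) / len(members)
                total += sum((p - mu) ** 2 for p in members)
        return total

    energy, t = cost(labels), t0
    for _ in range(steps):
        i, new = rng.randrange(len(points)), rng.randrange(k)
        old = labels[i]
        labels[i] = new
        e_new = cost(labels)
        # Metropolis rule: take improvements always, worse moves with prob exp(-dE/T)
        if e_new <= energy or rng.random() < math.exp((energy - e_new) / t):
            energy = e_new
        else:
            labels[i] = old
        t *= cooling
    return labels, energy

ranges = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]      # two obvious depth groups
labels, energy = anneal_cluster(ranges, k=2)
```

Setting the cooling factor to 1.0 recovers the basic fixed-temperature Monte Carlo method, which is exactly the comparison the paper sets up.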
Hiratsuka, Tatsumasa; Tanaka, Hideki; Miyahara, Minoru T
2017-01-24
Based on transition state theory, we establish the rule governing capillary condensation from the metastable state in nanoscale pores. Conventional thermodynamic theories cannot describe it because metastable capillary condensation inherently involves an activated process. We thus compute argon adsorption isotherms on cylindrical pore models and atomistic silica pore models mimicking the MCM-41 materials by the grand canonical Monte Carlo and the gauge cell Monte Carlo methods and evaluate the rate constant for capillary condensation by transition state theory. The results reveal that the rate drastically increases with a small increase in the chemical potential of the system, and that metastable capillary condensation occurs for any mesopore when the rate constant reaches a universal critical value. Furthermore, a careful comparison between experimental adsorption isotherms and the simulated ones on the atomistic silica pore models reveals that the rate constant of the real system also has a universal value. With this finding, we can successfully estimate the experimental capillary condensation pressure over a wide range of temperatures and pore sizes by simply applying the critical rate constant.
Conditional Monte Carlo randomization tests for regression models.
Parhat, Parwen; Rosenberger, William F; Diao, Guoqing
2014-08-15
We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
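A design-based Monte Carlo randomization test of the kind discussed above regenerates many randomization sequences under the actual design and compares the observed test statistic against that reference distribution. A minimal sketch for complete randomization with a difference-in-means statistic (the paper works with regression and martingale residuals; plain outcomes are used here for brevity):

```python
import random
import statistics

def mc_randomization_pvalue(outcomes, assignment, n_mc=5000, seed=42):
    """Monte Carlo randomization p-value under complete randomization:
    regenerate treatment sequences and count how often the reference
    |mean difference| reaches the observed one.  Illustrative sketch."""
    rng = random.Random(seed)

    def stat(assign):
        a = [y for y, g in zip(outcomes, assign) if g == 1]
        b = [y for y, g in zip(outcomes, assign) if g == 0]
        return abs(statistics.fmean(a) - statistics.fmean(b))

    observed = stat(assignment)
    hits = 0
    for _ in range(n_mc):
        resampled = [rng.randint(0, 1) for _ in outcomes]
        if len(set(resampled)) < 2:   # skip degenerate one-group sequences
            continue
        if stat(resampled) >= observed:
            hits += 1
    return (hits + 1) / (n_mc + 1)    # add-one correction avoids p = 0

p = mc_randomization_pvalue([0, 0, 0, 0, 10, 10, 10, 10],
                            [0, 0, 0, 0, 1, 1, 1, 1])
```

Swapping the sequence generator for a permuted-block or biased-coin generator changes the reference distribution accordingly, which is the sense in which these tests are "design based."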
Off-diagonal expansion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Comparing multiple turbulence restoration algorithms performance on noisy anisoplanatic imagery
NASA Astrophysics Data System (ADS)
Rucci, Michael A.; Hardie, Russell C.; Dapore, Alexander J.
2017-05-01
In this paper, we compare the performance of multiple turbulence mitigation algorithms to restore imagery degraded by atmospheric turbulence and camera noise. In order to quantify and compare algorithm performance, imaging scenes were simulated by applying noise and varying levels of turbulence. For the simulation, a Monte-Carlo wave optics approach is used to simulate the spatially and temporally varying turbulence in an image sequence. A Poisson-Gaussian noise mixture model is then used to add noise to the observed turbulence image set. These degraded image sets are processed with three separate restoration algorithms: Lucky Look imaging, bispectral speckle imaging, and a block matching method with restoration filter. These algorithms were chosen because they incorporate different approaches and processing techniques. The results quantitatively show how well the algorithms are able to restore the simulated degraded imagery.
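The Poisson-Gaussian mixture mentioned above models photon shot noise plus additive read noise. A stdlib-only sketch of applying it to pixel values (the gain and read-noise figures are invented, not the paper's):

```python
import math
import random

def add_poisson_gaussian_noise(image, gain=1.0, read_std=2.0, seed=0):
    """Apply a Poisson-Gaussian sensor noise model to a flat list of
    pixel values: Poisson shot noise on photon counts, plus additive
    Gaussian read noise.  Parameters are illustrative."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplicative algorithm; adequate for moderate lam
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            k += 1
            p *= rng.random()
            if p <= limit:
                return k - 1

    return [gain * poisson(px / gain) + rng.gauss(0.0, read_std)
            for px in image]

noisy = add_poisson_gaussian_noise([50.0] * 2000)
```

The key property restoration algorithms must cope with is that the noise variance is signal-dependent (roughly gain·signal plus the read-noise variance), unlike pure additive Gaussian noise.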
Comparison of Geant4-DNA simulation of S-values with other Monte Carlo codes
NASA Astrophysics Data System (ADS)
André, T.; Morini, F.; Karamitros, M.; Delorme, R.; Le Loirec, C.; Campos, L.; Champion, C.; Groetz, J.-E.; Fromm, M.; Bordage, M.-C.; Perrot, Y.; Barberet, Ph.; Bernal, M. A.; Brown, J. M. C.; Deleuze, M. S.; Francis, Z.; Ivanchenko, V.; Mascialino, B.; Zacharatou, C.; Bardiès, M.; Incerti, S.
2014-01-01
Monte Carlo simulations of S-values have been carried out with the Geant4-DNA extension of the Geant4 toolkit. The S-values have been simulated for monoenergetic electrons with energies ranging from 0.1 keV up to 20 keV, in liquid water spheres (for four radii, chosen between 10 nm and 1 μm), and for electrons emitted by five isotopes of iodine (131, 132, 133, 134 and 135), in liquid water spheres of varying radius (from 15 μm up to 250 μm). The results have been compared to those obtained from other Monte Carlo codes and from other published data. The Kolmogorov-Smirnov test confirmed the statistical compatibility of all simulation results.
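The cross-code compatibility check above rests on the two-sample Kolmogorov-Smirnov statistic: the maximum distance between empirical CDFs of two result sets. A small stdlib implementation (a textbook version, not Geant4-DNA code):

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    distance between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:                       # tie: step past the tied value in both
            v = a[i]
            while i < na and a[i] == v:
                i += 1
            while j < nb and b[j] == v:
                j += 1
        d = max(d, abs(i / na - j / nb))
    return d
```

The statistic is 0 for identical samples and 1 for fully separated ones; comparing it against the critical value for the sample sizes yields the accept/reject decision on compatibility.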
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pecchia, M.; D'Auria, F.; Mazzantini, O.
2012-07-01
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of an obliquely inserted control rod on neutron flux in order to validate the RELAP5-3D{sup C}/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 FSAR of Atucha-2. (authors)
Monte Carlo method for photon heating using temperature-dependent optical properties.
Slade, Adam Broadbent; Aguilar, Guillermo
2015-02-01
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature will greatly vary, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system that allows temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogenous (but not isothermal) material. Validation of the simulation was done using comparisons to established Monte Carlo simulations using constant properties, and a comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties can vary with temperature. The difference in results between variable-property and constant property methods for the representative system of laser-heated silicon can become larger than 100K. This simulation will return more accurate results of optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy in simulated results leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes. 
Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
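The feedback loop described in the abstract above, where local optical properties are re-selected from the current temperature at each time step, can be illustrated with a deterministic Beer-Lambert stand-in for the photon-transport step. Everything below (the μa(T) law, heat capacity, geometry) is invented for illustration and omits conduction entirely:

```python
import math

def laser_heating_1d(mu_a_of_T, depth_cm=1.0, n_bins=50, power=1.0,
                     heat_capacity=4.0, t0=20.0, n_steps=200):
    """Minimal coupled absorption/heating loop: Beer-Lambert attenuation
    through a 1-D slab, with the absorption coefficient re-evaluated
    from the local temperature at every step.  A toy stand-in for the
    Monte Carlo + heat-transfer feedback loop; parameters are invented."""
    dz = depth_cm / n_bins
    temps = [t0] * n_bins
    for _ in range(n_steps):
        intensity = power
        for i in range(n_bins):
            mu = mu_a_of_T(temps[i])                    # T-dependent mu_a
            absorbed = intensity * (1.0 - math.exp(-mu * dz))
            temps[i] += absorbed / heat_capacity        # local heating only
            intensity -= absorbed
    return temps

# Absorption that grows with temperature (illustrative law)
temps = laser_heating_1d(lambda T: 0.5 + 0.01 * (T - 20.0))
```

Because hotter front bins absorb ever more strongly, the temperature profile steepens over time relative to a constant-property run, which is the qualitative effect driving the >100 K discrepancy the paper reports.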
Simulations of magnetic hysteresis loops at high temperatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plumer, M. L.; Whitehead, J. P.; Fal, T. J.
2014-09-28
The kinetic Monte Carlo algorithm as well as standard micromagnetics are used to simulate MH loops of high-anisotropy magnetic recording media at both short and long time scales over a wide range of temperatures relevant to heat-assisted magnetic recording. Microscopic parameters, common to both methods, were determined by fitting to experimental data on single-layer FePt-based media obtained using the magneto-optic Kerr effect with a slow sweep rate of 700 Oe/s. Saturation moment, uniaxial anisotropy, and exchange constants are given an intrinsic temperature dependence based on published atomistic simulations of FePt grains with an effective Curie temperature of 680 K. Our results show good agreement between micromagnetics and kinetic Monte Carlo results over a wide range of sweep rates. Loops at the slow experimental sweep rates are found to become more square-shaped, with an increasing slope, as temperature increases from 300 K. These effects also occur at higher sweep rates, typical of recording speeds, but are much less pronounced. These results demonstrate the need for accurate determination of intrinsic thermal properties of future recording media as input to micromagnetic models as well as the sensitivity of the switching behavior of thin magnetic films to applied field sweep rates at higher temperatures.
Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation.
Ziegenhein, Peter; Pirner, Sven; Ph Kamerling, Cornelis; Oelfke, Uwe
2015-08-07
Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time-frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on a NVIDIA Tesla C2050. Since CPUs can work with several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation and high resolution clinical plans can be calculated.
Terrill Vosbein, Heidi A; Boatz, Jerry A; Kenney, John W
2005-12-22
The moment analysis method (MA) has been tested for the case of 2S --> 2P ([core]ns1 --> [core]np1) transitions of alkali metal atoms (M) doped into cryogenic rare gas (Rg) matrices using theoretically validated simulations. Theoretical/computational M/Rg system models are constructed with precisely defined parameters that closely mimic known M/Rg systems. Monte Carlo (MC) techniques are then employed to generate simulated absorption and magnetic circular dichroism (MCD) spectra of the 2S --> 2P M/Rg transition to which the MA method can be applied with the goal of seeing how effective the MA method is in re-extracting the M/Rg system parameters from these known simulated systems. The MA method is summarized in general, and an assessment is made of the use of the MA method in the rigid shift approximation typically used to evaluate M/Rg systems. The MC-MCD simulation technique is summarized, and validating evidence is presented. The simulation results and the assumptions used in applying MA to M/Rg systems are evaluated. The simulation results on Na/Ar demonstrate that the MA method does successfully re-extract the 2P spin-orbit coupling constant and Landé g-factor values initially used to build the simulations. However, assigning physical significance to the cubic and noncubic Jahn-Teller (JT) vibrational mode parameters in cryogenic M/Rg systems is not supported.
A comparison of Monte-Carlo simulations using RESTRAX and McSTAS with experiment on IN14
NASA Astrophysics Data System (ADS)
Wildes, A. R.; S̆aroun, J.; Farhi, E.; Anderson, I.; Høghøj, P.; Brochier, A.
2000-03-01
Monte-Carlo simulations of a focusing supermirror guide after the monochromator on the IN14 cold neutron three-axis spectrometer at the I.L.L. were carried out using the instrument simulation programs RESTRAX and McSTAS. The simulations were compared to experiment to check their accuracy. The flux ratios over both a 100 and a 1600 mm² area at the sample position compare well, and there is very close agreement between simulation and experiment for the energy spread of the incident beam.
Front panel engineering with CAD simulation tool
NASA Astrophysics Data System (ADS)
Delacour, Jacques; Ungar, Serge; Mathieu, Gilles; Hasna, Guenther; Martinez, Pascal; Roche, Jean-Christophe
1999-04-01
The progress made recently in display technology covers many fields of application. The specification of radiance, colorimetry and lighting efficiency creates some new challenges for designers. Photometric design is limited by the capability of correctly predicting the result of a lighting system, to save on the costs and time taken to build multiple prototypes or breadboard benches. The second step of the research carried out by the company OPTIS is to propose an optimization method to be applied to the lighting system, developed in the software SPEOS. The main features required of the tool include the CAD interface, to enable fast and efficient transfer between mechanical and light design software, the source modeling, the light transfer model and an optimization tool. The CAD interface is mainly a prototype of transfer, which is not the subject here. Photometric simulation is efficiently achieved by using the measured source encoding and a simulation by the Monte Carlo method. Today, the advantages and the limitations of the Monte Carlo method are well known. The noise reduction requires a long calculation time, which increases with the complexity of the display panel. A successful optimization is difficult to achieve, due to the long calculation time required for each optimization pass including a Monte Carlo simulation. The problem was initially defined as an engineering method of study. Experience shows that good understanding and mastering of the phenomenon of light transfer is limited by the complexity of non-sequential propagation. The engineer must call for the help of a simulation and optimization tool. The main point needed to be able to perform an efficient optimization is a quick method for simulating light transfer. Much work has been done in this area and some interesting results can be observed. It must be said that the Monte Carlo method wastes time calculating some results and information which are not required for the needs of the simulation.
Low-efficiency transfer systems waste a great deal of computation time. More generally, light transfer simulation can be treated efficiently when the integrated result is composed of elementary sub-results that involve quick, analytically calculated intersections. Two axes of research thus appear: quick integration, and quick calculation of geometric intersections. The first axis brings general solutions that are also valid for multi-reflection systems. The second requires deeper thinking about the intersection calculation. An interesting approach is the subdivision of space into voxels, a method of 3D spatial division adapted to the objects and their locations. Experimental software has been developed to validate the method. The gain is particularly high in complex systems, and an important reduction in calculation time has been achieved.
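The voxel subdivision described above can be sketched as a uniform grid that maps each object's bounding box to the cells it overlaps, so an intersection query only tests local candidates instead of the whole scene. This is a minimal illustration of the general technique; the class and parameter names are ours, not from the SPEOS implementation:

```python
class VoxelGrid:
    """Uniform spatial subdivision: each voxel stores the ids of objects
    whose axis-aligned bounding box overlaps it, so ray-intersection
    tests are limited to local candidates."""

    def __init__(self, bounds_min, bounds_max, resolution):
        self.mn, self.mx, self.res = bounds_min, bounds_max, resolution
        self.cells = {}  # (i, j, k) -> list of object ids

    def _cell(self, p):
        # Map a 3D point to integer voxel coordinates, clamped to the grid.
        return tuple(
            min(self.res - 1, max(0, int(
                (p[a] - self.mn[a]) / (self.mx[a] - self.mn[a]) * self.res)))
            for a in range(3))

    def insert(self, obj_id, box_min, box_max):
        # Register the object in every voxel its bounding box touches.
        lo, hi = self._cell(box_min), self._cell(box_max)
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    self.cells.setdefault((i, j, k), []).append(obj_id)

    def candidates(self, point):
        # Only the objects registered in this voxel need exact tests.
        return self.cells.get(self._cell(point), [])
```

A ray marcher would walk voxels along the ray and call `candidates` per cell; the gain grows with scene complexity because the candidate lists stay short.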
Wada, Takao; Ueda, Noriaki
2013-01-01
The process of low-pressure organic vapor phase deposition (LP-OVPD) controls the growth of amorphous organic thin films, where the source gases (the Alq3 molecule, etc.) are introduced into a hot-wall reactor via an injection barrel using an inert carrier gas (N2). This makes it possible to control substrate properties such as dopant concentration, deposition rate, and thickness uniformity of the thin film. In this paper, we present LP-OVPD simulation results using direct simulation Monte Carlo-Neutrals (Particle-PLUS neutral module), commercial software that adopts the direct simulation Monte Carlo method. By properly estimating the evaporation rate with experimental vaporization enthalpies, the calculated deposition rates on the substrate agree well with the experimental results, which depend on carrier gas flow rate and source cell temperature. PMID:23674843
Simulating propagation of coherent light in random media using the Fredholm type integral equation
NASA Astrophysics Data System (ADS)
Kraszewski, Maciej; Pluciński, Jerzy
2017-06-01
Studying the propagation of light in random scattering materials is important for both basic and applied research. Such studies often require numerical methods for simulating the behavior of light beams in random media. However, if these simulations must account for the coherence properties of light, they become complex numerical problems. There are well-established methods for simulating multiple scattering of light (e.g. radiative transfer theory and Monte Carlo methods), but they do not treat the coherence properties of light directly. Some variants of these methods allow one to predict the behavior of coherent light, but only for an averaged realization of the scattering medium. This limits their application to studying many physical phenomena connected to a specific distribution of scattering particles (e.g. laser speckle). In general, numerical simulation of coherent light propagation in a specific realization of a random medium is a time- and memory-consuming problem. The goal of the presented research was to develop a new, efficient method for solving this problem. The method, presented in our earlier works, is based on solving the Fredholm-type integral equation that describes the multiple light scattering process. This equation can be discretized and solved numerically using various algorithms, e.g. by directly solving the corresponding linear system, or by using iterative or Monte Carlo solvers. Here we present recent developments of this method, including its comparison with well-known analytical results and finite-difference-type simulations. We also present an extension of the method to problems of multiple scattering of polarized light on large spherical particles, which joins the presented mathematical formalism with Mie theory.
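As a toy illustration of the discretize-and-solve idea (not the authors' scattering formalism, which involves the full vector kernel and Mie theory), a Fredholm equation of the second kind u(x) = f(x) + ∫ K(x,y) u(y) dy can be put on a quadrature grid and solved with a simple Neumann-series (fixed-point) iteration, one of the solver families the abstract mentions:

```python
def solve_fredholm(kernel, f, a, b, n=50, iters=200):
    """Solve u(x) = f(x) + int_a^b K(x, y) u(y) dy by discretizing on n
    midpoint-rule nodes and iterating u_{m+1} = f + K u_m, which
    converges when the discretized kernel has norm < 1."""
    h = (b - a) / n
    xs = [a + (i + 0.5) * h for i in range(n)]
    fv = [f(x) for x in xs]
    # Quadrature-weighted kernel matrix.
    K = [[kernel(x, y) * h for y in xs] for x in xs]
    u = fv[:]
    for _ in range(iters):
        u = [fv[i] + sum(K[i][j] * u[j] for j in range(n)) for i in range(n)]
    return xs, u
```

For a constant kernel K = 0.5 and f = 1 on [0, 1], the exact solution is the constant u = 2, which the iteration reproduces; a direct linear solve or a Monte Carlo estimate of the same discrete system would target the identical fixed point.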
Schonberger, Robert B; Gilbertsen, Todd; Dai, Feng
2013-01-01
Objective(s) Observational database research frequently relies on imperfect administrative markers to determine comorbid status, and it is difficult to infer to what extent the associated misclassification impacts validity in multivariable analyses. The effect that imperfect markers of disease will have on validity in situations where researchers attempt to match populations that have strong baseline health differences is underemphasized as a limitation in some otherwise high-quality observational studies. The present simulations were designed as a quantitative demonstration of the importance of this common and underappreciated issue. Design Two groups of Monte Carlo simulations were performed. The first demonstrated the degree to which controlling for a series of imperfect markers of disease between different populations taking 2 hypothetically harmless drugs would lead to spurious associations between drug assignment and mortality. The second Monte Carlo simulation applied this principle to a recent study in the field of anesthesiology that purported to show increased perioperative mortality in patients taking metoprolol versus atenolol. Setting/Participants/Interventions None. Measurements and Main Results Simulation 1: High type 1 error (ie, false positive findings of an independent association between drug assignment and mortality) was observed as sensitivity and specificity declined and as systematic differences in disease prevalence increased. Simulation 2: Propensity score matching across several imperfect markers was unlikely to eliminate important baseline health disparities in the referenced study. Conclusions In situations where large baseline health disparities exist between populations, matching on imperfect markers of disease may result in strong bias away from the null hypothesis. PMID:23962461
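The first group of simulations can be caricatured in a few lines. All numbers below (prevalences, marker sensitivity/specificity, mortality rates) are illustrative choices of ours, not the paper's parameters: two hypothetically harmless drugs are assigned to populations with different disease prevalence, and "controlling" by restricting to the marker-negative stratum still leaves a spurious drug-mortality association, because the imperfect marker misses much of the disease:

```python
import random

def simulate_residual_confounding(n=200_000, sens=0.6, spec=0.9, seed=1):
    """Mortality rates for two harmless drugs within the marker-negative
    stratum. Disease (not drug) drives death; drug A goes to a sicker
    population, and the imperfect marker fails to equalize the strata."""
    rng = random.Random(seed)
    deaths = {"A": 0, "B": 0}
    counts = {"A": 0, "B": 0}
    for _ in range(n):
        drug = "A" if rng.random() < 0.5 else "B"
        prevalence = 0.4 if drug == "A" else 0.1   # baseline health disparity
        disease = rng.random() < prevalence
        marker = rng.random() < (sens if disease else 1.0 - spec)
        died = rng.random() < (0.3 if disease else 0.05)  # disease kills, drug doesn't
        if not marker:                 # the "controlled" marker-negative stratum
            counts[drug] += 1
            deaths[drug] += died
    return {d: deaths[d] / counts[d] for d in ("A", "B")}
```

With sensitivity 0.6, 40% of diseased patients are marker-negative, so the stratum for drug A remains sicker and its apparent mortality stays well above drug B's, a false-positive association by construction.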
An adaptive bias - hybrid MD/kMC algorithm for protein folding and aggregation.
Peter, Emanuel K; Shea, Joan-Emma
2017-07-05
In this paper, we present a novel hybrid Molecular Dynamics/kinetic Monte Carlo (MD/kMC) algorithm and apply it to protein folding and aggregation in explicit solvent. The new algorithm uses a dynamical definition of biases throughout the MD component of the simulation, normalized in relation to the unbiased forces. The algorithm guarantees sampling of the underlying ensemble in dependence on one average linear coupling factor 〈α〉τ. We test the validity of the kinetics in simulations of dialanine and compare dihedral transition kinetics with long-time MD simulations. We find that for low 〈α〉τ values, the kinetics are in good quantitative agreement. In folding simulations of TrpCage and TrpZip4 in explicit solvent, we also find good quantitative agreement with experimental results and prior MD/kMC simulations. Finally, we apply our algorithm to study growth of the Alzheimer amyloid Aβ16-22 fibril by monomer addition. We observe two possible binding modes, one at the extremity of the fibril (elongation) and one on the surface of the fibril (lateral growth), on timescales ranging from ns to 8 μs.
NASA Astrophysics Data System (ADS)
Halim, A. A. A.; Laili, M. H.; Salikin, M. S.; Rusop, M.
2018-05-01
Monte Carlo simulation quantifies light propagation inside tissue by photon counting, accounting for the absorption and scattering coefficients, and serves as a preliminary study for functional near-infrared applications. The goal of this paper is to identify the optical properties, using Monte Carlo simulation, for non-invasive functional near-infrared spectroscopy (fNIRS) evaluation of penetration depth in human muscle. The paper describes the NIRS principle and the basis for its proposed use in Monte Carlo simulation, focusing on several important parameters, including ATP and ADP, and relating them to blood flow and oxygen content at a given exercise intensity. It also covers the advantages and limitations of such an application of the simulation. The results may help show that human muscle is transparent in this near-infrared region and can deliver considerable information on the oxygenation level in muscle. This might be useful as a non-invasive technique for detecting oxygen status in the muscle of living people, whether athletes or working people, enabling many investigations of muscle physiology in the future.
NASA Astrophysics Data System (ADS)
Fensin, Michael Lorne
Monte Carlo-linked depletion methods have gained recent interest due to their ability to model complex 3-dimensional geometries more accurately and to better track the evolution of the temporal nuclide inventory by simulating the actual physical process using continuous-energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity, completely self-contained Monte Carlo-linked depletion capability in a well-established, widely accepted Monte Carlo radiation transport code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross-section data permit in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles relevant nuclear history, surveys current methodologies of depletion theory, details the methodology applied in MCNPX, and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to justify the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected and each code feature is detailed, chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results for the OECD/NEA Phase IB benchmark, the H. B. Robinson benchmark and the OECD/NEA Phase IVB benchmark are then provided. The acceptable results of these calculations offer sufficient confidence in the predictive capability of the MCNPX depletion method.
This capability sets up a significant foundation, in a well established and supported radiation transport code, for further development of a Monte Carlo-linked depletion methodology which is essential to the future development of advanced reactor technologies that exceed the limitations of current deterministic based methods.
Results of Monte-Carlo simulation of electron-photon cascades in lead and layers of lead-scintillator
NASA Technical Reports Server (NTRS)
Wasilewski, A.; Krys, E.
1985-01-01
Results of Monte-Carlo simulations of electromagnetic cascade development in lead and lead-scintillator sandwiches are analyzed. It is demonstrated that the structure function for the core approximation is not applicable when the primary energy is higher than 100 GeV. The simulation data have shown that introducing an inhomogeneous chamber structure results in a reduction in the number of secondary particles.
Koivisto, J; Kiljunen, T; Tapiovaara, M; Wolff, J; Kortesniemi, M
2012-09-01
The aims of this study were to assess the organ and effective dose (International Commission on Radiological Protection (ICRP) 103) resulting from dental cone-beam computerized tomography (CBCT) imaging using a novel metal-oxide semiconductor field-effect transistor (MOSFET) dosimeter device, and to assess the reliability of the MOSFET measurements by comparing the results with Monte Carlo PCXMC simulations. Organ dose measurements were performed using 20 MOSFET dosimeters embedded in the 8 most radiosensitive organs in the maxillofacial and neck area. The dose-area product (DAP) values obtained from CBCT scans were used for the PCXMC simulations. The acquired MOSFET doses were then compared with the Monte Carlo simulations. The effective dose measurements using MOSFET dosimeters yielded, using 0.5-cm steps, a value of 153 μSv, and the PCXMC simulations resulted in a value of 136 μSv. The MOSFET dosimeters placed in a head phantom gave results similar to the Monte Carlo simulations. Minor vertical changes in the positioning of the phantom had a substantial effect on the overall effective dose. Therefore, MOSFET dosimeters constitute a feasible method for dose assessment of CBCT units in the maxillofacial region.
Multi-fidelity methods for uncertainty quantification in transport problems
NASA Astrophysics Data System (ADS)
Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.
2016-12-01
We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, a re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced-basis methods, and discuss the advantages of each approach.
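The telescoping idea behind standard MLMC (the baseline the authors compare against; the rMLMC rescaling itself is not reproduced here) can be sketched on a toy problem where level l resolves a uniform random input on a grid of 2^l cells. The toy sampler and sample counts are our own illustrative choices:

```python
import random

def mlmc(sampler, n_per_level, seed=0):
    """Multilevel Monte Carlo telescoping estimator:
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    using many cheap coarse samples and few expensive fine ones.
    sampler(level, rng) must return (P_l, P_{l-1}) evaluated on the
    SAME random input so the level differences have small variance."""
    rng = random.Random(seed)
    estimate = 0.0
    for level, n in enumerate(n_per_level):
        acc = 0.0
        for _ in range(n):
            fine, coarse = sampler(level, rng)
            acc += fine - coarse
        estimate += acc / n
    return estimate

def toy_sampler(level, rng):
    # Level-m approximation of E[U^2], U ~ Uniform(0,1): truncate U
    # to a grid of 2**m cells, then square. Both levels share u.
    u = rng.random()
    def p(m):
        if m < 0:
            return 0.0
        cells = 2 ** m
        return (int(u * cells) / cells) ** 2
    return p(level), p(level - 1)
```

Because the coupled differences shrink geometrically with level, most samples are spent on the cheap coarse levels while the estimate converges toward E[U^2] = 1/3, up to the finest level's discretization bias.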
NASA Astrophysics Data System (ADS)
Croce, Olivier; Hachem, Sabet; Franchisseur, Eric; Marcié, Serge; Gérard, Jean-Pierre; Bordy, Jean-Marc
2012-06-01
This paper presents a dosimetric study of the system named "Papillon 50", used in the department of radiotherapy of the Centre Antoine-Lacassagne, Nice, France. The machine provides a 50 kVp X-ray beam, currently used to treat rectal cancers. The system can be mounted with various applicators of different diameters or shapes. These applicators can be fixed over the main rod tube of the unit in order to deliver the prescribed absorbed dose to the tumor with an optimal distribution. We have analyzed depth-dose curves and dose profiles for the naked tube and for a set of three applicators. Dose measurements were made with an ionization chamber (PTW type 23342) and Gafchromic films (EBT2). We have also compared the measurements with simulations performed using the Monte Carlo code PENELOPE. Simulations were performed with a detailed geometrical description of the experimental setup and with sufficient statistics. The simulation results agree with the experimental measurements and provide an accurate evaluation of the dose delivered. The depths of the 50% isodose in water for the various applicators are 4.0, 6.0, 6.6 and 7.1 mm. The Monte Carlo PENELOPE simulations are in agreement with the measurements for this 50 kV X-ray system and are able to confirm the measurements provided by Gafchromic films or ionization chambers. The results also demonstrate that Monte Carlo simulations could be helpful to validate future applicators designed for other localizations such as breast or skin cancers. Furthermore, Monte Carlo simulations could be a reliable alternative for a rapid evaluation of the dose delivered by such a system, which uses multiple designs of applicators.
NASA Astrophysics Data System (ADS)
Ustinov, E. A.
2017-01-01
The paper aims at a comparison of techniques based on the kinetic Monte Carlo (kMC) and the conventional Metropolis Monte Carlo (MC) methods as applied to the hard-sphere (HS) fluid and solid. In the case of the kMC, an alternative representation of the chemical potential is explored [E. A. Ustinov and D. D. Do, J. Colloid Interface Sci. 366, 216 (2012)], which does not require any external procedure like the Widom test particle insertion method. A direct evaluation of the chemical potential of the fluid and solid without thermodynamic integration is achieved by molecular simulation in an elongated box with an external potential imposed on the system in order to reduce the particle density in the vicinity of the box ends. The existence of rarefied zones allows one to determine the chemical potential of the crystalline phase and substantially increases its accuracy for the disordered dense phase in the central zone of the simulation box. This method is applicable to both the Metropolis MC and the kMC, but in the latter case, the chemical potential is determined with higher accuracy at the same conditions and the number of MC steps. Thermodynamic functions of the disordered fluid and crystalline face-centered cubic (FCC) phase for the hard-sphere system have been evaluated with the kinetic MC and the standard MC coupled with the Widom procedure over a wide range of density. The melting transition parameters have been determined by the point of intersection of the pressure-chemical potential curves for the disordered HS fluid and FCC crystal using the Gibbs-Duhem equation as a constraint. A detailed thermodynamic analysis of the hard-sphere fluid has provided a rigorous verification of the approach, which can be extended to more complex systems.
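For context, the Widom test-particle step that the paper's kMC formulation avoids can be sketched for hard spheres in a periodic box. This is a generic textbook implementation, not the authors' code: the excess chemical potential is βμ_ex = -ln⟨acceptance⟩, and for hard spheres an insertion is "accepted" exactly when the ghost particle overlaps no existing sphere.

```python
import math
import random

def widom_hs(positions, box, sigma=1.0, trials=20_000, seed=5):
    """Widom insertion for hard spheres of diameter sigma in a cubic
    periodic box of side `box`. Returns beta * mu_excess estimated as
    -ln(fraction of non-overlapping ghost insertions)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        p = [rng.random() * box for _ in range(3)]
        overlap = False
        for q in positions:
            # Minimum-image squared distance in the periodic box.
            d2 = sum(min(abs(p[a] - q[a]), box - abs(p[a] - q[a])) ** 2
                     for a in range(3))
            if d2 < sigma ** 2:
                overlap = True
                break
        ok += not overlap
    return -math.log(ok / trials)
```

At high density almost every insertion overlaps and the estimator degrades, which is precisely the regime where the direct chemical-potential evaluation described in the abstract is attractive.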
Effects of glyphosate formulations on the population dynamics of two freshwater cladoceran species.
Reno, U; Doyle, S R; Momo, F R; Regaldo, L; Gagneten, A M
2018-02-05
The general objective of this work is to experimentally assess the effects of acute glyphosate pollution on two freshwater cladoceran species (Daphnia magna and Ceriodaphnia dubia) and to use this information to predict the population dynamics and the potential for recovery of exposed organisms. Five to six concentrations of four formulations of glyphosate (4-Gly) (Eskoba®, Panzer Gold®, Roundup Ultramax® and Sulfosato Touchdown®) were evaluated in both cladoceran species through acute tests and 15-day recovery tests in order to estimate the population dynamics of microcrustaceans. The endpoints of the recovery test were: survival, growth (number of molts), fecundity, and the intrinsic population growth rate (r). A matrix population model (MPM) was applied to r of the surviving individuals of the acute tests, followed by a Monte Carlo simulation study. Among the 4-Gly tested, Sulfosato Touchdown® showed the highest toxicity, and C. dubia was the most sensitive species. The Monte Carlo simulation study showed an average value of λ always <1 for D. magna, indicating that its populations would not be able to survive under natural environmental conditions after an acute Gly exposure between 0.25 and 35 mg a.e. L-1. The average value of λ for C. dubia was also <1 after exposure to Roundup Ultramax®: 1.30 and 1.20 for 1.21 and 2.5 mg a.e. L-1, respectively. The combined methodology (recovery tests and subsequent analysis through an MPM with a Monte Carlo simulation study) is proposed to integrate key demographic parameters and predict the possible fate of microcrustacean populations after exposure to acute 4-Gly contamination events.
NASA Astrophysics Data System (ADS)
Guerra, J. G.; Rubiano, J. G.; Winter, G.; Guerra, A. G.; Alonso, H.; Arnedo, M. A.; Tejera, A.; Martel, P.; Bolivar, J. P.
2017-06-01
In this work, we have developed a computational methodology for characterizing HPGe detectors by implementing in parallel a multi-objective evolutionary algorithm together with a Monte Carlo simulation code. The evolutionary algorithm searches for the geometrical parameters of a detector model by minimizing the differences between the efficiencies calculated by Monte Carlo simulation and two reference sets of Full Energy Peak Efficiencies (FEPEs) corresponding to two given sample geometries: a beaker of small diameter laid over the detector window, and a beaker of large capacity which wraps the detector. This methodology generalizes a previously published work, which was limited to beakers placed over the window of the detector with a diameter equal to or smaller than the crystal diameter, so that the crystal mount cap (which surrounds the lateral surface of the crystal) was not considered in the detector model. The generalization has been accomplished not only by including such a mount cap in the model, but also by using multi-objective optimization instead of mono-objective, with the aim of building a model sufficiently accurate for a wider variety of beakers commonly used for the measurement of environmental samples by gamma spectrometry, such as Marinelli beakers, Petri dishes, or any other beaker with a diameter larger than the crystal diameter, for which part of the detected radiation has to pass through the mount cap. The proposed methodology has been applied to an HPGe XtRa detector, providing a detector model which has been successfully verified for different source-detector geometries and materials, and experimentally validated using CRMs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matuttis, Hans-Georg; Wang, Xiaoxing
Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them both to classical ordinary differential equations (ODEs) and to quantum systems allows one to judge their effectiveness and gives new insights for many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (the minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.
NASA Astrophysics Data System (ADS)
Kalmykov, N. N.; Ostapchenko, S. S.; Werner, K.
An extensive air shower (EAS) calculation scheme based on cascade equations and some EAS characteristics for energies 1014 -1017 eV are presented. The universal hadronic interaction model NEXUS is employed to provide the necessary data concerning hadron-air collisions. The influence of model assumptions on the longitudinal EAS development is discussed in the framework of the NEXUS and QGSJET models. Applied to EAS simulations, perspectives of combined Monte Carlo and numerical methods are considered.
Implications of a quadratic stream definition in radiative transfer theory.
NASA Technical Reports Server (NTRS)
Whitney, C.
1972-01-01
An explicit definition of the radiation-stream concept is stated and applied to approximate the integro-differential equation of radiative transfer with a set of twelve coupled differential equations. Computational efficiency is enhanced by distributing the corresponding streams in three-dimensional space in a totally symmetric way. Polarization is then incorporated in this model. A computer program based on the model is briefly compared with a Monte Carlo program for simulation of horizon scans of the earth's atmosphere. It is found to be considerably faster.
Heavy Ion Induced Degradation in SiC Schottky Diodes: Bias and Energy Deposition Dependence
NASA Technical Reports Server (NTRS)
Javanainen, Arto; Galloway, Kenneth F.; Nicklaw, Christopher; Bosser, Alexandre L.; Ferlet-Cavrois, Veronique; Lauenstein, Jean-Marie; Pintacuda, Francesco; Reed, Robert A.; Schrimpf, Ronald D.; Weller, Robert A.;
2016-01-01
Experimental results on ion-induced leakage current increase in 4H-SiC Schottky power diodes are presented. Monte Carlo and TCAD simulations show that the degradation is due to the synergy between the applied bias and the ion energy deposition. This degradation is possibly related to thermal spot annealing at the metal-semiconductor interface. This thermal annealing leads to an inhomogeneity of the Schottky barrier that could be responsible for the increased leakage current as a function of fluence.
A Fast Monte Carlo Simulation for the International Linear Collider Detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furse, D.; /Georgia Tech
2005-12-15
The following paper contains details concerning the motivation for, implementation of, and performance of a Java-based fast Monte Carlo simulation for a detector designed to be used in the International Linear Collider. This simulation, presently included in the SLAC ILC group's org.lcsim package, reads in standard model or SUSY events in STDHEP file format, stochastically simulates the blurring in physics measurements caused by intrinsic detector error, and writes out an LCIO-format file containing a set of final particles statistically similar to those that would have been found by a full Monte Carlo simulation. In addition to the reconstructed particles themselves, descriptions of the calorimeter hit clusters and tracks that these particles would have produced are also included in the LCIO output. These output files can then be put through various analysis codes in order to characterize the effectiveness of a hypothetical detector at extracting relevant physical information about an event. Such a tool is extremely useful in preliminary detector research and development, as full simulations are extremely cumbersome and taxing on processor resources; a fast, efficient Monte Carlo can facilitate and even make possible detector physics studies that would be very impractical with the full simulation, by sacrificing what is in many cases inappropriate attention to detail for valuable gains in the time required for results.
NASA Astrophysics Data System (ADS)
Hidayat, Iki; Sutopo; Pratama, Heru Berian
2017-12-01
The Kerinci geothermal field is a single-phase liquid reservoir system in the Kerinci District, in the western part of Jambi Province. In this field, there are geothermal prospects identified by heat-source upflow inside a National Park area. Pertamina Geothermal Energy planned to develop 1×55 MWe in the Kerinci field. To define the reservoir characterization, a numerical simulation of the Kerinci field was developed using the TOUGH2 software with information from the conceptual model. The pressure and temperature well profiles of KRC-B1 are validated against simulation data to reach the natural-state condition, with good agreement. Based on the natural-state simulation, the resource assessment of the Kerinci geothermal field is estimated using Monte Carlo simulation, with P10, P50 and P90 results of 49.4 MW, 64.3 MW and 82.4 MW, respectively. This paper is the first study in which the resource assessment of the Kerinci geothermal field has been successfully estimated using numerical simulation coupled with Monte Carlo simulation.
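The Monte Carlo step of such a resource assessment boils down to sampling uncertain reservoir parameters and reading percentiles off the resulting power distribution. The parameter ranges below are illustrative placeholders of ours, not the Kerinci inputs, and the lumped recoverable-power factor stands in for the full heat-in-place chain:

```python
import random

def resource_percentiles(n=20_000, seed=42):
    """Volumetric-style Monte Carlo resource assessment sketch: sample
    uncertain inputs, form a power estimate per trial, and report the
    10th/50th/90th percentiles. (Note that the P10/P90 labeling
    convention varies between percentile and exceedance probability.)"""
    rng = random.Random(seed)
    mw = []
    for _ in range(n):
        area = rng.uniform(5.0, 15.0)        # reservoir area, km^2 (placeholder)
        thickness = rng.uniform(1.0, 2.0)    # reservoir thickness, km (placeholder)
        recoverable = rng.uniform(2.0, 8.0)  # lumped MWe per km^3 (placeholder)
        mw.append(area * thickness * recoverable)
    mw.sort()
    pick = lambda q: mw[int(q * n)]
    return pick(0.10), pick(0.50), pick(0.90)
```

Running this yields an ascending P10 < P50 < P90 triple, mirroring the structure of the 49.4/64.3/82.4 MW result quoted in the abstract.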
Bayesian Inference on Proportional Elections
Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio
2015-01-01
Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee the candidate with the most percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied on proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied on data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
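The core computation can be sketched with a Dirichlet posterior over party vote shares and a highest-averages seat allocation per posterior draw. The D'Hondt rule below is an illustrative stand-in for the actual Brazilian seat-distribution rules modeled in the paper, and the poll counts in the usage example are invented:

```python
import random

def dhondt(votes, seats):
    """Highest-averages (D'Hondt) allocation: repeatedly award a seat to
    the party with the largest quotient votes / (seats_won + 1)."""
    alloc = [0] * len(votes)
    for _ in range(seats):
        quotients = [v / (a + 1) for v, a in zip(votes, alloc)]
        alloc[quotients.index(max(quotients))] += 1
    return alloc

def prob_representation(poll_counts, seats, party, n=5_000, seed=7):
    """Posterior probability that `party` wins at least one seat:
    draw vote shares from a Dirichlet(1 + counts) posterior (via
    independent gammas), allocate seats per draw, and Monte Carlo
    average the event."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        shares = [rng.gammavariate(c + 1.0, 1.0) for c in poll_counts]
        hits += dhondt(shares, seats)[party] >= 1
    return hits / n
```

The Monte Carlo average directly answers the paper's question, the probability that a given party obtains representation, without any closed-form seat-probability formula.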
Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.
2002-01-01
Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
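Two of the three propagation approaches can be contrasted on a scalar toy function (a generic illustration of ours, not the aircraft analysis code): the first-order method of moments propagates the input standard deviation through a local derivative, while Monte Carlo samples the input distribution directly.

```python
import math
import random

def propagate(f, mean, sigma, n=100_000, h=1e-5, seed=3):
    """Return (method-of-moments sigma, Monte Carlo sigma) for the
    output of f given a normal input N(mean, sigma)."""
    # Method of moments: sigma_out ~= |df/dx| * sigma_in (first-order Taylor).
    dfdx = (f(mean + h) - f(mean - h)) / (2.0 * h)
    mom_sigma = abs(dfdx) * sigma
    # Monte Carlo: sample the input and take the output sample deviation.
    rng = random.Random(seed)
    ys = [f(rng.gauss(mean, sigma)) for _ in range(n)]
    mu = sum(ys) / n
    mc_sigma = math.sqrt(sum((y - mu) ** 2 for y in ys) / (n - 1))
    return mom_sigma, mc_sigma
```

For smooth functions and small input uncertainty the two agree closely; it is exactly the discontinuous design spaces mentioned in the abstract that break the Taylor-based method and motivate sampling approaches.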
NASA Astrophysics Data System (ADS)
Zoller, Christian; Hohmann, Ansgar; Ertl, Thomas; Kienle, Alwin
2017-07-01
The Monte Carlo method is often referred to as the gold standard for calculating light propagation in turbid media [1]. Especially for complex-shaped geometries, where no analytical solutions are available, the Monte Carlo method becomes very important [1, 2]. In this work a Monte Carlo software package is presented for simulating light propagation in complex-shaped geometries. To improve the simulation time the code is based on OpenCL, so that graphics cards can be used as well as other computing devices. Within the software an illumination concept is presented to easily realize all kinds of light sources, such as spatial frequency domain (SFD) illumination, optical fibers or Gaussian beam profiles. Moreover, different objects that are not connected to each other can be considered simultaneously, without any additional preprocessing. This Monte Carlo software can be used for many applications. In this work the transmission spectrum of a tooth and the color reconstruction of a virtual object are shown, using results from the Monte Carlo software.
Theoretical Grounds for the Propagation of Uncertainties in Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
Saracco, Paolo; Pia, Maria Grazia; Batic, Matej
2014-04-01
We introduce a theoretical framework for the calculation of uncertainties affecting observables produced by Monte Carlo particle transport, which derive from uncertainties in physical parameters input into simulation. The theoretical developments are complemented by a heuristic application, which illustrates the method of calculation in a streamlined simulation environment.
Estimating Uncertainty in N2O Emissions from US Cropland Soils
USDA-ARS?s Scientific Manuscript database
A Monte Carlo analysis was combined with an empirically-based approach to quantify uncertainties in soil N2O emissions from US croplands estimated with the DAYCENT simulation model. Only a subset of croplands was simulated in the Monte Carlo analysis which was used to infer uncertainties across the ...
Testing the Intervention Effect in Single-Case Experiments: A Monte Carlo Simulation Study
ERIC Educational Resources Information Center
Heyvaert, Mieke; Moeyaert, Mariola; Verkempynck, Paul; Van den Noortgate, Wim; Vervloet, Marlies; Ugille, Maaike; Onghena, Patrick
2017-01-01
This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test "p" values (RTcombiP). Four factors were manipulated: mean intervention effect,…
Teaching Markov Chain Monte Carlo: Revealing the Basic Ideas behind the Algorithm
ERIC Educational Resources Information Center
Stewart, Wayne; Stewart, Sepideh
2014-01-01
For many scientists, researchers and students Markov chain Monte Carlo (MCMC) simulation is an important and necessary tool to perform Bayesian analyses. The simulation is often presented as a mathematical algorithm and then translated into an appropriate computer program. However, this can result in overlooking the fundamental and deeper…
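The fundamental idea that such teaching aims to reveal, propose a move and accept or reject it based on the target density ratio, fits in a few lines. The sketch below is a generic random-walk Metropolis sampler for a standard normal target, not code from the article; the step size and sample counts are arbitrary illustrative choices.

```python
import math
import random

def metropolis(log_target, x0, n_samples, step, rng):
    """Random-walk Metropolis: propose x', accept with probability min(1, ratio)."""
    x, chain = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        log_ratio = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = proposal          # accept; otherwise keep the current state
        chain.append(x)
    return chain

rng = random.Random(0)
# Target: standard normal, specified only up to a constant via its log-density.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=50000, step=1.0, rng=rng)

burn = chain[5000:]               # discard burn-in
mean = sum(burn) / len(burn)
var = sum((x - mean) ** 2 for x in burn) / len(burn)
print(mean, var)                  # near 0 and 1 for a converged chain
```

Because the target enters only through a ratio, the normalizing constant cancels, which is the key point usually obscured when MCMC is presented as an opaque algorithm.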
Monte Carlo simulation models of breeding-population advancement.
J.N. King; G.R. Johnson
1993-01-01
Five generations of population improvement were modeled using Monte Carlo simulations. The model was designed to address questions that are important to the development of an advanced generation breeding population. Specifically we addressed the effects on both gain and effective population size of different mating schemes when creating a recombinant population for...
Drusano, G. L.; Preston, S. L.; Gotfried, M. H.; Danziger, L. H.; Rodvold, K. A.
2002-01-01
Levofloxacin was administered orally to steady state in volunteers randomized to doses of 500 and 750 mg. Plasma and epithelial lining fluid (ELF) samples were obtained at 4, 12, and 24 h after the final dose. All data were comodeled in a population pharmacokinetic analysis employing BigNPEM. Penetration was evaluated from the population mean parameter vector values and from the results of a 1,000-subject Monte Carlo simulation. Evaluation from the population mean values demonstrated a penetration ratio (ELF/plasma) of 1.16. The Monte Carlo simulation provided a measure of dispersion, demonstrating a mean ratio of 3.18, with a median of 1.43 and a 95% confidence interval of 0.14 to 19.1. Population analysis with Monte Carlo simulation provides the best and least-biased estimate of penetration. It also demonstrates clearly that we can expect differences in penetration between patients. This analysis did not deal with inflammation, as it was performed in volunteers; the influence of lung pathology on penetration needs to be examined. PMID:11796385
Geant4 hadronic physics for space radiation environment.
Ivantchenko, Anton V; Ivanchenko, Vladimir N; Molina, Jose-Manuel Quesada; Incerti, Sebastien L
2012-01-01
The aim was to test and develop Geant4 (Geometry And Tracking version 4) Monte Carlo hadronic models, with a focus on applications in a space radiation environment. The Monte Carlo simulations were performed using the Geant4 toolkit. The Binary cascade (BIC), its extension for incident light ions (BIC-ion), and the Bertini cascade (BERT) were used as the main Monte Carlo generators; for comparison purposes, some other models were tested as well. The hadronic testing suite was used as the primary tool for model development and validation against experimental data. The Geant4 pre-compound (PRECO) and de-excitation (DEE) models were revised and improved. Proton, neutron, pion, and ion nuclear interactions were simulated with the recent Geant4 version 9.4 and compared with experimental data from thin- and thick-target experiments. The Geant4 toolkit offers a large set of models allowing effective simulation of interactions of particles with matter. We have tested different Monte Carlo generators with our hadronic testing suite, and accordingly we can propose an optimal configuration of Geant4 models for the simulation of the space radiation environment.
Heat And Mass Transfer Analysis of a Film Evaporative MEMS Tunable Array
NASA Astrophysics Data System (ADS)
O'Neill, William J.
This thesis details the heat and mass transfer analysis of a MEMS microthruster designed to provide propulsion, attitude control, and thermal control capabilities to a CubeSat. The thruster functions by retaining water as a propellant and applying resistive heating to raise the temperature of the liquid-vapor interface, either increasing evaporation or inducing boiling, in order to regulate mass flow. The resulting vapor is then expanded through a diverging nozzle to produce thrust. Because of the low operating pressure and small length scale of this thruster, non-continuum gas flow was modeled using the Direct Simulation Monte Carlo (DSMC) method. Continuum fluid/thermal simulations using COMSOL Multiphysics were applied to model heat and mass transfer in the solid and liquid portions of the thruster. The two methods were coupled through variables at the liquid-vapor interface and solved iteratively by the bisection method. The simulations presented in this thesis confirm the thermal valving concept: when power is applied to the thruster there is a nearly linear increase in mass flow and thrust, so mass flow can be regulated by regulating the applied power. The same concept can also be used as a thermal control device for spacecraft.
Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang
Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with the software package Denovo, which solves the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of the Legendre polynomial expansions. A Monte Carlo simulation was also performed to benchmark the Denovo simulations, and a quantitative comparison was made of the results obtained by the two methods. Results: The difference between the simulation results of the discrete ordinates method and those of the Monte Carlo method was found to be small, with a root-mean-square difference of around 2.4%. The discrete ordinates method with a higher order of Legendre polynomial expansions underestimated the absorbed dose near the center of the phantom (i.e., the low-dose region). The quadrature set 8 with the first order of the Legendre polynomial expansions proved to be the most efficient computation in the authors' study; its single-thread computation time was 21 min on a personal computer.
Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and its primary benefit lies in its rapid computation speed. Further optimization of this method for routine clinical CT dose estimation is expected to improve its accuracy and speed.
A Coulomb collision algorithm for weighted particle simulations
NASA Technical Reports Server (NTRS)
Miller, Ronald H.; Combi, Michael R.
1994-01-01
A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing an equal fraction of the total particle density). If applied as is to simulations incorporating weighted particles, however, that algorithm equilibrates the plasma temperatures to an incorrect value compared with theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and reproduces the relaxation time scales predicted by theory. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
NASA Astrophysics Data System (ADS)
Luckhurst, G. R.; Saielli, G.
2000-03-01
Molecular field theory predicts the induction of a smectic A phase by the application of a field, either magnetic or electric, to a nematic phase. This intriguing behavior results from an enhancement of the orientational order which is coupled to the translational order and so shifts the smectic A-nematic transition. To test this prediction we have investigated a system of Gay-Berne mesogenic molecules subject to an applied field of second rank using isothermal-isobaric Monte Carlo simulations. The results of our calculations are compared with the Kventsel-Luckhurst-Zewdie molecular field theory of smectogens, modified to include the effect of an external field. We have also used the simulations to explore the possibility of inducing more ordered smectic phases with stronger fields.
Improved scatter correction with factor analysis for planar and SPECT imaging
NASA Astrophysics Data System (ADS)
Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw
2017-09-01
Quantitative nuclear medicine imaging is an increasingly important frontier. To achieve quantitative imaging, the various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation has been addressed accurately by including x-ray CT scans, correction for Compton scatter remains an open issue: the inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases image contrast. While a number of methods for scatter correction have been proposed in the past, in this work we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisitions of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) were performed using a dual-head gamma camera. To use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, which serve as input data. FA yields two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). The tomographic data (simulations and measurements) were processed for each angular position, resulting in a photo-peak and a scatter data set, and the reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was also applied for scatter correction; FA results in significant improvements in image accuracy over the DEW method for both planar and tomographic data sets.
FA can be used as a user-independent approach for scatter correction in nuclear medicine.
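For reference, the dual-energy window (DEW) method used above as the comparison baseline estimates the scatter contribution in the photopeak window as a fixed fraction of the counts recorded in an adjacent lower-energy window. A minimal per-pixel sketch, using the conventional scaling factor k = 0.5 and hypothetical count values:

```python
def dew_primary(photopeak_counts, scatter_window_counts, k=0.5):
    """Dual-energy window scatter correction for one pixel.

    The primary (unscattered) signal is estimated by subtracting a scaled
    scatter-window count from the photopeak count; negative results are
    clipped to zero. k = 0.5 is the conventional choice.
    """
    return max(photopeak_counts - k * scatter_window_counts, 0.0)

# Hypothetical pixel values, for illustration only:
primary = dew_primary(photopeak_counts=1200.0, scatter_window_counts=400.0)
print(primary)  # 1000.0
```

In practice this subtraction is applied pixel-by-pixel to whole images, and the fixed k is exactly the user-dependent assumption that the proposed FA framework avoids.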
2016-04-01
[Fragmentary record] Monte Carlo simulation of hot-electron energy relaxation and noise in doped zinc-oxide and structured ZnO transistor materials with a two-dimensional electron gas (2DEG) channel subjected to a strong field; simulations including the hot-phonon effect were performed at electron gas densities from 1×10^17 cm^-3 to 1×10^18 cm^-3.
The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, J.M., E-mail: jeanmichel.sellier@parallel.bas.bg; Dimov, I.
2014-09-15
The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation, which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We then proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Y; Singh, H; Islam, M
2014-06-01
Purpose: Output dependence on field size for uniform scanning beams, and the accuracy of treatment planning system (TPS) calculations, are not well studied. The purpose of this work is to investigate the dependence of output on field size for uniform scanning proton beams and to compare it among TPS calculation, measurement, and Monte Carlo simulation. Methods: Field size dependence was studied using field sizes between 2.5 cm and 10 cm in diameter. The field size factor was studied for a number of proton range and modulation combinations, based on the output at the center of the spread-out Bragg peak normalized to a 10 cm diameter field. Three methods were used and compared: 1) TPS calculation, 2) ionization chamber measurement, and 3) Monte Carlo simulation. The XiO TPS (Elekta, St. Louis) was used to calculate the output factor using a pencil beam algorithm; a pinpoint ionization chamber was used for measurements; and the FLUKA code was used for Monte Carlo simulations. Results: The field size factor varied with proton beam parameters, such as range, modulation, and calibration depth, and could decrease by over 10% from a 10 cm to a 3 cm diameter field for a large-range proton beam. The XiO TPS predicted the field size factor relatively well at large field sizes, but could differ from measurements by 5% or more for small-field, large-range beams. Monte Carlo simulations predicted the field size factor within 1.5% of measurements. Conclusion: Output can vary substantially with field size and needs to be accounted for to ensure accurate proton beam delivery. This is especially important for small-field beams, such as in stereotactic proton therapy, where the field size dependence is large and TPS calculation is inaccurate. Measurements or Monte Carlo simulations are recommended for output determination in such cases.
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2018-01-01
The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution, and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and of the dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology. PMID:28079526
SU-F-T-657: In-Room Neutron Dose From High Energy Photon Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christ, D; Ding, G
Purpose: To estimate neutron dose inside the treatment room from photodisintegration events in high energy photon beams using Monte Carlo simulations and experimental measurements. Methods: The Monte Carlo code MCNP6 was used for the simulations. An Eberline ESP-1 Smart Portable Neutron Detector was used to measure neutron dose. A water phantom was centered at isocenter on the treatment couch, and the detector was placed near the phantom. A Varian 2100EX linear accelerator delivered an 18MV open-field photon beam to the phantom at 400MU/min, and a camera captured the detector readings. The experimental setup was modeled in the Monte Carlo simulation. The source was modeled for two extreme cases: a) a hemispherical photon source emitting from the target and b) a cone source with the angle of the primary collimator cone. The model includes the target, primary collimator, flattening filter, secondary collimators, water phantom, detector, and concrete walls. Energy deposition tallies were recorded for neutrons in the detector and for photons at the center of the phantom. Results: For an 18MV beam with an open 10cm by 10cm field and the gantry at 180°, the Monte Carlo simulations predict the neutron dose in the detector to be 0.11% of the photon dose in the water phantom for case a) and 0.01% for case b). The measured neutron dose is 0.04% of the photon dose. Considering the range of neutron dose predicted by the Monte Carlo simulations, the calculated results are in good agreement with measurements. Conclusion: We calculated in-room neutron dose using Monte Carlo techniques, and the predicted neutron dose is confirmed by experimental measurements. If the source is remodeled as an electron beam hitting the target, for a more accurate representation of the bremsstrahlung fluence, it is feasible that the Monte Carlo simulations can be used to help in shielding designs.
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
NASA Astrophysics Data System (ADS)
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2017-03-01
The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution, and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and of the dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.
Development and Validation of a Monte Carlo Simulation Tool for Multi-Pinhole SPECT
Mok, Greta S. P.; Du, Yong; Wang, Yuchuan; Frey, Eric C.; Tsui, Benjamin M. W.
2011-01-01
Purpose In this work, we developed and validated a Monte Carlo simulation (MCS) tool for investigation and evaluation of multi-pinhole (MPH) SPECT imaging. Procedures This tool was based on a combination of the SimSET and MCNP codes. Photon attenuation and scatter in the object, as well as penetration and scatter through the collimator detector, are modeled in this tool. It allows accurate and efficient simulation of MPH SPECT with focused pinhole apertures and user-specified photon energy, aperture material, and imaging geometry. The MCS method was validated by comparing the point response function (PRF), detection efficiency (DE), and image profiles obtained from point sources and phantom experiments. A prototype single-pinhole collimator and focused four- and five-pinhole collimators fitted on a small animal imager were used for the experimental validations. We have also compared computational speed among various simulation tools for MPH SPECT, including SimSET-MCNP, MCNP, SimSET-GATE, and GATE for simulating projections of a hot sphere phantom. Results We found good agreement between the MCS and experimental results for PRF, DE, and image profiles, indicating the validity of the simulation method. The relative computational speeds for SimSET-MCNP, MCNP, SimSET-GATE, and GATE are 1: 2.73: 3.54: 7.34, respectively, for 120-view simulations. We also demonstrated the application of this MCS tool in small animal imaging by generating a set of low-noise MPH projection data of a 3D digital mouse whole body phantom. Conclusions The new method is useful for studying MPH collimator designs, data acquisition protocols, image reconstructions, and compensation techniques. It also has great potential to be applied for modeling the collimator-detector response with penetration and scatter effects for MPH in the quantitative reconstruction method. PMID:19779896
LLNL Mercury Project Trinity Open Science Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brantley, Patrick; Dawson, Shawn; McKinley, Scott
2016-04-20
The Mercury Monte Carlo particle transport code developed at Lawrence Livermore National Laboratory (LLNL) is used to simulate the transport of radiation through urban environments. These challenging calculations involve complicated geometries and require significant computational resources to complete. As a result, a question arises as to the level of convergence of the calculations with Monte Carlo simulation particle count. One main focus of the Trinity Open Science calculations was therefore to investigate the convergence of the relevant simulation quantities with Monte Carlo particle count, in order to assess the current simulation methodology. Both for this application space and with more general applicability, we also investigated the impact of code algorithms on parallel scaling on the Trinity machine, as well as the utilization of the Trinity DataWarp burst-buffer technology in Mercury via the LLNL Scalable Checkpoint/Restart (SCR) library.
2016-12-01
[Fragmentary record] Determining the statistical power of the KS and AD tests via Monte Carlo simulation. Statistical power is the probability of correctly rejecting the null hypothesis when it is false; real-world data were used to test the accuracy of the simulation, and statistical comparison of these metrics can be necessary when making such a determination.
Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.
Sheppard, C W.
1969-03-01
A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
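A random-walk simulation of the kind outlined, including both a reflecting and an absorbing boundary, can be sketched in a few lines. The 1-D geometry, boundary positions, and walker count below are illustrative choices, not a reconstruction of the 1969 program:

```python
import random

def first_passage_time(rng, boundary=10, max_steps=100_000):
    """1-D unbiased random walk starting at 0, reflecting at 0 and absorbed
    at `boundary`. Returns the number of steps until absorption."""
    x = 0
    for t in range(1, max_steps + 1):
        x += 1 if rng.random() < 0.5 else -1
        if x < 0:
            x = 0                 # reflecting boundary: push the walker back
        if x >= boundary:
            return t              # absorbing boundary: walker is removed
    return max_steps

rng = random.Random(7)
n_walkers = 2000
avg = sum(first_passage_time(rng) for _ in range(n_walkers)) / n_walkers
print(f"mean first-passage time: {avg:.1f} steps")  # theory gives 110 for this chain
```

Comparing the simulated mean first-passage time with the exact value for the corresponding Markov chain (110 steps here) mirrors the article's approach of checking the simulation against well-established theory before extending it to drift or adsorption.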
NASA Astrophysics Data System (ADS)
Sharma, Diksha; Badal, Andreu; Badano, Aldo
2012-04-01
The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty, which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features, like on-the-fly column geometry and columnar crosstalk, in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large-area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of the location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by the optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only on a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport.
The new code requires much less memory than MANTIS and, as a result, allows us to efficiently simulate large-area detectors.
Quantifying the risk of extreme aviation accidents
NASA Astrophysics Data System (ADS)
Das, Kumer Pial; Dey, Asim Kumer
2016-12-01
Air travel is considered a safe means of transportation, but when aviation accidents do occur they often result in fatalities. Fortunately, the most extreme accidents occur rarely. However, 2014 was the deadliest year of the past decade, with 111 plane crashes; the worst four caused 298, 239, 162, and 116 deaths. In this study, we assess the risk of catastrophic aviation accidents by studying historical aviation accidents. Applying a generalized Pareto model, we predict the maximum fatalities from a future aviation accident. The fitted model is compared with some of its competitor models. The uncertainty in the inferences is quantified using simulated aviation accident series generated by bootstrap resampling and Monte Carlo simulations.
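The bootstrap-resampling step used to quantify inferential uncertainty can be sketched generically: resample the observed series with replacement many times and recompute the statistic of interest on each replicate. The heavy-tailed synthetic data below stand in for an accident series; none of these numbers come from the study.

```python
import random
import statistics

rng = random.Random(0)
# Synthetic heavy-tailed "fatality" counts standing in for the accident series
# (Pareto-distributed, purely illustrative -- not real data).
data = [int(rng.paretovariate(1.5) * 20) for _ in range(60)]

def bootstrap_ci(sample, stat, n_boot, rng, alpha=0.05):
    """Percentile bootstrap: resample with replacement, recompute the statistic,
    and read the CI off the sorted replicates."""
    reps = sorted(stat(rng.choices(sample, k=len(sample))) for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_ci(data, statistics.mean, n_boot=4000, rng=rng)
print(f"95% bootstrap CI for the mean: [{lo:.1f}, {hi:.1f}]")
```

In the study's setting the resampled statistic would be a fitted generalized Pareto quantity (e.g., a predicted maximum) rather than the mean, but the resampling mechanics are the same.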
Consistent multiphysics simulation of a central tower CSP plant as applied to ISTORE
NASA Astrophysics Data System (ADS)
Votyakov, Evgeny V.; Papanicolas, Costas N.
2017-06-01
We present a unified consistent multiphysics approach to model a central tower CSP plant. The framework for the model includes Monte Carlo ray tracing (RT) and computational fluid dynamics (CFD) components utilizing the OpenFOAM C++ software library. The RT part works effectively with complex surfaces of engineering design given in CAD formats. The CFD simulation, which is based on 3D Navier-Stokes equations, takes into account all possible heat transfer mechanisms: radiation, conduction, and convection. Utilizing this package, the solar field of the experimental Platform for Research, Observation, and TEchnological Applications in Solar Energy (PROTEAS) and the Integrated STOrage and Receiver (ISTORE), developed at the Cyprus Institute, are being examined.
Diagnosing Undersampling Biases in Monte Carlo Eigenvalue and Flux Tally Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M.; Rearden, Bradley T.; Marshall, William J.
2017-02-08
Here, this study focuses on understanding the phenomenon in Monte Carlo simulations known as undersampling, in which Monte Carlo tally estimates may not encounter a sufficient number of particles during each generation to obtain unbiased tally estimates. Steady-state Monte Carlo simulations were performed using the KENO Monte Carlo tools within the SCALE code system for models of several burnup credit applications with varying degrees of spatial and isotopic complexity, and the incidence and impact of undersampling on eigenvalue and flux estimates were examined. Using an inadequate number of particle histories in each generation was found to produce a maximum bias of ~100 pcm in eigenvalue estimates and biases that exceeded 10% in fuel pin flux tally estimates. Having quantified the potential magnitude of undersampling biases in eigenvalue and flux tally estimates in these systems, this study then investigated whether Markov chain Monte Carlo convergence metrics could be integrated into Monte Carlo simulations to predict the onset and magnitude of undersampling biases. Five potential metrics for identifying undersampling biases were implemented in the SCALE code system and evaluated for their ability to predict undersampling biases by comparing the test metric scores with the observed undersampling biases. Of the five convergence metrics investigated, three (the Heidelberger-Welch relative half-width, the Gelman-Rubin $\hat{R}_c$ diagnostic, and tally entropy) showed the potential to accurately predict the behavior of undersampling biases in the responses examined.
Use of Fluka to Create Dose Calculations
NASA Technical Reports Server (NTRS)
Lee, Kerry T.; Barzilla, Janet; Townsend, Lawrence; Brittingham, John
2012-01-01
Monte Carlo codes provide an effective means of modeling three dimensional radiation transport; however, their use is both time- and resource-intensive. The creation of a lookup table or parameterization from Monte Carlo simulation allows users to perform calculations with Monte Carlo results without replicating lengthy calculations. FLUKA Monte Carlo transport code was used to develop lookup tables and parameterizations for data resulting from the penetration of layers of aluminum, polyethylene, and water with areal densities ranging from 0 to 100 g/cm^2. Heavy charged ion radiation including ions from Z=1 to Z=26 and from 0.1 to 10 GeV/nucleon were simulated. Dose, dose equivalent, and fluence as a function of particle identity, energy, and scattering angle were examined at various depths. Calculations were compared against well-known results and against the results of other deterministic and Monte Carlo codes. Results will be presented.
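The lookup-table approach described above reduces to interpolating tabulated Monte Carlo results at query points; a minimal sketch with hypothetical dose-vs-areal-density values (not actual FLUKA output):

```python
import bisect

# Hypothetical dose-vs-depth table (areal density in g/cm^2 -> dose, arbitrary
# units), standing in for values tabulated from Monte Carlo runs.
DEPTHS = [0.0, 10.0, 25.0, 50.0, 100.0]
DOSES  = [1.00, 0.82, 0.61, 0.38, 0.17]

def dose_lookup(depth):
    """Linearly interpolate dose at an intermediate areal density."""
    if not DEPTHS[0] <= depth <= DEPTHS[-1]:
        raise ValueError("depth outside tabulated range")
    i = bisect.bisect_right(DEPTHS, depth)
    if i == len(DEPTHS):                   # exactly at the last grid point
        return DOSES[-1]
    x0, x1 = DEPTHS[i - 1], DEPTHS[i]
    y0, y1 = DOSES[i - 1], DOSES[i]
    return y0 + (y1 - y0) * (depth - x0) / (x1 - x0)

print(dose_lookup(30.0))   # between the 25 and 50 g/cm^2 grid points
```

A real parameterization would interpolate over particle species and energy as well, but the payoff is the same: table lookups replace hours of transport calculation.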
Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model
NASA Astrophysics Data System (ADS)
Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.
2018-04-01
While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16^3 to 1024^3. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature K_c = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
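The Wolff cluster flipping algorithm used in the study can be sketched in a few lines; for brevity this illustration uses a small 2D lattice rather than the simple cubic (3D) lattices of the paper, and makes no attempt at the paper's precision:

```python
import math
import random

L = 16                                   # small 2D lattice for illustration
K = 0.44                                 # coupling J/kT, near the 2D critical point
P_ADD = 1.0 - math.exp(-2.0 * K)         # Wolff bond-activation probability
spins = [[1] * L for _ in range(L)]

def wolff_step():
    """Grow one Wolff cluster from a random seed spin and flip it whole."""
    i, j = random.randrange(L), random.randrange(L)
    seed = spins[i][j]
    stack, cluster = [(i, j)], {(i, j)}
    while stack:
        x, y = stack.pop()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = (x + dx) % L, (y + dy) % L      # periodic boundaries
            if ((nx, ny) not in cluster and spins[nx][ny] == seed
                    and random.random() < P_ADD):
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:                  # flip the whole cluster at once
        spins[x][y] = -seed
    return len(cluster)

random.seed(1)
sizes = [wolff_step() for _ in range(200)]
```

Flipping entire clusters rather than single spins is what defeats critical slowing down near K_c.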
Yip, Mary; Saripan, M Iqbal; Wells, Kevin; Bradley, David A
2015-01-01
Detection of buried improvised explosive devices (IEDs) is a delicate task, leading to a need to develop sensitive stand-off detection technology. The shape, composition and size of the IEDs can be expected to be revised over time in an effort to overcome increasingly sophisticated detection methods. As an example, landmines are for the most part found through metal detection, which has led to increasing use of non-ferrous materials such as wood or plastic containers for chemical-based explosives. Monte Carlo simulations have been undertaken considering three different commercially available detector materials (hyperpure Ge (HPGe), lanthanum(III) bromide (LaBr) and thallium-activated sodium iodide (NaI(Tl))), applied at a stand-off distance of 50 cm from the surface and burial depths of 0, 5 and 10 cm, with sand as the obfuscating medium. Target materials representing medium density wood and mild steel have been considered. Each detector has been modelled as a 10 cm thick cylinder with a 20 cm diameter. It appears that HPGe represents the most promising detector for this application. Although it was not the highest density material studied, its excellent energy resolving capability leads to the highest quality spectra from which detection decisions can be inferred. The simulation work undertaken here suggests that a vehicle-borne threat detection system could be envisaged using a single betatron and a series of detectors operating in parallel observing the space directly in front of the vehicle path. Furthermore, results show that non-ferrous materials such as wood can be effectively discerned in such a remotely operated detection system, with the potential to apply a signature analysis template matching technique for real-time analysis of such data.
Using a multinomial tree model for detecting mixtures in perceptual detection
Chechile, Richard A.
2014-01-01
In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
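The paper's finding that the Bayesian posterior mean outperforms the MLE can be illustrated with a toy Monte Carlo experiment for a single binomial detection rate (a simplified stand-in for the PD model parameters, assuming a uniform prior, under which the posterior mean is (k+1)/(n+2)):

```python
import random

def compare_estimators(true_p=0.8, n_trials=20, n_sims=20000, seed=42):
    """Monte Carlo comparison of the MLE k/n with the Bayesian posterior
    mean (k+1)/(n+2) under a uniform prior, for a binomial detection rate.
    Returns the mean squared error of each estimator."""
    rng = random.Random(seed)
    se_mle = se_bayes = 0.0
    for _ in range(n_sims):
        k = sum(rng.random() < true_p for _ in range(n_trials))
        se_mle += (k / n_trials - true_p) ** 2
        se_bayes += ((k + 1) / (n_trials + 2) - true_p) ** 2
    return se_mle / n_sims, se_bayes / n_sims

mse_mle, mse_bayes = compare_estimators()
print(mse_mle, mse_bayes)
```

The slight bias of the posterior mean is more than repaid by its smaller variance, the same variance-bias trade-off behind the James-Stein shrinkage effect the paper invokes.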
Su, Peiran; Eri, Qitai; Wang, Qiang
2014-04-10
Optical roughness was introduced into the bidirectional reflectance distribution function (BRDF) model to simulate the reflectance characteristics of thermal radiation. The optical roughness BRDF model stemmed from the influence of surface roughness and wavelength on the ray reflectance calculation. This model was adopted to simulate real metal emissivity. The reverse Monte Carlo method was used to display the distribution of reflectance rays. The numerical simulations showed that the optical roughness BRDF model can calculate the wavelength effect on emissivity and simulate the real metal emissivity variance with incidence angles.
A Monte Carlo Simulation of Brownian Motion in the Freshman Laboratory
ERIC Educational Resources Information Center
Anger, C. D.; Prescott, J. R.
1970-01-01
Describes a "dry-lab" experiment for the college freshman laboratory in which the essential features of Brownian motion are simulated from first principles using the Monte Carlo technique. Calculations are carried out by a computational scheme based on a computer language. Bibliography. (LC)
Grand canonical ensemble Monte Carlo simulation of the dCpG/proflavine crystal hydrate.
Resat, H; Mezei, M
1996-09-01
The grand canonical ensemble Monte Carlo molecular simulation method is used to investigate hydration patterns in the crystal hydrate structure of the dCpG/proflavine intercalated complex. The objective of this study is to show by example that the recently advocated grand canonical ensemble simulation is a computationally efficient method for determining the positions of the hydrating water molecules in protein and nucleic acid structures. A detailed molecular simulation convergence analysis and an analogous comparison of the theoretical results with experiments clearly show that the grand ensemble simulations can be far more advantageous than the comparable canonical ensemble simulations.
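The insertion/deletion moves at the heart of grand canonical ensemble Monte Carlo can be sketched for the simplest possible system, an ideal gas, where the stationary particle-number distribution is Poisson with mean zV (hypothetical parameter values; real biomolecular GCMC adds interaction energies to the acceptance rules):

```python
import random

def gcmc_ideal_gas(activity_volume, n_moves=200000, seed=3):
    """Grand canonical MC for an ideal gas: random insertions and deletions
    with the standard acceptance probabilities. The average particle number
    should converge to z*V (here, activity_volume)."""
    rng = random.Random(seed)
    N = 0
    total = 0
    for _ in range(n_moves):
        if rng.random() < 0.5:                            # attempt insertion
            if rng.random() < min(1.0, activity_volume / (N + 1)):
                N += 1
        elif N > 0:                                       # attempt deletion
            if rng.random() < min(1.0, N / activity_volume):
                N -= 1
        total += N
    return total / n_moves

print(gcmc_ideal_gas(10.0))
```

For hydration-site studies the same machinery inserts and deletes water molecules, so occupancy converges without guessing the number of waters in advance.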
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marous, L; Muryn, J; Liptak, C
2016-06-15
Purpose: Monte Carlo simulation is a frequently used technique for assessing patient dose in CT. The accuracy of a Monte Carlo program is often validated using the standard CT dose index (CTDI) phantoms by comparing simulated and measured CTDI{sub 100}. To achieve good agreement, many input parameters in the simulation (e.g., energy spectrum and effective beam width) need to be determined. However, not all the parameters have equal importance. Our aim was to assess the relative importance of the various factors that influence the accuracy of simulated CTDI{sub 100}. Methods: A Monte Carlo program previously validated for a clinical CT system was used to simulate CTDI{sub 100}. For the standard CTDI phantoms (32 and 16 cm in diameter), CTDI{sub 100} values from central and four peripheral locations at 70 and 120 kVp were first simulated using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which intentional errors were introduced into the input parameters, the effects of which on simulated CTDI{sub 100} were analyzed. Results: At 38.4-mm collimation, errors in effective beam width up to 5.0 mm showed negligible effects on simulated CTDI{sub 100} (<1.0%). Likewise, errors in acrylic density of up to 0.01 g/cm{sup 3} resulted in small CTDI{sub 100} errors (<2.5%). In contrast, errors in spectral HVL produced more significant effects: slight deviations (±0.2 mm Al) produced errors up to 4.4%, whereas more extreme deviations (±1.4 mm Al) produced errors as high as 25.9%. Lastly, ignoring the CT table introduced errors up to 13.9%. Conclusion: Monte Carlo simulated CTDI{sub 100} is insensitive to errors in effective beam width and acrylic density, but sensitive to errors in spectral HVL. To obtain accurate results, the CT table should not be ignored.
This work was supported by a Faculty Research and Development Award from Cleveland State University.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mashouf, S; Lai, P; Karotki, A
2014-06-01
Purpose: Seed brachytherapy is currently used for adjuvant radiotherapy of early stage prostate and breast cancer patients. The current standard for calculation of dose surrounding the brachytherapy seeds is based on American Association of Physicists in Medicine Task Group No. 43 (TG-43 formalism), which generates the dose in a homogeneous water medium. Recently, AAPM Task Group No. 186 emphasized the importance of accounting for tissue heterogeneities. This can be done using Monte Carlo (MC) methods, but it requires knowing the source structure and tissue atomic composition accurately. In this work we describe an efficient analytical dose inhomogeneity correction algorithm implemented using the MIM Symphony treatment planning platform to calculate dose distributions in heterogeneous media. Methods: An Inhomogeneity Correction Factor (ICF) is introduced as the ratio of absorbed dose in tissue to that in water medium. ICF is a function of tissue properties and independent of source structure. The ICF is extracted using CT images, and the absorbed dose in tissue can then be calculated by multiplying the dose calculated by the TG-43 formalism by the ICF. To evaluate the methodology, we compared our results with Monte Carlo simulations as well as experiments in phantoms with known density and atomic compositions. Results: The dose distributions obtained through applying ICF to the TG-43 protocol agreed very well with those of Monte Carlo simulations as well as experiments in all phantoms. In all cases, the mean relative error was reduced by at least 50% when the ICF correction factor was applied to the TG-43 protocol. Conclusion: We have developed a new analytical dose calculation method which enables personalized dose calculations in heterogeneous media. The advantages over stochastic methods are computational efficiency and the ease of integration into clinical settings, as detailed source structure and tissue segmentation are not needed.
University of Toronto, Natural Sciences and Engineering Research Council of Canada.
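The correction described above is multiplicative: dose_tissue = dose_TG43 × ICF per voxel. A schematic sketch with a toy inverse-square "TG-43" dose and hypothetical ICF values (the real formalism also includes radial dose and anisotropy functions, and ICFs are derived from CT images rather than assumed):

```python
def tg43_dose(r, sk=1.0, lam=1.0):
    """Toy TG-43-style water dose falling off as 1/r^2 (geometry factor only).
    sk and lam stand in for air-kerma strength and dose-rate constant."""
    return sk * lam / (r * r)

def corrected_dose(r, icf):
    """Heterogeneity-corrected dose: TG-43 water dose times the voxel's
    Inhomogeneity Correction Factor (dose-in-tissue / dose-in-water)."""
    return tg43_dose(r) * icf

# Hypothetical ICF values: ~1 in water-like soft tissue, <1 in less attenuating media.
print(corrected_dose(2.0, icf=1.0))    # water-equivalent voxel
print(corrected_dose(2.0, icf=0.93))   # heterogeneous voxel
```

Because the ICF is independent of source structure, the same per-voxel factors can be reused across seeds, which is what makes the method fast enough for clinical planning.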
APS undulator and wiggler sources: Monte-Carlo simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, S.L.; Lai, B.; Viccaro, P.J.
1992-02-01
Standard insertion devices will be provided to each sector by the Advanced Photon Source. It is important to define the radiation characteristics of these general purpose devices. In this document, results of Monte-Carlo simulations are presented. These results, based on the SHADOW program, include the APS Undulator A (UA), Wiggler A (WA), and Wiggler B (WB).
Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.
2006-01-01
The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…
A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments
S. Healey; P. Patterson; S. Urbanski
2014-01-01
Remotely sensed observations can provide unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...
NASA Astrophysics Data System (ADS)
Eddowes, M. H.; Mills, T. N.; Delpy, D. T.
1995-05-01
A Monte Carlo model of light backscattered from turbid media has been used to simulate the effects of weak localization in biological tissues. A validation technique is used that implies that for the scattering and absorption coefficients and for refractive index mismatches found in tissues, the Monte Carlo method is likely to provide more accurate results than the methods previously used. The model also has the ability to simulate the effects of various illumination profiles and other laboratory-imposed conditions. A curve-fitting routine has been developed that might be used to extract the optical coefficients from the angular intensity profiles seen in experiments on turbid biological tissues, data that could be obtained in vivo.
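The transport bookkeeping behind such a model can be caricatured with a weighted photon random walk; this 1-D sketch (isotropic scattering, survival weighting for absorption, hypothetical optical coefficients) illustrates only the sampling scheme, not the full 3-D geometry or the coherent backscattering physics of the paper:

```python
import random

def backscatter_fraction(mu_s, mu_a, n_photons=20000, seed=5):
    """Crude 1-D photon random walk in a semi-infinite turbid medium:
    exponential free paths, absorption handled by multiplying a survival
    weight by the single-scattering albedo at each collision. Returns the
    fraction of launched weight re-emerging at the surface."""
    rng = random.Random(seed)
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    escaped = 0.0
    for _ in range(n_photons):
        z, direction, weight = 0.0, 1.0, 1.0    # start heading into the medium
        while weight > 1e-4:                    # terminate negligible weights
            z += direction * rng.expovariate(mu_t)
            if z <= 0.0:                        # crossed back out of the surface
                escaped += weight
                break
            weight *= albedo                    # absorption at the collision
            direction = rng.choice((1.0, -1.0)) # isotropic (1-D) scattering
    return escaped / n_photons

print(backscatter_fraction(mu_s=10.0, mu_a=0.1))
```

Even this caricature reproduces the expected trend: raising the absorption coefficient lowers the backscattered fraction.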
Radial-based tail methods for Monte Carlo simulations of cylindrical interfaces
NASA Astrophysics Data System (ADS)
Goujon, Florent; Bêche, Bruno; Malfreyt, Patrice; Ghoufi, Aziz
2018-03-01
In this work, we implement for the first time the radial-based tail methods for Monte Carlo simulations of cylindrical interfaces. The efficiency of this method is then evaluated through the calculation of surface tension and coexisting properties. We show that the inclusion of tail corrections during the course of the Monte Carlo simulation impacts the coexisting and the interfacial properties. We establish that the long-range corrections to the surface tension are of the same order of magnitude as those obtained from the planar interface. We show that the slab-based tail method does not amend the localization of the Gibbs equimolar dividing surface. Additionally, a non-monotonic behavior of surface tension is exhibited as a function of the radius of the equimolar dividing surface.
NASA Astrophysics Data System (ADS)
Allaf, M. Athari; Shahriari, M.; Sohrabpour, M.
2004-04-01
A new method using Monte Carlo source simulation of interference reactions in neutron activation analysis experiments has been developed. The neutron spectrum at the sample location has been simulated using the Monte Carlo code MCNP and the contributions of different elements to produce a specified gamma line have been determined. The produced response matrix has been used to measure peak areas and the sample masses of the elements of interest. A number of benchmark experiments have been performed and the calculated results verified against known values. The good agreement obtained between the calculated and known values suggests that this technique may be useful for the elimination of interference reactions in neutron activation analysis.
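Unfolding element masses from measured peak areas through a response matrix amounts to solving a small linear system; a sketch with a hypothetical two-element, two-peak response matrix (illustrative numbers, not the paper's MCNP-derived values):

```python
def solve_2x2(a, b):
    """Solve a 2x2 linear system a @ m = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    m0 = (b[0] * a[1][1] - a[0][1] * b[1]) / det
    m1 = (a[0][0] * b[1] - b[0] * a[1][0]) / det
    return m0, m1

# Hypothetical response matrix: net counts in each gamma peak per mg of each
# element, including the interfering contribution of element B to peak 1.
response = [[120.0, 15.0],    # peak 1: mostly element A, some interference from B
            [  4.0, 90.0]]    # peak 2: mostly element B
peak_areas = [1350.0, 940.0]  # measured net peak areas

mass_a, mass_b = solve_2x2(response, peak_areas)
print(mass_a, mass_b)
```

With the interference terms built into the off-diagonal entries, inverting the system removes their contribution from the recovered masses, which is the elimination the abstract describes.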
Force field development with GOMC, a fast new Monte Carlo molecular simulation code
NASA Astrophysics Data System (ADS)
Mick, Jason Richard
In this work, GOMC (GPU Optimized Monte Carlo), a new fast, flexible, and free molecular Monte Carlo code for the simulation of atomistic chemical systems, is presented. The results of a large Lennard-Jonesium simulation in the Gibbs ensemble are presented, along with force fields developed using the code. To fit the models, a quantitative fitting process is outlined using a scoring function and heat maps. The presented n-6 force fields include force fields for noble gases and branched alkanes. These force fields are shown to be the most accurate LJ or n-6 force fields to date for these compounds, capable of reproducing pure fluid behavior and binary mixture behavior to a high degree of accuracy.
Automated parameterization of intermolecular pair potentials using global optimization techniques
NASA Astrophysics Data System (ADS)
Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk
2014-12-01
In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
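Pure random search, the simplest of the five algorithms compared, can be stated in a few lines; here the costly molecular simulation is replaced by a cheap quadratic stand-in objective with a known optimum (everything below is illustrative, not the CoSMoS workflow):

```python
import random

def loss(params):
    """Toy objective standing in for 'simulated vs. target observables':
    squared deviation from a hypothetical optimum at (2.0, -1.0)."""
    x, y = params
    return (x - 2.0) ** 2 + (y + 1.0) ** 2

def pure_random_search(bounds, n_evals=5000, seed=4):
    """Pure random search: sample parameter sets uniformly within bounds
    and keep the best point seen."""
    rng = random.Random(seed)
    best, best_loss = None, float("inf")
    for _ in range(n_evals):
        cand = [rng.uniform(lo, hi) for lo, hi in bounds]
        l = loss(cand)
        if l < best_loss:
            best, best_loss = cand, l
    return best, best_loss

best, best_loss = pure_random_search([(-5.0, 5.0), (-5.0, 5.0)])
print(best, best_loss)
```

Since each evaluation of the real loss is a full molecular simulation, the paper's point is precisely that smarter strategies than this baseline are needed to keep the number of evaluations small.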
Numerical simulations of crystal growth in a transdermal drug delivery system
NASA Astrophysics Data System (ADS)
Zeng, Jianming; Jacob, Karl I.; Tikare, Veena
2004-02-01
Grain growth by precipitation and Ostwald ripening in an unstressed matrix of a dissolved crystallizable component was simulated using a kinetic Monte Carlo model. This model was used previously to study Ostwald ripening in the high crystallizable component regime and was shown to correctly simulate solution, diffusion and precipitation. In this study, the same model with modifications was applied to the low crystallizable regime of interest to the transdermal drug delivery system (TDS) community. We demonstrate the model's utility by simulating precipitation and grain growth during isothermal storage at different supersaturation conditions. The simulation results provide a first approximation for the crystallization occurring in TDS. It has been reported that, at relatively high temperatures, growth of drug crystals in TDS occurs only in the middle third of the polymer layer. The results from the simulations support these findings that crystal growth is limited to the middle third of the region, where the availability of crystallizable components is the highest, for cluster growth at relatively high temperature.
Realistic finite temperature simulations of magnetic systems using quantum statistics
NASA Astrophysics Data System (ADS)
Bergqvist, Lars; Bergman, Anders
2018-01-01
We have performed realistic atomistic simulations at finite temperatures using Monte Carlo and atomistic spin dynamics simulations incorporating quantum (Bose-Einstein) statistics. The description is much improved at low temperatures compared to the classical (Boltzmann) statistics normally used in these kinds of simulations, while at higher temperatures the classical statistics are recovered. This corrected low-temperature description is reflected in both the magnetization and the magnetic specific heat, the latter allowing for improved modeling of the magnetic contribution to free energies. A central property in the method is the magnon density of states at finite temperatures, and we have compared several different implementations for obtaining it. The method has no restrictions regarding chemical and magnetic order of the considered materials. This is demonstrated by applying the method to elemental ferromagnetic systems, including Fe and Ni, as well as Fe-Co random alloys and the ferrimagnetic system GdFe3.
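The quantum-versus-classical distinction reduces to the occupation statistics of each magnon mode: Bose-Einstein occupation suppresses the mode energy at low temperature and recovers the classical k_B·T at high temperature. A sketch in reduced units (hbar = k_B = 1; an illustrative single frequency, not a real magnon density of states):

```python
import math

def bose_einstein_energy(omega, T, kB=1.0, hbar=1.0):
    """Mean energy of a bosonic mode of frequency omega with quantum
    (Bose-Einstein) statistics: n_BE(omega, T) * hbar * omega."""
    if T == 0.0:
        return 0.0
    x = hbar * omega / (kB * T)
    return hbar * omega / math.expm1(x)

def classical_energy(omega, T, kB=1.0):
    """Classical (Boltzmann/equipartition) limit: kB * T per mode."""
    return kB * T

omega = 1.0
for T in (0.1, 1.0, 10.0):
    print(T, bose_einstein_energy(omega, T), classical_energy(omega, T))
```

Summing such mode energies over a magnon density of states, rather than assigning k_B·T to every mode, is what corrects the low-temperature magnetization and specific heat in the abstract above.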