Quantum Monte Carlo Simulations of Correlated-Electron Models
NASA Astrophysics Data System (ADS)
Zhang, Shiwei
1996-05-01
We briefly review quantum Monte Carlo simulation methods for strongly correlated fermion systems and the well-known "sign" problem that plagues these methods. We then discuss recent efforts to overcome the problem in the context of simulations of lattice models of electron correlations. In particular, we describe a new algorithm^1, called the constrained path Monte Carlo (CPMC), for studying ground-state (T = 0 K) properties. It has the form of a random walk in a space of mean-field solutions (Slater determinants); the exponential decay of "sign" or signal-to-noise ratio is eliminated by constraining the paths of the random walk according to a known trial wave function. Applications of this algorithm to the Hubbard model have enabled accurate and systematic studies of correlation functions, including s- and d-wave pairings, and hence the long-standing problem of the model's relevance to superconductivity. The method is directly applicable to a variety of other models important to understand high-Tc superconductors and heavy-fermion compounds. In addition, it is expected to be useful to simulations of nuclei, atoms, molecules, and solids. We also comment on possible extensions of the algorithm to finite-temperature calculations. Work supported in part by the Department of Energy's High Performance Computing and Communication Program at Los Alamos National Laboratory, and at OSU by DOE-Basic Energy Sciences, Division of Materials Sciences. ^1 Shiwei Zhang, J. Carlson, and J. E. Gubernatis, Phys. Rev. Lett. 74, 3652 (1995).
Modeling and Computer Simulation: Molecular Dynamics and Kinetic Monte Carlo
Wirth, B.D.; Caturla, M.J.; Diaz de la Rubia, T.
2000-10-10
Recent years have witnessed tremendous advances in the realistic multiscale simulation of complex physical phenomena, such as irradiation and aging effects of materials, made possible by the enormous progress achieved in computational physics for calculating reliable, yet tractable interatomic potentials and the vast improvements in computational power and parallel computing. As a result, computational materials science is emerging as an important complement to theory and experiment to provide fundamental materials science insight. This article describes the atomistic modeling techniques of molecular dynamics (MD) and kinetic Monte Carlo (KMC), and an example of their application to radiation damage production and accumulation in metals. It is important to note at the outset that the primary objective of atomistic computer simulation should be obtaining physical insight into atomic-level processes. Classical molecular dynamics is a powerful method for obtaining insight about the dynamics of physical processes that occur on relatively short time scales. Current computational capability allows treatment of atomic systems containing as many as 10^9 atoms for times on the order of 100 ns (10^-7 s). The main limitation of classical MD simulation is the relatively short times accessible. Kinetic Monte Carlo provides the ability to reach macroscopic times by modeling diffusional processes and time-scales rather than individual atomic vibrations. Coupling MD and KMC has developed into a powerful, multiscale tool for the simulation of radiation damage in metals.
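The KMC idea summarized above — advancing the clock by event rates rather than atomic vibrations — is usually implemented with the residence-time (BKL) algorithm. A minimal sketch, assuming a precomputed list of event rates (the rate values below are illustrative, not from the article):

```python
import math
import random

def kmc_step(rates, rng=random.Random(0)):
    """One kinetic Monte Carlo step (residence-time algorithm):
    pick an event with probability proportional to its rate and
    advance the clock by an exponentially distributed waiting time."""
    total = sum(rates)
    # Select event i with probability rates[i] / total.
    r = rng.random() * total
    acc = 0.0
    event = len(rates) - 1
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            event = i
            break
    # Time advance: dt = -ln(u) / total_rate, u uniform in (0, 1].
    dt = -math.log(1.0 - rng.random()) / total
    return event, dt

# Example: three diffusion events with different rates (arbitrary units).
event, dt = kmc_step([1.0, 2.0, 7.0])
```

Because dt is set by the total rate, fast processes dominate the event selection while the clock still advances by physically meaningful increments, which is how KMC bridges to macroscopic times.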
Applications for a fast Monte Carlo model for Lidar simulations
NASA Astrophysics Data System (ADS)
Buras, R.; Mayer, B.
2009-04-01
Lidars have the means to probe a multitude of components of the atmosphere with fairly exact spatial precision. However, in order to correctly retrieve atmospheric observables it is necessary to take into account geometrical effects as well as the contribution of multiply scattered photons. Thus retrieval algorithms need thorough validation by an exact model. In particular, physical or geometrical effects not taken into account by, or approximated in, the retrieval algorithm must be proven to be unimportant, or correctly approximated. To this end we present a fast yet exact lidar simulator based on the Monte Carlo method. The simulator is part of the Monte Carlo solver MYSTIC contained in the libRadtran software package. The lidar simulator can be applied to several types of lidars, such as HSRL (e.g. EarthCARE), trace gas detectors (e.g. A-SCOPE), and wide-angle lidars (e.g. WAIL), for space- and air-borne lidars as well as ground lidars.
Modelling laser light propagation in thermoplastics using Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Parkinson, Alexander
Laser welding has great potential as a fast, non-contact joining method for thermoplastic parts. In the laser transmission welding of thermoplastics, light passes through a semi-transparent part to reach the weld interface. There, it is absorbed as heat, which causes melting and subsequent welding. The distribution and quantity of light reaching the interface are important for predicting the quality of a weld, but are experimentally difficult to estimate. A model for simulating the path of this laser light through these light-scattering plastic parts has been developed. The technique uses a Monte Carlo approach to generate photon paths through the material, accounting for absorption, scattering and reflection between boundaries in the transparent polymer. It was assumed that any light escaping the bottom surface contributed to welding. The photon paths are then scaled according to the input beam profile in order to simulate non-Gaussian beam profiles. A method for determining the 3 independent optical parameters to accurately predict transmission and beam power distribution at the interface was established using experimental data for polycarbonate at 4 different glass fibre concentrations and polyamide-6 reinforced with 20% long glass fibres. Exit beam profiles and transmissions predicted by the simulation were found to be in generally good agreement (R² > 0.90) with experimental measurements. The simulations allowed the prediction of transmission and power distributions at other thicknesses, as well as information on reflection and energy absorption, for these materials.
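The photon-path approach described above can be reduced to a one-dimensional sketch: sample free paths from the extinction coefficient, then at each interaction either absorb or re-scatter, counting photons that exit the bottom surface. This is a minimal illustration with index-matched boundaries and isotropic scattering (the optical coefficients below are placeholders, not the fitted polycarbonate values):

```python
import math
import random

def transmit_fraction(mu_a, mu_s, thickness, n_photons=20000, seed=1):
    """Monte Carlo estimate of the fraction of photons exiting the bottom
    of an absorbing/scattering slab. Isotropic scattering, matched
    boundaries; coefficients in inverse units of the thickness."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s                # extinction coefficient
    transmitted = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0              # start at the top, heading down
        while True:
            # Free path length sampled from exp(-mu_t * s).
            step = -math.log(1.0 - rng.random()) / mu_t
            z += uz * step
            if z >= thickness:        # escaped the bottom: contributes to welding
                transmitted += 1
                break
            if z < 0.0:               # escaped back through the top
                break
            if rng.random() < mu_a / mu_t:   # interaction is an absorption
                break
            uz = 2.0 * rng.random() - 1.0    # isotropic re-scatter (new cosine)
    return transmitted / n_photons

T = transmit_fraction(mu_a=0.1, mu_s=1.0, thickness=1.0)
```

In the limit of no scattering the estimate reduces to the Beer–Lambert law exp(-mu_a * thickness), a useful sanity check for this kind of simulator.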
Monte Carlo simulations of lattice models for single polymer systems
Hsu, Hsiao-Ping
2014-10-28
Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to the chain length N ~ O(10^4). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √10, we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversible random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains for different stiffness controlled by a bending potential. The persistence lengths of chains extracted from the orientational correlations are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior.
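The pruned-enriched Rosenbluth method builds on plain Rosenbluth sampling of self-avoiding walks: each step is chosen among the unoccupied lattice neighbors and the bias is recorded as a weight. A minimal sketch of the underlying Rosenbluth growth on the simple cubic lattice (PERM's pruning and enrichment of the weight population are omitted):

```python
import random

# Simple-cubic lattice step directions.
MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def rosenbluth_walk(n_steps, rng):
    """Grow one self-avoiding walk, choosing each step among the currently
    unoccupied neighbors and accumulating the Rosenbluth weight
    W = prod(k_i / 6), where k_i is the number of free neighbors at step i.
    Returns (visited_sites, weight), or (None, 0.0) if the walk is trapped."""
    pos = (0, 0, 0)
    visited = {pos}
    weight = 1.0
    for _ in range(n_steps):
        trials = [tuple(p + d for p, d in zip(pos, m)) for m in MOVES]
        free = [q for q in trials if q not in visited]
        if not free:                  # dead end: attrition of the sample
            return None, 0.0
        weight *= len(free) / 6.0
        pos = rng.choice(free)
        visited.add(pos)
    return visited, weight

rng = random.Random(42)
walk, w = rosenbluth_walk(20, rng)
```

Averages over walks must be reweighted by W; PERM keeps these weights well-behaved at large N (of order 10^4 in the study above) by cloning high-weight configurations and pruning low-weight ones.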
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
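Monte Carlo parametric testing of the kind described above amounts to sampling candidate parameter sets at random and scoring each against observed behavior. A minimal sketch of such a random search; the objective function and bounds below are hypothetical stand-ins, not the study's actual muscle parameters:

```python
import random

def monte_carlo_search(objective, bounds, n_samples=1000, seed=0):
    """Random (Monte Carlo) search over a box of parameter bounds,
    keeping the sample that minimizes the objective."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_samples):
        params = [rng.uniform(lo, hi) for lo, hi in bounds]
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective: distance of a simulated "activation" from a target of 0.6.
best, score = monte_carlo_search(lambda p: abs(p[0] * p[1] - 0.6),
                                 bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In practice the score would come from a full musculoskeletal simulation run, and combinatorial reduction would then narrow the sampled space around the best-performing regions.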
Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue
Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William
2008-01-01
In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.
Modeling root-reinforcement with a Fiber-Bundle Model and Monte Carlo simulation
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper uses sensitivity analysis and a Fiber-Bundle Model (FBM) to examine assumptions underpinning root-reinforcement models. First, different methods for apportioning load between intact roots were investigated. Second, a Monte Carlo approach was used to simulate plants with heartroot, platero...
Monte Carlo simulation of classical spin models with chaotic billiards
NASA Astrophysics Data System (ADS)
Suzuki, Hideyuki
2013-11-01
It has recently been shown that the computing abilities of Boltzmann machines, or Ising spin-glass models, can be implemented by chaotic billiard dynamics without any use of random numbers. In this paper, we further numerically investigate the capabilities of the chaotic billiard dynamics as a deterministic alternative to random Monte Carlo methods by applying it to classical spin models in statistical physics. First, we verify that the billiard dynamics can yield samples that converge to the true distribution of the Ising model on a small lattice, and we show that it appears to have the same convergence rate as random Monte Carlo sampling. Second, we apply the billiard dynamics to finite-size scaling analysis of the critical behavior of the Ising model and show that the phase-transition point and the critical exponents are correctly obtained. Third, we extend the billiard dynamics to spins that take more than two states and show that it can be applied successfully to the Potts model. We also discuss the possibility of extensions to continuous-valued models such as the XY model.
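For reference, the conventional random-number Metropolis sampling against which the deterministic billiard dynamics is compared can be sketched for the 2D Ising model as follows (zero field, coupling J = 1; lattice size and temperature are illustrative):

```python
import math
import random

def metropolis_ising(L=8, beta=0.3, sweeps=200, seed=0):
    """Random-number Metropolis sampling of the 2D Ising model with
    periodic boundaries. Returns the mean magnetization per spin,
    averaged over the second half of the run."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # Sum of the four nearest neighbors (periodic boundaries).
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nb        # energy cost of flipping (i, j)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -spins[i][j]
        if sweep >= sweeps // 2:
            mags.append(sum(map(sum, spins)) / (L * L))
    return sum(mags) / len(mags)

m = metropolis_ising()
```

The billiard approach replaces the two random draws per update (site choice and acceptance test) with deterministic chaotic dynamics while aiming to reproduce the same stationary distribution.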
The simulation of radar and coherent backscattering with the Monte Carlo model MYSTIC
NASA Astrophysics Data System (ADS)
Pause, Christian; Buras, Robert; Emde, Claudia; Mayer, Bernhard
2013-05-01
A new method to simulate radar and coherent backscattering within the framework of the 3D Monte Carlo radiative transfer model MYSTIC has been developed. Radar simulation is handled by the already existing lidar simulator; therefore, the larger part of this paper presents a solution for simulating coherent backscattering and shows a comparison with a real case.
Modeling of hysteresis loops by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Nehme, Z.; Labaye, Y.; Sayed Hassan, R.; Yaacoub, N.; Greneche, J. M.
2015-12-01
Recent advances in MC simulations of magnetic properties have been devoted mainly to non-interacting systems or ultrafast phenomena, while the modeling of quasi-static hysteresis loops of an assembly of spins with strong internal exchange interactions remains limited to specific cases. For an arbitrary assembly of magnetic moments, we propose MC simulations on the basis of a three-dimensional classical Heisenberg model applied to an isolated magnetic slab with first-nearest-neighbor exchange interactions and uniaxial anisotropy. Three different algorithms were successively implemented in order to simulate hysteresis loops: the classical free algorithm, the cone algorithm and a mixed one that adds some global rotations. We focus our study particularly on the impact of varying the anisotropy constant on the coercive field for different temperatures and algorithms. A study of the angular acceptance distribution of moves allows the dynamics of our simulations to be characterized. The results reveal that the coercive field is linearly related to the anisotropy, provided that the algorithm and the numerical conditions are carefully chosen. As a general tendency, it is found that the efficiency of the simulation can be greatly enhanced by using the mixed algorithm, which mimics the physics of collective behavior. Consequently, this study leads to better-quantified coercive-field measurements resulting from the physical phenomena of complex magnetic (nano)architectures with different anisotropy contributions.
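The "classical free algorithm" mentioned above proposes a completely new random orientation for one spin and applies the Metropolis test to the local energy, which here combines exchange, uniaxial anisotropy and Zeeman terms. A minimal single-spin sketch (J, K and the field are illustrative values, not the paper's parameters):

```python
import math
import random

def local_energy(s, neighbors, J=1.0, K=0.5, h=(0.0, 0.0, 0.0)):
    """Local energy of one classical Heisenberg spin s (unit 3-vector):
    exchange -J s.S_nb over nearest neighbors, uniaxial anisotropy
    -K (s_z)^2, and Zeeman term -h.s."""
    e = -K * s[2] ** 2 - sum(hc * sc for hc, sc in zip(h, s))
    for nb in neighbors:
        e -= J * sum(a * b for a, b in zip(s, nb))
    return e

def random_unit_vector(rng):
    """Uniform direction on the unit sphere, via normalized Gaussians."""
    while True:
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in v))
        if norm > 1e-12:
            return [c / norm for c in v]

def metropolis_spin_move(s, neighbors, beta, rng):
    """Classical free algorithm: propose an entirely new random direction
    and accept it with the Metropolis probability."""
    trial = random_unit_vector(rng)
    dE = local_energy(trial, neighbors) - local_energy(s, neighbors)
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        return trial
    return s

rng = random.Random(3)
s = metropolis_spin_move([0.0, 0.0, 1.0], [[0.0, 0.0, 1.0]], beta=2.0, rng=rng)
```

The cone algorithm restricts the trial direction to a cone around the current spin, and the mixed algorithm adds occasional global rotations of the whole assembly on top of such local moves.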
Monte Carlo simulations of a model two-dimensional, two-patch colloidal particles
NASA Astrophysics Data System (ADS)
Rżysko, W.; Sokołowski, S.; Staszewski, T.
2015-08-01
We carried out Monte Carlo simulations of two-patch colloids in two dimensions. A similar model investigated theoretically in three dimensions exhibited a re-entrant phase transition. Our simulations indicate that no re-entrant transition exists in two dimensions and that the phase diagram for the system is of a swan-neck type, corresponding solely to the fluid-solid transition.
ERIC Educational Resources Information Center
Kim, Su-Young
2012-01-01
Just as growth mixture models are useful with single-phase longitudinal data, multiphase growth mixture models can be used with multiple-phase longitudinal data. One of the practically important issues in single- and multiphase growth mixture models is the sample size requirements for accurate estimation. In a Monte Carlo simulation study, the…
Monte Carlo simulation of CP sup N minus 1 models
Campostrini, M.; Rossi, P.; Vicari, E. )
1992-09-15
Numerical simulations of two-dimensional CP^(N-1) models are performed at N = 2, 10, and 21. The lattice action adopted depends explicitly on the gauge degrees of freedom and shows precocious scaling. Our tests of scaling are the stability of adimensional physical quantities (second moment of the correlation function versus inverse mass gap, magnetic susceptibility versus square correlation length) and rotation invariance. Topological properties of the models are explored by measuring the topological susceptibility and by extracting the Abelian string tension. Several different (local and nonlocal) lattice definitions of topological charge are discussed and compared. The qualitative physical picture derived from the continuum 1/N expansion is confirmed, and agreement with quantitative 1/N predictions is satisfactory. Variant (Symanzik-improved) actions are considered in the CP^1 ≈ O(3) case, and agreement with universality and previous simulations (when comparable) is found. The simulation algorithm is an efficient mixture of over-heat-bath and microcanonical algorithms. The dynamical features and critical exponents of the algorithm are discussed in detail.
Direct simulation Monte Carlo modeling of e-beam metal deposition
Venkattraman, A.; Alexeenko, A. A.
2010-07-15
The three-dimensional direct simulation Monte Carlo (DSMC) method is applied here to model the electron-beam physical vapor deposition of copper thin films. Various molecular models for copper-copper interactions have been considered and a suitable molecular model has been determined based on comparisons of dimensional mass fluxes obtained from simulations and previous experiments. The variable hard sphere model that is determined for atomic copper vapor can be used in DSMC simulations for design and analysis of vacuum deposition systems, allowing for accurate prediction of growth rates, uniformity, and microstructure.
Monte Carlo simulation based toy model for fission process
NASA Astrophysics Data System (ADS)
Kurniadi, Rizal; Waris, Abdul; Viridi, Sparisoma
2016-09-01
Nuclear fission has traditionally been modeled using two approaches, macroscopic and microscopic. This work proposes another approach, in which the nucleus is treated as a toy model. The aim is to assess the usefulness of the particle distribution in fission-yield calculations. Since the nucleus is a toy, the Fission Toy Model (FTM) does not completely represent the real process in nature. A fission event in FTM is represented by one random number, taken as the width of the probability distribution of nucleon positions in the compound nucleus when the fission process starts. Adopting the nucleon-density approximation, a Gaussian distribution is chosen as the particle distribution. This distribution function generates random numbers that set the distances between particles and a central point. The scission process is started by splitting the compound nucleus's central point into two parts, a left central point and a right central point. The yield is determined from the portion of the nucleon distribution, which is proportional to the portion of mass numbers. Using the modified FTM, the characteristics of the particle distribution in each fission event can be formed before the fission process. These characteristics could be used to make predictions about real nucleon interactions in the fission process. The results of the FTM calculation suggest that the γ parameter plays the role of an energy.
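The core of the scheme described above — Gaussian nucleon positions split at a central point into two fragments — can be sketched very simply. This is an illustration of the idea only, not the authors' code; the mass number and width are placeholder values:

```python
import random

def toy_fission(A=236, sigma=1.0, seed=0):
    """Toy sketch of the FTM idea: draw A nucleon positions from a Gaussian
    of width sigma (the single random number characterizing the event),
    split at the center, and read the fragment mass numbers off the
    two sides of the split."""
    rng = random.Random(seed)
    positions = [rng.gauss(0.0, sigma) for _ in range(A)]
    left = sum(1 for x in positions if x < 0.0)     # nucleons in the left fragment
    right = A - left                                 # nucleons in the right fragment
    return left, right

a1, a2 = toy_fission()
```

Repeating this over many events, with the width itself drawn at random per event, yields a distribution of fragment mass splits, i.e. a toy fission-yield curve.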
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
NASA Astrophysics Data System (ADS)
Erdem, Riza; Aydiner, Ekrem
2009-03-01
Voltage-gated ion channels are key molecules for the generation and propagation of electrical signals in excitable cell membranes. The voltage-dependent switching of these channels between conducting and nonconducting states is a major factor in controlling the transmembrane voltage. In this study, a statistical mechanics model of these molecules has been discussed on the basis of a two-dimensional spin model. A new Hamiltonian and a new Monte Carlo simulation algorithm are introduced to simulate such a model. It was shown that the results well match the experimental data obtained from batrachotoxin-modified sodium channels in the squid giant axon using the cut-open axon technique.
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
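The "generic black-box random position generator" that the customised generators are compared against is typically rejection sampling from a bounding box. A minimal sketch (the Gaussian "blob" density is a hypothetical toy model, not one of SKIRT's components):

```python
import math
import random

def sample_position(density, bounds, density_max, rng):
    """Draw one random position from an (unnormalized) 3D density by
    rejection sampling inside a bounding box. density_max must be an
    upper bound on the density over the box."""
    (xlo, xhi), (ylo, yhi), (zlo, zhi) = bounds
    while True:
        p = (rng.uniform(xlo, xhi), rng.uniform(ylo, yhi), rng.uniform(zlo, zhi))
        # Accept p with probability density(p) / density_max.
        if rng.random() * density_max <= density(*p):
            return p

# Hypothetical toy density: a spherically symmetric Gaussian blob.
blob = lambda x, y, z: math.exp(-(x * x + y * y + z * z) / 2.0)
rng = random.Random(7)
pts = [sample_position(blob, [(-4.0, 4.0)] * 3, 1.0, rng) for _ in range(2000)]
```

The drawback motivating customised generators is visible here: for a density concentrated in a small part of the box, most proposals are rejected, whereas a generator tailored to the component (e.g. sampling radius and angles analytically) accepts every draw.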
NASA Astrophysics Data System (ADS)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.
2016-02-01
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code
NASA Astrophysics Data System (ADS)
Merheb, C.; Petegnief, Y.; Talbot, J. N.
2007-02-01
Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic™ animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic™ system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed
Eun, Changsun; Das, Jhuma; Berkowitz, Max L
2013-12-12
A lattice model is proposed to explain the restructuring of an ionic surfactant adsorbed on a charged surface. When immersed in water, an ionic mica plate initially covered by a monolayer of surfactants rearranges to a surface inhomogeneously covered by patches of surfactant bilayer and bare mica. The model considers four species that can cover lattice sites of a surface. These species include (i) a surfactant molecule with its headgroup down, (ii) a surfactant molecule with its headgroup up, (iii) a surfactant dimer arranged in a tail-to-tail configuration, which is a part of a bilayer, and (iv) a mica lattice site covered by water. We consider that only nearest neighbors on the lattice interact and describe the interactions by an interaction matrix. Using this model, we perform Monte Carlo simulations and study how the structure of the inhomogeneous surface depends on the interaction between a water-covered lattice site and its neighboring surfactant-covered sites. We observe that when this interaction is absent, the system undergoes phase separation into a bilayer phase and a mica surface covered with water. When this interaction is taken into account, patches of surfactant bilayer and water are present in our system. The interaction between mica surfaces covered by patches of ionic surfactants is studied in experiments to understand the nature of long-ranged "hydrophobic" forces. PMID:23962357
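A four-species lattice model with a nearest-neighbor interaction matrix can be sampled with standard Metropolis moves. The sketch below uses a placeholder symmetric interaction matrix and a simplified move set (re-drawing a site's species rather than conserving surfactant amounts), so it illustrates the machinery only, not the paper's fitted model:

```python
import math
import random

# Species labels following the model: 0 = surfactant head-down,
# 1 = surfactant head-up, 2 = tail-to-tail dimer (bilayer piece),
# 3 = water-covered mica site. EPS values are illustrative placeholders.
EPS = [[-1.0, -0.5, -0.5,  0.0],
       [-0.5, -1.0, -0.5,  0.0],
       [-0.5, -0.5, -1.5,  0.0],
       [ 0.0,  0.0,  0.0, -0.2]]

def site_energy(lattice, i, j):
    """Nearest-neighbor interaction energy of site (i, j), periodic lattice."""
    L = len(lattice)
    s = lattice[i][j]
    return sum(EPS[s][lattice[(i + di) % L][(j + dj) % L]]
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def metropolis_species_flip(lattice, beta, rng):
    """Propose changing one random site to a random species, then
    accept or reject with the Metropolis criterion."""
    L = len(lattice)
    i, j = rng.randrange(L), rng.randrange(L)
    old = lattice[i][j]
    e_old = site_energy(lattice, i, j)
    lattice[i][j] = rng.randrange(4)
    dE = site_energy(lattice, i, j) - e_old
    if dE > 0 and rng.random() >= math.exp(-beta * dE):
        lattice[i][j] = old           # reject: restore the old species

rng = random.Random(5)
lat = [[rng.randrange(4) for _ in range(10)] for _ in range(10)]
for _ in range(5000):
    metropolis_species_flip(lat, beta=1.0, rng=rng)
```

Whether the equilibrated surface phase-separates or forms interleaved patches is then controlled by the row and column of EPS coupling the water species to the surfactant species, mirroring the dependence studied in the paper.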
Multiscale Modelling of UniMolecular FRET Probes Using Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Sanyal, Shourjya; MacKernan, Donal; Coker, David F.
2015-09-01
Förster Resonance Energy Transfer (FRET) based biosensors are widely used in experimental biology to understand the spatiotemporal dynamics of protein pairs both in vivo and in vitro. In our study we have developed an idealised model consisting of two macroparticles, representing proteins attached to fluorescent chromophores, linked by an idealised flexible linker consisting of N point-like beads joined by harmonic springs. Monte Carlo simulations with these models are used to investigate the influence of flexible linkers on unimolecular FRET-based biosensors. The results provide qualitative insight into the efficacy of designing flexible peptide linkers.
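The link between linker statistics and the FRET signal comes from averaging the transfer efficiency E = 1 / (1 + (r/R0)^6) over the end-to-end distances r that the linker explores. A minimal sketch using a freely jointed chain as a stand-in for the bead-spring linker (bead number, bond length and Förster radius R0 are illustrative):

```python
import math
import random

def mean_fret_efficiency(n_beads=10, bond_len=1.0, r0=3.0,
                         n_samples=5000, seed=2):
    """Average FRET efficiency E = 1 / (1 + (r/R0)^6) over end-to-end
    distances r of a freely jointed (non-interacting) linker, sampled
    as a 3D random walk of fixed-length bonds."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = y = z = 0.0
        for _ in range(n_beads):
            # Uniform random bond orientation, fixed bond length.
            while True:
                u = [rng.gauss(0.0, 1.0) for _ in range(3)]
                norm = math.sqrt(sum(c * c for c in u))
                if norm > 1e-12:
                    break
            x += bond_len * u[0] / norm
            y += bond_len * u[1] / norm
            z += bond_len * u[2] / norm
        r = math.sqrt(x * x + y * y + z * z)
        total += 1.0 / (1.0 + (r / r0) ** 6)
    return total / n_samples

E = mean_fret_efficiency()
```

Longer or stiffer linkers shift the r distribution outward and lower the mean efficiency, which is the qualitative design trade-off the simulations above probe.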
Ultrafast vectorized multispin coding algorithm for the Monte Carlo simulation of the 3D Ising model
NASA Astrophysics Data System (ADS)
Wansleben, Stephan
1987-02-01
A new Monte Carlo algorithm for the 3D Ising model and its implementation on a CDC CYBER 205 is presented. This approach is applicable to lattices with sizes between 3·3·3 and 192·192·192 with periodic boundary conditions, and is adjustable to various kinetic models. It simulates a canonical ensemble at a given temperature, generating a new random number for each spin flip. For the Metropolis transition probability the speed is 27 ns per update on a two-pipe CDC Cyber 205 with 2 million words of physical memory, i.e. 1.35 times the cycle time per update, or 38 million updates per second.
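The multispin-coding idea behind such speeds is to pack one Ising spin per bit of a machine word so that bitwise operations process many spins (or many replicas) at once. A toy illustration of the core trick — XOR marks antiparallel neighbor pairs, and a population count turns that into an energy — written in Python rather than the vectorized CYBER 205 assembly:

```python
def multispin_neighbor_energy(word_a, word_b, n_bits=64):
    """Energy contribution (in units of J, for 64 independent spin pairs)
    between two words packing one spin per bit. XOR sets a bit exactly
    where the pair is antiparallel; popcount then counts those pairs,
    so E = (#antiparallel) - (#parallel)."""
    anti = (word_a ^ word_b) & ((1 << n_bits) - 1)
    n_anti = bin(anti).count("1")
    return n_anti - (n_bits - n_anti)

# All spins parallel: every pair contributes -J.
e_aligned = multispin_neighbor_energy(0, 0)
```

A full multispin update also evaluates the Metropolis acceptance bitwise, comparing packed local fields against thresholds derived from a shared random number, which is what makes the method both vectorizable and extremely memory-efficient.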
Arterberry, Martha E.; Bornstein, Marc H.; Haynes, O. Maurice
2012-01-01
Two analytical procedures for identifying young children as categorizers, the Monte Carlo Simulation and the Probability Estimate Model, were compared. Using a sequential touching method, children aged 12, 18, 24, and 30 months were given seven object sets representing different levels of categorical classification. From their touching performance, the probability that children were categorizing was then determined independently using the Monte Carlo Simulation and the Probability Estimate Model. The two analytical procedures resulted in different percentages of children being classified as categorizers. Results using the Monte Carlo Simulation were more consistent with group-level analyses than results using the Probability Estimate Model. These findings recommend using the Monte Carlo Simulation for determining individual categorizer classification. PMID:21402410
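The abstract does not spell out the Monte Carlo procedure, but a common approach for sequential-touching data is a permutation test: compare the observed run structure of category touches against shuffled sequences. The sketch below is a plausible, hypothetical version of such a test, not the authors' exact method.

```python
import random

random.seed(0)

def mean_run_length(seq):
    """Average length of runs of consecutive same-category touches."""
    runs, cur = [], 1
    for a, b in zip(seq, seq[1:]):
        if a == b:
            cur += 1
        else:
            runs.append(cur)
            cur = 1
    runs.append(cur)
    return sum(runs) / len(runs)

def monte_carlo_p(touch_sequence, n_trials=5000):
    """P(random touching produces mean runs at least as long as observed)."""
    observed = mean_run_length(touch_sequence)
    pool = list(touch_sequence)
    hits = 0
    for _ in range(n_trials):
        random.shuffle(pool)
        if mean_run_length(pool) >= observed:
            hits += 1
    return hits / n_trials

# a child touching category A four times, then category B four times
p = monte_carlo_p(list("AAAABBBB"))
```

A small p indicates that the observed clustering of touches within a category is unlikely under random touching, so the child is classified as a categorizer at the chosen significance level.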
AO modelling for wide-field E-ELT instrumentation using Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Basden, Alastair; Morris, Simon; Morris, Tim; Myers, Richard
2014-08-01
Extensive simulations of AO performance for several E-ELT instruments (including EAGLE, MOSAIC, HIRES and MAORY) have been ongoing using the Monte-Carlo Durham AO Simulation Package. We present the latest simulation results, including studies into DM requirements, dependencies of performance on asterism, detailed point spread function generation, accurate telescope modelling, and studies of laser guide star effects. Details of the simulations will be given, including the use of optical models of the E-ELT to generate wavefront sensor pupil illumination functions, laser guide star modelling, and investigations of different many-layer atmospheric profiles. We discuss issues related to ELT-scale simulation, how we have overcome these, and how we will be approaching forthcoming issues such as modelling of advanced wavefront control, multi-rate wavefront sensing, and advanced treatment of extended laser guide star spots. We also present progress made on integrating simulation with AO real-time control systems. The impact of simulation outcomes on instrument design studies will be discussed, and the ongoing work plan presented.
Zhao, L.; Cluggish, B.; Kim, J. S.; Pardo, R.; Vondrasek, R.
2010-02-15
A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which has two choices of boundary settings: a free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady-state ion continuity equations, where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). MCBC correctly predicted the peak of the highly charged ion output under the free boundary condition, and a similar charge-state distribution width but a lower peak charge state under the Bohm condition. The comparisons between the simulation results and the ANL experimental measurements are presented and discussed.
Experiments with encapsulation of Monte Carlo simulation results in machine learning models
NASA Astrophysics Data System (ADS)
Lal Shrestha, Durga; Kayastha, Nagendra; Solomatine, Dimitri
2010-05-01
Uncertainty analysis techniques based on Monte Carlo (MC) simulation have been applied successfully in the hydrological sciences over the last decades. They allow quantification of the model output uncertainty resulting from uncertain model parameters, input data or model structure. They are very flexible, conceptually simple and straightforward, but become impractical in real-time applications for complex models when there is little time to perform the uncertainty analysis because of the large number of model runs required. A number of new methods have been developed to improve the efficiency of Monte Carlo methods, yet these still require a considerable number of model runs in both offline and operational mode to produce reliable and meaningful uncertainty estimates. This paper presents experiments with machine learning techniques used to encapsulate the results of MC runs. A version of the MC simulation method, the generalised likelihood uncertainty estimation (GLUE) method, is first used to assess the parameter uncertainty of the conceptual rainfall-runoff model HBV. Three machine learning methods, namely artificial neural networks, M5 model trees and locally weighted regression, are then trained to encapsulate the uncertainty estimated by the GLUE method using the historical input data. The trained machine learning models are then employed to predict the uncertainty of the model output for new input data. This method has been applied to two contrasting catchments: the Brue catchment (United Kingdom) and the Bagmati catchment (Nepal). The experimental results demonstrate that the machine learning methods are reasonably accurate in approximating the uncertainty estimated by GLUE. The great advantage of the proposed method is its efficiency in reproducing the MC-based simulation results; it can thus be an effective tool for assessing the uncertainty of flood forecasts in real time.
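The GLUE step described above reduces to: sample parameter sets, score each with an informal likelihood, keep the "behavioural" sets, and form likelihood-weighted prediction bounds. The sketch below shows those mechanics with a linear toy model standing in for HBV; the likelihood measure, cutoff, and parameter ranges are all assumed for illustration.

```python
import random

random.seed(7)

def model(a, b, x):
    # stand-in for the HBV rainfall-runoff model (hypothetical linear toy)
    return a * x + b

x_obs = [0.0, 1.0, 2.0, 3.0]
y_obs = [0.1, 1.1, 1.9, 3.2]              # synthetic "observations"

def glue_interval(x_new, n_samples=5000, cutoff=0.5):
    """GLUE sketch: sample parameter sets, keep the behavioural ones (informal
    likelihood above a cutoff), return likelihood-weighted 5-95% bounds."""
    kept = []
    for _ in range(n_samples):
        a, b = random.uniform(-2.0, 4.0), random.uniform(-3.0, 3.0)
        sse = sum((model(a, b, x) - y) ** 2 for x, y in zip(x_obs, y_obs))
        like = 1.0 / (1.0 + sse)          # one common informal likelihood choice
        if like > cutoff:
            kept.append((like, model(a, b, x_new)))
    kept.sort(key=lambda t: t[1])
    total = sum(w for w, _ in kept)
    lo = hi = kept[0][1]
    cum = 0.0
    for w, pred in kept:
        cum += w / total
        if cum < 0.05:
            lo = pred
        if cum < 0.95:
            hi = pred
    return lo, hi

lo, hi = glue_interval(2.5)
```

The paper's contribution is then to train a machine learning model on many such (input, bound) pairs so the expensive MC loop is not needed at forecast time.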
Erdem, Riza; Aydiner, Ekrem
2009-03-01
Voltage-gated ion channels are key molecules for the generation and propagation of electrical signals in excitable cell membranes. The voltage-dependent switching of these channels between conducting and nonconducting states is a major factor in controlling the transmembrane voltage. In this study, a statistical mechanics model of these molecules has been discussed on the basis of a two-dimensional spin model. A new Hamiltonian and a new Monte Carlo simulation algorithm are introduced to simulate such a model. It was shown that the results well match the experimental data obtained from batrachotoxin-modified sodium channels in the squid giant axon using the cut-open axon technique. PMID:19391983
NASA Astrophysics Data System (ADS)
Anagnostopoulos, K.; Azuma, T.; Nishimura, J.
The IKKT or IIB matrix model has been postulated to be a non-perturbative definition of superstring theory. It has the attractive feature that spacetime is dynamically generated, which makes possible the scenario of dynamical compactification of extra dimensions; in the Euclidean model, this manifests itself as the spontaneous symmetry breaking (SSB) of the SO(10) rotational invariance. In this work we use Monte Carlo simulations to study the six-dimensional version of the Euclidean IIB matrix model. The simulations are found to be plagued by a strong complex-action problem, and the factorization method is used for effective sampling and for computing expectation values of the extent of spacetime in various dimensions. Our results are consistent with calculations using the Gaussian expansion method, which predict SSB to SO(3) symmetric vacua, a finite universal extent of the compactified dimensions and a finite spacetime volume.
Monte Carlo simulation of flux lattice melting in a model high-Tc superconductor
Ryu, S.; Doniach, S.; Deutscher, G.; Kapitulnik, A. (School of Physics and Astronomy, Tel Aviv University, Ramat-Aviv 69978)
1992-02-03
We studied flux lattice melting in a model high-Tc superconductor by Monte Carlo simulation in terms of vortex variables. We identify two melting curves in the B-T phase diagram and evaluate a density-dependent Lindemann criterion number for melting. We also observe that the transition temperature shifts downward toward the two-dimensional melting limit as the density of flux lines increases. Although the transition temperature does not change, a significant difference in shear modulus is observed when flux cutting or reconnection is allowed.
Fission yield calculation using toy model based on Monte Carlo simulation
Jubaidah; Kurniadi, Rizal
2015-09-30
The toy model is a new approximation for predicting fission yield distributions. It treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the properties of a real nucleus. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments constitute the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other, described by five parameters: the scission point of the two curves (R_c), the means of the left and right curves (μ_L, μ_R), and their deviations (σ_L, σ_R). The fission yield distribution is analysed using Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields, and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation of fission yields using the toy model successfully reproduces the same tendency as experimental results, where the average of the light fission yield is in the range of 90
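The Monte Carlo part of such a two-Gaussian picture can be sketched directly: draw one fragment mass from the Gaussian mixture and assign the remaining nucleons to the partner fragment. The means, widths, and mass number below are illustrative values, not the paper's fitted parameters.

```python
import random

random.seed(0)

# Illustrative parameters (not fitted): light/heavy Gaussian means and widths
# for a toy fissioning nucleus of A = 236 nucleons.
A = 236
MU_L, MU_H, SIGMA_L, SIGMA_H = 96.0, 140.0, 5.5, 5.5

def sample_event():
    """One toy fission event: draw a fragment mass from the two-Gaussian
    mixture; the partner fragment carries the remaining nucleons."""
    mu, sigma = random.choice(((MU_L, SIGMA_L), (MU_H, SIGMA_H)))
    a1 = round(random.gauss(mu, sigma))
    return min(a1, A - a1), max(a1, A - a1)

events = [sample_event() for _ in range(20000)]
mean_light = sum(light for light, _ in events) / len(events)
mean_heavy = sum(heavy for _, heavy in events) / len(events)
```

Widening σ or shifting μ in this sketch visibly broadens or displaces the two humps of the yield histogram, which is the dependence the abstract reports.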
Eged, Katalin; Kis, Zoltán; Voigt, Gabriele
2006-01-01
After an accidental release of radionuclides into the inhabited environment, external gamma irradiation from deposited radioactivity contributes significantly to the radiation exposure of the population for extended periods. Evaluating this exposure pathway imposes three main model requirements: (i) calculating the air kerma value per photon emitted per unit source area, based on Monte Carlo (MC) simulations; (ii) describing the distribution and dynamics of radionuclides on the diverse urban surfaces; and (iii) combining all these elements in a relevant urban model to calculate the resulting doses according to the actual scenario. This paper provides an overview of the different approaches to calculating photon transport in urban areas and of several published dose calculation codes. Two types of Monte Carlo simulations are presented, using the global and the local approaches to photon transport. Moreover, two different philosophies of dose calculation, the "location factor method" and a combination of the relative contamination of surfaces with air kerma values, are described. The main features of six codes (ECOSYS, EDEM2M, EXPURT, PARATI, TEMAS, URGENT) are highlighted, together with a short intercomparison of model features. PMID:16095771
A geometrical model for the Monte Carlo simulation of the TrueBeam linac.
Rodriguez, M; Sempau, J; Fogliata, A; Cozzi, L; Sauerwein, W; Brualla, L
2015-06-01
Monte Carlo simulation of linear accelerators (linacs) depends on the accurate geometrical description of the linac head. The geometry of the Varian TrueBeam linac is not available to researchers. Instead, the company distributes phase-space files of the flattening-filter-free (FFF) beams tallied at a plane located just upstream of the jaws. Yet, Monte Carlo simulations based on third-party tallied phase spaces are subject to limitations. In this work, an experimentally based geometry developed for the simulation of the FFF beams of the Varian TrueBeam linac is presented. The Monte Carlo geometrical model of the TrueBeam linac uses information provided by Varian that reveals large similarities between the TrueBeam machine and the Clinac 2100 downstream of the jaws. Thus, the upper part of the TrueBeam linac was modeled by introducing modifications to the Varian Clinac 2100 linac geometry. The most important of these modifications is the replacement of the standard flattening filters by ad hoc thin filters. These filters were modeled by comparing dose measurements and simulations. The experimental dose profiles for the 6 MV and 10 MV FFF beams were obtained from the Varian Golden Data Set and from in-house measurements performed with a diode detector for radiation fields ranging from 3 × 3 to 40 × 40 cm2 at depths of maximum dose of 5 and 10 cm. Indicators of agreement between the experimental data and the simulation results obtained with the proposed geometrical model were the dose differences, the root-mean-square error and the gamma index. The same comparisons were performed for dose profiles obtained from Monte Carlo simulations using the phase-space files distributed by Varian for the TrueBeam linac as the sources of particles. Results of comparisons show a good agreement of the dose for the ansatz geometry similar to that obtained for the simulations with the TrueBeam phase-space files for all fields and depths considered, except for
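Of the three agreement indicators named above, the gamma index is the least self-explanatory: for each reference point it minimises a combined dose-difference / distance-to-agreement metric over the evaluated profile. The 1D sketch below uses hypothetical Gaussian dose profiles and the common 3 mm / 3% criteria; it is a simplified illustration, not the paper's analysis.

```python
import math

def gamma_index(positions, dose_ref, dose_eval, dta=3.0, dd=0.03):
    """1D global gamma analysis: for each reference point, minimise the combined
    dose-difference / distance-to-agreement metric over all evaluated points
    (dta in mm, dd as a fraction of the reference maximum dose)."""
    d_max = max(dose_ref)
    gammas = []
    for xr, dr in zip(positions, dose_ref):
        g2 = min(((xe - xr) / dta) ** 2 + ((de - dr) / (dd * d_max)) ** 2
                 for xe, de in zip(positions, dose_eval))
        gammas.append(math.sqrt(g2))
    return gammas

# hypothetical Gaussian test profile and a copy shifted by 0.3 mm
# (well inside the 3 mm / 3% tolerance)
xs = [0.1 * i for i in range(101)]                     # 0-10 mm grid
ref = [100.0 * math.exp(-((x - 5.0) / 3.0) ** 2) for x in xs]
shifted = [100.0 * math.exp(-((x - 5.3) / 3.0) ** 2) for x in xs]
pass_rate = sum(g <= 1.0 for g in gamma_index(xs, ref, shifted)) / len(xs)
```

A point passes when gamma ≤ 1; reporting the pass rate over all points is the usual summary of agreement between measured and simulated profiles.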
On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher-order correlations in the conventional moment-closure models of the chemical source term cannot be neglected, making the application of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid-dependent Monte Carlo scheme was studied, since it is a logical alternative: the number of computer operations increases only linearly with the number of independent variables, compared to the exponential increase in a conventional finite-difference scheme. A new algorithm was devised that satisfies a conservation restriction in the case of pure diffusion or uniform flow problems. Although for nonuniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected by F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational effort and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance-matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization, thus achieving better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
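The core efficiency trick above, fitting a polynomial response surface to a few expensive model runs and then Monte Carlo sampling the cheap surrogate, can be sketched in one dimension. The "expensive model", sample counts, and input distribution below are hypothetical stand-ins for the paper's structural models.

```python
import math
import random

random.seed(3)

def expensive_model(x):
    return x * math.sin(x) + 0.5 * x     # stand-in for a costly FE simulation

# --- build a fourth-order polynomial response surface from a few runs ---
xs = [0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0]
ys = [expensive_model(x) for x in xs]

def polyfit4(px, py):
    """Least-squares quartic fit via normal equations + Gaussian elimination."""
    n = 5
    A = [[sum(x ** (i + j) for x in px) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(px, py)) for i in range(n)]
    for col in range(n):                       # forward elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0] * n                           # back substitution
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

coef = polyfit4(xs, ys)
surrogate = lambda x: sum(c * x ** i for i, c in enumerate(coef))

# --- cheap Monte Carlo through the surrogate (uncertain input ~ N(1, 0.2)) ---
samples = [surrogate(random.gauss(1.0, 0.2)) for _ in range(20000)]
mc_mean = sum(samples) / len(samples)
```

Twenty thousand surrogate evaluations cost roughly as much as a handful of true model runs, which is what makes the iterative mean/covariance updating loop affordable.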
NASA Astrophysics Data System (ADS)
Moulin, F.; Picaud, S.; Hoang, P. N. M.; Jedlovszky, P.
2007-10-01
The grand canonical Monte Carlo method is used to simulate the adsorption isotherms of water molecules on different types of model soot particles. The soot particles are modeled by graphite-type layers arranged in an onion-like structure that contains randomly distributed hydrophilic sites, such as OH and COOH groups. The calculated water adsorption isotherm at 298 K exhibits different characteristic shapes depending both on the type and location of the hydrophilic sites and on the size of the pores inside the soot particle. The different shapes of the adsorption isotherms result from different ways of water aggregation in and/or around the soot particle. The present results show the very weak influence of the OH sites on the water adsorption process when compared to the COOH sites. The results of these simulations can help in interpreting the experimental isotherms of water adsorbed on aircraft soot.
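Grand canonical Monte Carlo fixes the chemical potential rather than the particle number, so adsorption isotherms emerge from insertion/deletion moves. The minimal sketch below applies the standard GCMC acceptance rule to a non-interacting lattice gas, which is far simpler than the atomistic soot model above but can be checked against the analytic Langmuir isotherm; all parameter values are illustrative.

```python
import math
import random

random.seed(5)

def gcmc_coverage(mu, eps=-1.0, beta=2.0, n_sites=500, n_steps=200000):
    """GCMC for a non-interacting lattice gas: single-site insertion/deletion
    moves with Metropolis acceptance; coverage averaged over the second half."""
    occ = [0] * n_sites
    n_occ = 0
    acc_in = math.exp(beta * (mu - eps))     # insertion acceptance factor
    acc_total = 0.0
    for step in range(n_steps):
        i = random.randrange(n_sites)
        if occ[i] == 0:                      # try to insert a particle
            if random.random() < min(1.0, acc_in):
                occ[i] = 1
                n_occ += 1
        elif random.random() < min(1.0, 1.0 / acc_in):   # try to delete
            occ[i] = 0
            n_occ -= 1
        if step >= n_steps // 2:
            acc_total += n_occ
    return acc_total / (n_steps - n_steps // 2) / n_sites

def langmuir(mu, eps=-1.0, beta=2.0):
    """Analytic isotherm for independent sites, for comparison."""
    x = math.exp(beta * (mu - eps))
    return x / (1.0 + x)

coverages = {mu: gcmc_coverage(mu) for mu in (-2.0, -1.0, 0.0)}
```

Sweeping the chemical potential (which maps to vapour pressure) traces out the isotherm; in the full soot model, water-water and water-site interactions enter the same acceptance rule through the energy change of each move.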
Monte Carlo Simulations of inter- and intra-grain spin structure of Ising and Heisenberg models
NASA Astrophysics Data System (ADS)
Leblanc, Martin
In order to keep supplying computer hard disk drives with more and more storage space, it is essential to have smaller bits. With smaller bits, superparamagnetism, the spontaneous flipping of the magnetic moments in a bit caused by thermal fluctuations, becomes increasingly important and impacts the stability of stored data. Recording media are composed of magnetic grains (usually made of CoCrPt alloys) roughly 10 nm in size, from which bits are formed. Most modeling efforts that study magnetic recording media treat the grains as weakly interacting, uniformly magnetized objects. In this work, the spin structure internal to a grain is examined, along with the impact of varying the relative strengths of intra-grain and inter-grain exchange interactions. The interplay between these two effects needs to be examined for a greater understanding of superparamagnetism, as well as for applications of the proposed Heat Assisted Magnetic Recording (HAMR) technology, where thermal fluctuations facilitate head-field-induced bit reversal in high-anisotropy media. Simulations using the Monte Carlo method (with cluster-flipping algorithms) are performed on 2D single-layer and multilayer Ising models with a strong intra-grain exchange interaction J as well as a weak inter-grain exchange J'. A strong deviation from traditional behavior is found when J'/J is significant. M-H hysteresis loops are also calculated and the coercivity Hc is estimated; a large value represents strong resilience to the superparamagnetic effect. It is found that taking into account the internal degrees of freedom has a significant effect on Hc. As the Ising model serves only as an approximation, preliminary simulations are also reported on a more realistic Heisenberg model with uniaxial anisotropy. Key Words: Ising model, Heisenberg model, Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Hobler, Gerhard; Bradley, R. Mark; Urbassek, Herbert M.
2016-05-01
Sigmund's model of spatially resolved sputtering is the underpinning of many models of nanoscale pattern formation induced by ion bombardment. It is based on three assumptions: (i) the number of sputtered atoms is proportional to the nuclear energy deposition (NED) near the surface, (ii) the NED distribution is independent of the orientation and shape of the solid surface and is identical to the one in an infinite medium, and (iii) the NED distribution in an infinite medium can be approximated by a Gaussian. We test the validity of these assumptions using Monte Carlo simulations of He, Ar, and Xe impacts on Si at energies of 2, 20, and 200 keV with incidence angles from perpendicular to grazing. We find that for the more commonly-employed beam parameters (Ar and Xe ions at 2 and 20 keV and nongrazing incidence), the Sigmund model's predictions are within a factor of 2 of the Monte Carlo results for the total sputter yield and the first two moments of the spatially resolved sputter yield. This is partly due to a compensation of errors introduced by assumptions (i) and (ii). The Sigmund model, however, does not describe the skewness of the spatially resolved sputter yield, which is almost always significant. The approximation is much poorer for He ions and/or high energies (200 keV). All three of Sigmund's assumptions break down at grazing incidence angles. In all cases, we discuss the origin of the deviations from Sigmund's model.
Iterative optimisation of Monte Carlo detector models using measurements and simulations
NASA Astrophysics Data System (ADS)
Marzocchi, O.; Leone, D.
2015-04-01
This work proposes a new technique to optimise the Monte Carlo models of radiation detectors, offering the advantage of significantly lower user effort and therefore improved work efficiency compared to prior techniques. The method consists of four steps, two of which are iterative and suitable for automation using scripting languages: the acquisition in the laboratory of measurement data to be used as a reference; the modification of a previously available detector model; the simulation of a tentative model of the detector to obtain the coefficients of a set of linear equations; and the solution of the system of equations and the update of the detector model. Steps three and four can be repeated for more accurate results. This method avoids the "try and fail" approach typical of prior techniques.
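The iterative steps three and four amount to a Newton-type loop: simulate the tentative model to obtain the coefficients of a linear system (here finite-difference sensitivities), solve it, and update the model parameters. The sketch below uses a hypothetical two-parameter detector response and synthetic "measured" data; it illustrates the loop structure, not the paper's detector or code.

```python
import math

def mc_detector_sim(params, energy):
    """Stand-in for a full Monte Carlo detector run; hypothetical response:
    an intrinsic efficiency attenuated by a dead layer (both unknown)."""
    dead_layer, intrinsic = params
    return intrinsic * math.exp(-dead_layer * 50.0 / energy)

energies = (100.0, 600.0)
true_params = (0.8, 0.95)                  # pretend these produced the lab data
measured = [mc_detector_sim(true_params, e) for e in energies]

params = [0.3, 0.7]                        # initial tentative model
for _ in range(10):                        # iterate steps three and four
    # step 3: simulate to get the coefficients of a linear system
    h = 1e-6
    J, r = [], []
    for e, m in zip(energies, measured):
        base = mc_detector_sim(params, e)
        r.append(m - base)
        J.append([(mc_detector_sim([params[k] + (h if k == i else 0.0)
                                    for k in range(2)], e) - base) / h
                  for i in range(2)])
    # step 4: solve the 2x2 system (Cramer's rule) and update the model
    (a, b), (c, d) = J
    det = a * d - b * c
    params[0] += (d * r[0] - b * r[1]) / det
    params[1] += (a * r[1] - c * r[0]) / det
```

Because each "simulation" here is deterministic, the loop converges to the reference parameters; with real MC runs, statistical noise in the coefficients dictates how many repetitions of steps three and four are worthwhile.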
Monte-Carlo simulations of a coarse-grained model for α-oligothiophenes
NASA Astrophysics Data System (ADS)
Almutairi, Amani; Luettmer-Strathmann, Jutta
The interfacial layer of an organic semiconductor in contact with a metal electrode has important effects on the performance of thin-film devices. However, the structure of this layer is not easy to model. Oligothiophenes are small, π-conjugated molecules with applications in organic electronics that also serve as small-molecule models for polythiophenes. α-sexithiophene (6T) is a six-ring molecule whose adsorption on noble metal surfaces has been studied extensively (see, e.g., Ref.). In this work, we develop a coarse-grained model for α-oligothiophenes. We describe the molecules as linear chains of bonded, discotic particles with Gay-Berne potential interactions between non-bonded ellipsoids. We perform Monte Carlo simulations to study the structure of isolated and adsorbed molecules.
NASA Astrophysics Data System (ADS)
Regan, Caitlin; Hayakawa, Carole K.; Choi, Bernard
2016-03-01
Laser speckle imaging (LSI) enables measurement of relative blood flow in microvasculature and perfusion in tissues. To determine the impact of tissue optical properties and perfusion dynamics on speckle contrast, we developed a computational simulation of laser speckle contrast imaging. We used a discrete absorption-weighted Monte Carlo simulation to model the transport of light in tissue. We simulated optical excitation of a uniform flat light source and tracked the momentum transfer of photons as they propagated through a simulated tissue geometry. With knowledge of the probability distribution of momentum transfer occurring in various layers of the tissue, we calculated the expected laser speckle contrast arising with coherent excitation using both reflectance and transmission geometries. We simulated light transport in a single homogeneous tissue while independently varying either absorption (0.001-100 mm^-1), reduced scattering (0.1-10 mm^-1), or anisotropy (0.05-0.99) over a range of values relevant to blood and commonly imaged tissues. We observed that contrast decreased by 49% with an increase in optical scattering, and increased by 130% with absorption (exposure time = 1 ms). We also explored how speckle contrast was affected by the depth (0-1 mm) and flow speed (0-10 mm/s) of a dynamic vascular inclusion. This model of speckle contrast is important for increasing our understanding of how parameters such as perfusion dynamics, vessel depth, and tissue optical properties affect laser speckle imaging.
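The paper derives contrast from Monte Carlo momentum-transfer distributions; as a simpler illustration of the quantity being computed, the sketch below evaluates a standard analytic expression relating speckle contrast K to the exposure time and the field decorrelation time, and shows contrast falling as flow speeds up. The inverse scaling of decorrelation time with speed is an assumed illustration.

```python
import math

def speckle_contrast(exposure_t, tau_c):
    """Speckle contrast K for an exponentially decaying field correlation,
    K^2 = (tau/2T) * [2 - (tau/T) * (1 - exp(-2T/tau))], a standard LSI
    expression; K -> 1 for T << tau and K -> 0 for T >> tau."""
    x = tau_c / exposure_t
    k2 = 0.5 * x * (2.0 - x * (1.0 - math.exp(-2.0 / x)))
    return math.sqrt(k2)

# faster flow -> shorter decorrelation time -> lower contrast
tau_at_unit_speed = 1e-3                   # s at 1 mm/s (illustrative scaling)
speeds = [0.5, 1.0, 2.0, 5.0, 10.0]        # mm/s
contrasts = [speckle_contrast(1e-3, tau_at_unit_speed / v) for v in speeds]
```

In the full simulation, the momentum-transfer distribution in each tissue layer determines an effective decorrelation time, which is how vessel depth and optical properties enter the measured contrast.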
NASA Astrophysics Data System (ADS)
Bubnis, Gregory J.
Since their discovery 25 years ago, carbon fullerenes have been widely studied for their unique physicochemical properties and for applications including organic electronics and photovoltaics. For these applications it is highly desirable for crystalline fullerene thin films to spontaneously self-assemble on surfaces. Accordingly, many studies have functionalized fullerenes with the aim of tailoring their intermolecular interactions and controlling interactions with the solid substrate. The success of these rational design approaches hinges on the subtle interplay of intermolecular forces and molecule-substrate interactions. Molecular modeling is well-suited to studying these interactions by directly simulating self-assembly. In this work, we consider three different fullerene functionalization approaches and for each approach we carry out Monte Carlo simulations of the self-assembly process. In all cases, we use a "coarse-grained" molecular representation that preserves the dominant physical interactions between molecules and maximizes computational efficiency. The first approach we consider is the traditional gold-thiolate SAM (self-assembled monolayer) strategy which tethers molecules to a gold substrate via covalent sulfur-gold bonds. For this we study an asymmetric fullerene thiolate bridged by a phenyl group. Clusters of 40 molecules are simulated on the Au(111) substrate at different temperatures and surface coverage densities. Fullerenes and S atoms are found to compete for Au(111) surface sites, and this competition prevents self-assembly of highly ordered monolayers. Next, we investigate self-assembled monolayers formed by fullerenes with hydrogen-bonding carboxylic acid substituents. We consider five molecules with different dimensions and symmetries. Monte Carlo cooling simulations are used to find the most stable solid structures of clusters adsorbed to Au(111). The results show cases where fullerene-Au(111) attraction, fullerene close-packing, and
On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models
NASA Astrophysics Data System (ADS)
Hsu, Andrew T.
1992-02-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite-differencing schemes. A grid-dependent Monte Carlo scheme following J. Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely α_j + γ_j = α_(j-1) + γ_(j+1). A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
NASA Astrophysics Data System (ADS)
Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa
We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on the propagation and combination of the effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and determine the effect on a destination node based on the divided ranges. In combining effects, we determine the effect of each arc using its contribution degree and sum all the effects. Through application to practical models, it is confirmed that there are no differences between results obtained by quantitative relations and results obtained by the proposed method at a risk rate of 5%.
NASA Astrophysics Data System (ADS)
Matsumoto, T.
2007-09-01
Monte Carlo simulations are performed to evaluate depth-dose distributions for possible treatment of cancers by boron neutron capture therapy (BNCT). The ICRU computational model of ADAM & EVA was used as a phantom to simulate tumors at a depth of 5 cm in central regions of the lungs, liver and pancreas. Tumors of the prostate and osteosarcoma were also centered at the depth of 4.5 and 2.5 cm in the phantom models. The epithermal neutron beam from a research reactor was the primary neutron source for the MCNP calculation of the depth-dose distributions in those cancer models. For brain tumor irradiations, the whole-body dose was also evaluated. The MCNP simulations suggested that a lethal dose of 50 Gy to the tumors can be achieved without reaching the tolerance dose of 25 Gy to normal tissue. The whole-body phantom calculations also showed that the BNCT could be applied for brain tumors without significant damage to whole-body organs.
3-D Direct Simulation Monte Carlo modeling of comet 67P/Churyumov-Gerasimenko
NASA Astrophysics Data System (ADS)
Liao, Y.; Su, C.; Finklenburg, S.; Rubin, M.; Ip, W.; Keller, H.; Knollenberg, J.; Kührt, E.; Lai, I.; Skorov, Y.; Thomas, N.; Wu, J.; Chen, Y.
2014-07-01
After deep-space hibernation, ESA's Rosetta spacecraft was successfully woken up and obtained the first images of comet 67P/Churyumov-Gerasimenko (C-G) in March 2014. It is expected that Rosetta will rendezvous with comet 67P and begin observing the nucleus and coma of the comet in the middle of 2014. As the comet approaches the Sun, a significant increase in activity is expected. Our aim is to understand the physical processes in the coma with the help of modeling, in order to interpret the resulting measurements and establish observational and data-analysis strategies. DSMC (Direct Simulation Monte Carlo) [1] is a very powerful numerical method for studying rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow [2,3]. Comparisons between DSMC and fluid techniques have also been performed to establish the limits of these techniques [2,4]. The drawback of 3D DSMC is that it is computationally highly intensive and thus time consuming; however, the performance can be dramatically increased with parallel computing on Graphics Processing Units (GPUs) [5]. We have already studied a case with comet 9P/Tempel 1, where the Deep Impact observations were used to define the shape of the nucleus and the outflow was simulated with the DSMC approach [6,7]. For comet 67P, we intend to determine the gas flow field in the innermost coma and the surface outgassing properties from analyses of the flow field, to investigate dust acceleration by gas drag, and to compare with observations (including time variability). The boundary conditions are implemented with a nucleus shape model [8] and thermal models based on the surface heat-balance equation. Several different parameter sets have been investigated. The calculations have been performed using the PDSC++ (Parallel Direct Simulation Monte Carlo) code [9] developed by Wu and his coworkers [10-12]. Simulation tasks can be accomplished within 24
Experimental validation of a direct simulation by Monte Carlo molecular gas flow model
Shufflebotham, P.K.; Bartel, T.J.; Berney, B.
1995-07-01
The Sandia direct simulation Monte Carlo (DSMC) molecular/transition gas flow simulation code has significant potential as a computer-aided design tool for the design of vacuum systems in low pressure plasma processing equipment. The purpose of this work was to verify the accuracy of this code through direct comparison to experiment. To test the DSMC model, a fully instrumented, axisymmetric vacuum test cell was constructed, and spatially resolved pressure measurements made in N2 at flows from 50 to 500 sccm. In a "blind" test, the DSMC code was used to model the experimental conditions directly, and the results compared to the measurements. It was found that the model predicted all the experimental findings to a high degree of accuracy. Only one modeling issue was uncovered. The axisymmetric model showed localized low pressure spots along the axis next to surfaces. Although this artifact did not significantly alter the accuracy of the results, it did add noise to the axial data. © 1995 American Vacuum Society
Bishop, Martin J.; Plank, Gernot
2014-01-01
Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as to affect the magnitude of the measured response of ventricular tissue to strong electric shocks. Modeling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon "packets" as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is more prolonged close to large subepicardial vessels than further away from them, at times having a distinct "humped" morphology. Furthermore, we uncover a novel mechanism by which photon scattering effects around vessel cavities interact with "virtual-electrode" regions of strong de-/hyper-polarized tissue surrounding cavities during shocks, significantly reducing the apparent optically-measured epicardial polarization. We therefore demonstrate the importance of this novel optical mapping simulation approach along with highly anatomically-detailed models to fully investigate electrophysiological phenomena driven by fine-scale structural heterogeneity. PMID:25309442
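A minimal sketch of the photon-packet idea described above, not the authors' code: step lengths are drawn from the Beer-Lambert (exponential) distribution and the packet weight is attenuated by the single-scattering albedo at each event. All parameter names and values are illustrative; the paper's method works on 3D tetrahedral meshes with anisotropic phase functions.

```python
import math
import random

def propagate_photon(mu_s=10.0, mu_a=0.1, max_steps=10_000, rng=random):
    """Trace one photon 'packet' through a homogeneous 2D medium.

    mu_s, mu_a: scattering/absorption coefficients (1/mm).
    Returns the final position and surviving packet weight.
    """
    x = y = 0.0
    theta = 0.0                     # initial direction along +x
    weight = 1.0
    mu_t = mu_s + mu_a              # total interaction coefficient
    for _ in range(max_steps):
        step = -math.log(rng.random()) / mu_t    # exponential free path
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        weight *= mu_s / mu_t       # absorption reduces the packet weight
        if weight < 1e-4:           # terminate negligible packets
            break
        theta = rng.uniform(-math.pi, math.pi)   # isotropic 2D scattering
    return x, y, weight
```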
Modeling the tight focusing of beams in absorbing media with Monte Carlo simulations.
Brandes, Arnd R; Elmaklizi, Ahmed; Akarçay, H Günhan; Kienle, Alwin
2014-01-01
A severe drawback to the scalar Monte Carlo (MC) method is the difficulty of introducing diffraction when simulating light propagation. This hinders, for instance, the accurate modeling of beams focused through microscope objectives, where the diffraction patterns in the focal plane are of great importance in various applications. Here, we propose to overcome this issue by means of a direct extinction method. In the MC simulations, the photon paths' initial positions are sampled from probability distributions which are calculated with a modified angular spectrum of the plane waves technique. We restricted our study to the two-dimensional case, and investigated the feasibility of our approach for absorbing yet nonscattering materials. We simulated the focusing of collimated beams with uniform profiles through microscope objectives. Our results were compared with those yielded by independent simulations using the finite-difference time-domain method. Very good agreement was achieved between the results of both methods, not only for the power distributions around the focal region including diffraction patterns, but also for the distribution of the energy flow (Poynting vector). PMID:25393966
NASA Astrophysics Data System (ADS)
Usui, Satoshi; Koibuchi, Hiroshi
2016-02-01
We study the first order phase transition of the fixed-connectivity triangulated surface model using the Parallel Tempering Monte Carlo (PTMC) technique on relatively large lattices. From the PTMC results, we find that the transition is considerably stronger than previously reported from the conventional Metropolis MC (MMC) technique and the flat histogram MC technique. We also confirm that the PTMC results on relatively smaller lattices are in good agreement with those known results. This implies that the PTMC can successfully be used to simulate first order phase transitions. The parallel computation in the PTMC is implemented with OpenMP, and the PTMC runs considerably faster on multi-core CPUs than on a single-core CPU.
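A toy sketch of the parallel tempering scheme (assumed standard replica exchange, not the authors' OpenMP implementation): each temperature runs its own Metropolis chain, and neighbouring replicas periodically attempt a swap accepted with probability min(1, exp[(β_i − β_j)(E_i − E_j)]).

```python
import math
import random

def parallel_tempering(energy, init, temps, sweeps=1000, step=0.5, rng=random):
    """One Metropolis chain per temperature, with replica-exchange
    swap attempts between neighbouring temperatures after every sweep."""
    states = [float(init)] * len(temps)
    for _ in range(sweeps):
        for i, T in enumerate(temps):          # within-replica Metropolis update
            trial = states[i] + rng.uniform(-step, step)
            dE = energy(trial) - energy(states[i])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                states[i] = trial
        for i in range(len(temps) - 1):        # neighbour swap attempts
            d = (1.0 / temps[i] - 1.0 / temps[i + 1]) * \
                (energy(states[i + 1]) - energy(states[i]))
            if d >= 0 or rng.random() < math.exp(d):
                states[i], states[i + 1] = states[i + 1], states[i]
    return states
```

The hot replicas cross energy barriers easily; swaps let those configurations percolate down to the cold replicas, which is what makes the method effective at strong first order transitions.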
A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling
NASA Astrophysics Data System (ADS)
Aslam, Kamran
This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included, along with a discussion of ranking methods currently used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the concept of the varying importance of points in a match, and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability along with a realistic, fair and mathematically sound platform for ranking them.
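The Newton-Keller style building block, the probability of winning a game given an iid point-win probability p, can be sketched by recursing over the score and treating deuce analytically (P(deuce win) = p²/(1 − 2pq)). This is a hedged illustration of the kind of calculation the theory provides, not code from the dissertation.

```python
from functools import lru_cache

def game_win_probability(p):
    """Probability the server wins a game, given probability p of
    winning each point (iid points)."""
    q = 1.0 - p
    deuce = p * p / (1.0 - 2.0 * p * q)   # analytic deuce-win probability

    @lru_cache(maxsize=None)
    def win(a, b):                        # a, b = points won so far
        if a == 3 and b == 3:
            return deuce                  # score has reached deuce
        if a == 4:
            return 1.0
        if b == 4:
            return 0.0
        return p * win(a + 1, b) + q * win(a, b + 1)

    return win(0, 0)
```

Note the amplification effect: a modest point-level edge (p = 0.6) translates into a much larger game-level edge, which is why small pdf differences matter in match simulation.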
Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases
NASA Astrophysics Data System (ADS)
Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.
2016-02-01
Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting-double-relaxation method and the newly proposed multi-mode relaxation method. Differences and application areas of these two methods are discussed. Subsequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.
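A minimal sketch of the random-walk Metropolis idea for multi-mode vibrational sampling, under the assumption of a Boltzmann target P(v) ∝ exp(−Σ_i v_i θ_i / T) over harmonic-oscillator quantum numbers; the characteristic temperatures and proposal scheme here are illustrative, not the paper's implementation.

```python
import math
import random

def metropolis_vibrational_sample(char_temps, T, n_steps=5000, rng=random):
    """Random-walk Metropolis sampling of vibrational quantum numbers
    for several harmonic-oscillator modes at temperature T (K).

    char_temps: characteristic vibrational temperature theta_i of each mode.
    """
    v = [0] * len(char_temps)

    def log_p(state):
        return -sum(n * th for n, th in zip(state, char_temps)) / T

    for _ in range(n_steps):
        i = rng.randrange(len(v))
        trial = list(v)
        trial[i] += rng.choice((-1, 1))       # propose +/- one quantum in one mode
        if trial[i] < 0:
            continue                          # unphysical state, reject
        if rng.random() < math.exp(min(0.0, log_p(trial) - log_p(v))):
            v = trial
    return v
```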
Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation
NASA Astrophysics Data System (ADS)
Kadoura, Ahmad; Siripatana, Adil; Sun, Shuyu; Knio, Omar; Hoteit, Ibrahim
2016-06-01
In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous computational time reduction, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data, a set of supercritical isotherms, and part of the two-phase envelope, of several pure components were used for tuning the LJ parameters (ɛ, σ). Based on the conducted optimization, excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2H6.
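The surrogate idea can be illustrated in one dimension: project a handful of expensive model evaluations onto an orthogonal polynomial basis (Legendre here, a common PC choice for uniform inputs), then evaluate the cheap polynomial instead of the model. This is a hedged sketch of the general technique, not the paper's multi-ensemble surrogate.

```python
import numpy as np

def build_pc_surrogate(model, order=6, n_train=64, seed=0):
    """Fit a 1D polynomial-chaos-style surrogate: least-squares
    projection of 'model' onto Legendre polynomials on [-1, 1]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_train)           # training designs
    y = np.array([model(xi) for xi in x])         # expensive evaluations
    V = np.polynomial.legendre.legvander(x, order)  # basis matrix
    coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
    return lambda xq: np.polynomial.legendre.legval(xq, coeffs)
```

Once the coefficients are fit, each surrogate evaluation is a polynomial sum, which is what makes the large-scale (ɛ, σ) optimization in the abstract affordable.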
3D Direct Simulation Monte Carlo Modeling of the Spacecraft Environment of Rosetta
NASA Astrophysics Data System (ADS)
Bieler, A. M.; Tenishev, V.; Fougere, N.; Gombosi, T. I.; Hansen, K. C.; Combi, M. R.; Huang, Z.; Jia, X.; Toth, G.; Altwegg, K.; Wurz, P.; Jäckel, A.; Le Roy, L.; Gasc, S.; Calmonte, U.; Rubin, M.; Tzou, C. Y.; Hässig, M.; Fuselier, S.; De Keyser, J.; Berthelier, J. J.; Mall, U. A.; Rème, H.; Fiethe, B.; Balsiger, H.
2014-12-01
The European Space Agency's Rosetta mission is the first to escort a comet over an extended time as the comet makes its way through the inner solar system. The ROSINA instrument suite, consisting of a double focusing mass spectrometer, a time of flight mass spectrometer and a pressure sensor, will provide temporally and spatially resolved data on the comet's volatile inventory. The effect of spacecraft outgassing is well known and has been measured with the ROSINA instruments onboard Rosetta throughout the cruise phase. The flux of released neutral gas originating from the spacecraft cannot be distinguished from the cometary signal by the mass spectrometers and varies significantly with solar illumination conditions. For accurate interpretation of the instrument data, a good understanding of spacecraft outgassing is necessary. In this talk we present results simulating the spacecraft environment with the Adaptive Mesh Particle Simulator (AMPS) code. AMPS is a Direct Simulation Monte Carlo code that includes multiple species in a 3D adaptive mesh to describe a full scale model of the spacecraft environment. We use the triangulated surface model of the spacecraft to implement realistic outgassing rates for different areas on the surface and take shadowing effects into consideration. The resulting particle fluxes are compared to the measurements of the ROSINA experiment, and implications for ROSINA measurements and data analysis are discussed. Spacecraft outgassing has implications for future space missions to rarefied atmospheres as it imposes a limit on the detection of various species.
A Test Particle Model for Monte Carlo Simulation of Plasma Transport Driven by Quasineutrality
NASA Astrophysics Data System (ADS)
Kuhl, Nelson M.
1995-11-01
This paper is concerned with the problem of transport in controlled nuclear fusion as it applies to confinement in a tokamak or stellarator. We perform numerical experiments to validate a mathematical model of P. R. Garabedian in which the electric potential is determined by quasineutrality because of singular perturbation of the Poisson equation. The simulations are made using a transport code written by O. Betancourt and M. Taylor, with changes to incorporate our case studies. We adopt a test particle model naturally suggested by the problem of tracking particles in plasma physics. The statistics due to collisions are modeled by a drift kinetic equation whose numerical solution is based on the Monte Carlo method of A. Boozer and G. Kuo-Petravic. The collision operator drives the distribution function in velocity space towards the normal distribution, or Maxwellian. It is shown that details of the collision operator other than its dependence on the collision frequency and temperature matter little for transport, and the role of conservation of momentum is investigated. Exponential decay makes it possible to find the confinement times of both ions and electrons by high performance computing. Three-dimensional perturbations in the electromagnetic field model the anomalous transport of electrons and simulate the turbulent behavior that is presumably triggered by the displacement current. We make a convergence study of the method, derive scaling laws that are in good agreement with predictions from experimental data, and present a comparison with the JET experiment.
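The key property of such collision operators, relaxation of the velocity distribution toward a Maxwellian at a rate set only by the collision frequency and temperature, can be sketched with a Langevin-type update (an illustrative stand-in, not the Boozer/Kuo-Petravic operator): deterministic drag at frequency nu plus a random kick whose strength is fixed by the fluctuation-dissipation balance.

```python
import math
import random

def collide(v, nu, T_over_m, dt, rng=random):
    """One Monte Carlo collision step in Langevin form.

    Drag at collision frequency nu plus a Gaussian kick sized so the
    stationary velocity distribution is Maxwellian with variance T/m.
    """
    drag = math.exp(-nu * dt)
    kick = math.sqrt(T_over_m * (1.0 - drag * drag))
    return drag * v + kick * rng.gauss(0.0, 1.0)
```

Iterating this map from any initial velocity, the time-averaged mean square velocity converges to T/m, i.e. the distribution forgets its initial condition exponentially, consistent with the abstract's observation that operator details beyond nu and T matter little.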
Parsons, Neal; Levin, Deborah A; van Duin, Adri C T; Zhu, Tong
2014-12-21
The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N2(¹Σg⁺)-N2(¹Σg⁺) collision pair for conditions expected in hypersonic shocks using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections. PMID:25527935
Monte Carlo computer simulations of Venus equilibrium and global resurfacing models
NASA Technical Reports Server (NTRS)
Dawson, D. D.; Strom, R. G.; Schaber, G. G.
1992-01-01
Two models have been proposed for the resurfacing history of Venus: (1) equilibrium resurfacing and (2) global resurfacing. The equilibrium model consists of two cases: in case 1, areas less than or equal to 0.03 percent of the planet are spatially randomly resurfaced at intervals of less than or equal to 150,000 yr to produce the observed spatially random distribution of impact craters and average surface age of about 500 m.y.; and in case 2, areas greater than or equal to 10 percent of the planet are resurfaced at intervals of greater than or equal to 50 m.y. The global resurfacing model proposes that the entire planet was resurfaced about 500 m.y. ago, destroying the preexisting crater population, followed by significantly reduced volcanism and tectonism. The present crater population has accumulated since then, with only 4 percent of the observed craters having been embayed by more recent lavas. To test the equilibrium resurfacing model we have run several Monte Carlo computer simulations for the two proposed cases. It is shown that the equilibrium resurfacing model is not a valid model for an explanation of the observed crater population characteristics or Venus' resurfacing history. The global resurfacing model is the most likely explanation for the characteristics of Venus' cratering record. The amount of resurfacing since that event, some 500 m.y. ago, can be estimated by a different type of Monte Carlo simulation. To date, our initial simulation has only considered the easiest case to implement. In this case, the volcanic events are randomly distributed across the entire planet and, therefore, contrary to observation, the flooded craters are also randomly distributed across the planet.
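A toy version of the equilibrium-resurfacing experiment (all sizes, rates, and the flat unit-square geometry are illustrative simplifications of the authors' planetary-surface simulation): craters accumulate at random positions, and periodic random patches erase the craters they cover, so the surviving population can be compared against a spatially random one.

```python
import random

def resurfacing_simulation(n_steps=500, patch_area=0.10,
                           resurface_every=50, rng=random):
    """Equilibrium-resurfacing toy model on a unit square.

    One crater forms per step at a random position; every
    `resurface_every` steps a random square patch covering
    `patch_area` of the surface erases the craters inside it.
    Returns the surviving crater coordinates.
    """
    craters = []
    for t in range(1, n_steps + 1):
        craters.append((rng.random(), rng.random()))
        if t % resurface_every == 0:
            cx, cy = rng.random(), rng.random()
            half = patch_area ** 0.5 / 2.0        # square patch half-side
            craters = [(x, y) for x, y in craters
                       if abs(x - cx) > half or abs(y - cy) > half]
    return craters
```

Statistics such as crater density variance or clustering of the survivors can then be tested against the observed spatially random crater distribution, which is the comparison the abstract describes.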
A virtual source model for Monte Carlo simulation of helical tomotherapy.
Yuan, Jiankui; Rong, Yi; Chen, Quan
2015-01-01
The purpose of this study was to present a Monte Carlo (MC) simulation method based on a virtual source, jaw, and MLC model to calculate dose in patients for helical tomotherapy without the need to calculate phase-space files (PSFs). Current studies on tomotherapy MC simulation adopt a full MC model, which includes extensive modeling of the radiation source, primary and secondary jaws, and multileaf collimator (MLC). In the full MC model, PSFs need to be created at different scoring planes to facilitate the patient dose calculations. In the present work, the virtual source model (VSM) we established was based on the gold standard beam data of a tomotherapy unit, which can be exported from the treatment planning station (TPS). The TPS-generated sinograms were extracted from the archived patient XML (eXtensible Markup Language) files. The fluence map for the MC sampling was created by incorporating the percentage leaf open time (LOT) with the leaf filter, jaw penumbra, and leaf latency obtained from the sinogram files. The VSM was validated for various geometry setups and clinical situations involving heterogeneous media and delivery quality assurance (DQA) cases. An agreement of < 1% was obtained between the measured and simulated results for percent depth doses (PDDs) and open beam profiles for all three jaw settings in the VSM commissioning. The accuracy of the VSM leaf filter model was verified by comparing the measured and simulated results for a Picket Fence pattern. An agreement of < 2% was achieved between the presented VSM and a published full MC model for heterogeneous phantoms. For complex clinical head and neck (HN) cases, the VSM-based MC simulation of DQA plans agreed with the film measurement with 98% of planar dose pixels passing the 2%/2 mm gamma criterion. For patient treatment plans, results showed comparable dose-volume histograms (DVHs) for planning target volumes (PTVs) and organs at risk (OARs). Deviations observed in this study were consistent
NASA Astrophysics Data System (ADS)
Vrugt, Jasper A.; Ter Braak, Cajo J. F.; Clark, Martyn P.; Hyman, James M.; Robinson, Bruce A.
2008-12-01
There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing, parameter, and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled differential evolution adaptive Metropolis (DREAM), that is especially designed to efficiently estimate the posterior probability density function of hydrologic model parameters in complex, high-dimensional sampling problems. This MCMC scheme adaptively updates the scale and orientation of the proposal distribution during sampling and maintains detailed balance and ergodicity. It is then demonstrated how DREAM can be used to analyze forcing data error during watershed model calibration using a five-parameter rainfall-runoff model with streamflow data from two different catchments. Explicit treatment of precipitation error during hydrologic model calibration not only results in prediction uncertainty bounds that are more appropriate but also significantly alters the posterior distribution of the watershed model parameters. This has significant implications for regionalization studies. The approach also provides important new ways to estimate areal average watershed precipitation, information that is of utmost importance for testing hydrologic theory, diagnosing structural errors in models, and appropriately benchmarking rainfall measurement devices.
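The proposal mechanism at the core of DREAM descends from differential evolution MCMC: each chain jumps along the difference of two other randomly chosen chains, so the proposal's scale and orientation automatically track the current posterior sample. A minimal sketch of that update (the full DREAM sampler adds subspace sampling, outlier handling, and crossover, which are omitted here):

```python
import numpy as np

def demc_step(chains, log_post, gamma=None, eps=1e-6, rng=None):
    """One differential-evolution MCMC update over all chains.

    chains: (n_chains, n_dim) array of current states.
    log_post: function returning the log posterior density of a state.
    """
    rng = rng or np.random.default_rng()
    n, d = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * d)        # standard DE-MC jump rate
    out = chains.copy()
    for i in range(n):
        r1, r2 = rng.choice([j for j in range(n) if j != i], 2, replace=False)
        prop = chains[i] + gamma * (chains[r1] - chains[r2]) \
               + eps * rng.standard_normal(d)  # small jitter for ergodicity
        if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
            out[i] = prop                      # Metropolis accept
    return out
```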
Cluster expansion modeling and Monte Carlo simulation of alnico 5–7 permanent magnets
Nguyen, Manh Cuong; Zhao, Xin; Wang, Cai-Zhuang; Ho, Kai-Ming
2015-03-05
The concerns about the supply and resource of rare earth (RE) metals have generated a lot of interest in the search for high performance RE-free permanent magnets. Alnico alloys are traditional non-RE permanent magnets and have received much attention recently due to their good performance at high temperature. In this paper, we develop an accurate and efficient cluster expansion energy model for alnico 5–7. Monte Carlo simulations using the cluster expansion method are performed to investigate the structure of alnico 5–7 at atomistic and nano scales. The alnico 5–7 master alloy is found to decompose into FeCo-rich and NiAl-rich phases at low temperature. The boundary between these two phases is quite sharp (~2 nm) for a wide range of temperature. The compositions of the main constituents in these two phases become higher when the temperature gets lower. Both FeCo-rich and NiAl-rich phases are in B2 ordering with Fe and Al on α-site and Ni and Co on β-site. The degree of order of the NiAl-rich phase is much higher than that of the FeCo-rich phase. In addition, a small magnetic moment is also observed in the NiAl-rich phase, but the moment reduces as the temperature is lowered, implying that the magnetic properties of alnico 5–7 could be improved by lowering the annealing temperature to diminish the magnetism in the NiAl-rich phase. Furthermore, the results from our Monte Carlo simulations are consistent with available experimental results.
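A heavily simplified sketch of composition-conserving lattice Monte Carlo of the kind used with cluster expansion energies: two species on a grid, a nearest-neighbour pair energy standing in for the fitted cluster expansion, and Metropolis-accepted swaps of unlike sites (so composition is conserved, as in the alloy decomposition study). Everything here, the 2D lattice, the single pair coefficient J, the swap bookkeeping, is an illustrative reduction of the paper's model.

```python
import math
import random

def alloy_swap_mc(L=16, x_b=0.5, J=1.0, T=1.0, sweeps=200, rng=None):
    """Toy binary-alloy Monte Carlo on an L x L periodic grid.

    Spins +1/-1 represent the two species; energy keeps only
    nearest-neighbour pair terms with coefficient J.
    """
    rng = rng or random.Random()
    n_b = int(L * L * x_b)
    spins = [1] * n_b + [-1] * (L * L - n_b)
    rng.shuffle(spins)

    def site_energy(i):
        r, c = divmod(i, L)
        nb = [((r + 1) % L) * L + c, ((r - 1) % L) * L + c,
              r * L + (c + 1) % L, r * L + (c - 1) % L]
        return J * spins[i] * sum(spins[j] for j in nb)

    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L * L), rng.randrange(L * L)
        if spins[i] == spins[j]:
            continue                          # only unlike-pair swaps change state
        dE = -2 * (site_energy(i) + site_energy(j))
        # NOTE: the correction when i and j are neighbours is omitted in this sketch
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i], spins[j] = spins[j], spins[i]
    return spins
```

With J chosen so that unlike neighbours are penalized, lowering T produces phase separation into two domains, a minimal analogue of the FeCo-rich/NiAl-rich decomposition in the abstract.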
Development of a randomized 3D cell model for Monte Carlo microdosimetry simulations
Douglass, Michael; Bezak, Eva; Penfold, Scott
2012-06-15
Purpose: The objective of the current work was to develop an algorithm for growing a macroscopic tumor volume from individual randomized quasi-realistic cells. The major physical and chemical components of the cell need to be modeled. It is intended to import the tumor volume into GEANT4 (and potentially other Monte Carlo packages) to simulate ionization events within the cell regions. Methods: A MATLAB© code was developed to produce a tumor coordinate system consisting of individual ellipsoidal cells randomized in their spatial coordinates, sizes, and rotations. An eigenvalue method using a mathematical equation to represent individual cells was used to detect overlapping cells. GEANT4 code was then developed to import the coordinate system into GEANT4 and populate it with individual cells of varying sizes and composed of the membrane, cytoplasm, reticulum, nucleus, and nucleolus. Each region is composed of chemically realistic materials. Results: The in-house developed MATLAB© code was able to grow semi-realistic cell distributions (~2 × 10^8 cells in 1 cm^3) in under 36 h. The cell distribution can be used in any number of Monte Carlo particle tracking toolkits including GEANT4, which has been demonstrated in this work. Conclusions: Using the cell distribution and GEANT4, the authors were able to simulate ionization events in the individual cell components resulting from 80 keV gamma radiation (the code is applicable to other particles and a wide range of energies). This virtual microdosimetry tool will allow for a more complete picture of cell damage to be developed.
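The placement-with-overlap-rejection step can be sketched with spheres instead of the paper's rotated ellipsoids (for ellipsoids the authors use an eigenvalue test; the simple centre-distance check below plays that role, and all sizes are illustrative):

```python
import random

def grow_cell_distribution(n_cells, box=100.0, r_min=4.0, r_max=6.0,
                           max_tries=10_000, rng=random):
    """Place randomly sized spherical 'cells' in a cubic box,
    rejecting any candidate that overlaps an accepted cell.

    Returns a list of (x, y, z, radius) tuples.
    """
    cells = []
    tries = 0
    while len(cells) < n_cells and tries < max_tries:
        tries += 1
        c = (rng.uniform(0, box), rng.uniform(0, box), rng.uniform(0, box),
             rng.uniform(r_min, r_max))
        # accept only if the candidate clears every accepted cell
        if all((c[0] - x) ** 2 + (c[1] - y) ** 2 + (c[2] - z) ** 2
               > (c[3] + r) ** 2 for x, y, z, r in cells):
            cells.append(c)
    return cells
```

For the ~10^8 cells reported in the abstract, a naive all-pairs check like this would be far too slow; spatial hashing or cell lists would be needed, which is presumably why the authors' growth run still takes tens of hours.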
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-01
Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which has limited their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry; the average dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data; the highest computational speed was achieved when the data were stored in the GPU's shared memory. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy that uses an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computation time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and in 0
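The core of particle navigation in quadric-bounded geometry is computing the distance along a ray to a surface x^T A x + b·x + c = 0. A sketch of that single primitive (assuming a symmetric 3×3 matrix A; the function name and tolerances are illustrative, not the paper's GPU implementation):

```python
import math

def ray_quadric_distance(o, d, A, b, c):
    """Distance t > 0 along the ray o + t*d to the quadric surface
    x^T A x + b.x + c = 0 (A symmetric). Returns math.inf if the ray
    never hits the surface in the forward direction."""
    def quad(v, w):  # v^T A w
        return sum(v[i] * A[i][j] * w[j] for i in range(3) for j in range(3))
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    # substituting x = o + t*d gives qa*t^2 + qb*t + qc = 0
    qa = quad(d, d)
    qb = 2.0 * quad(o, d) + dot(b, d)
    qc = quad(o, o) + dot(b, o) + c
    if abs(qa) < 1e-12:                 # degenerate (e.g. planar) case
        if abs(qb) < 1e-12:
            return math.inf
        t = -qc / qb
        return t if t > 1e-9 else math.inf
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0.0:
        return math.inf
    s = math.sqrt(disc)
    for t in sorted(((-qb - s) / (2.0 * qa), (-qb + s) / (2.0 * qa))):
        if t > 1e-9:                    # nearest forward intersection
            return t
    return math.inf
```

A navigator would evaluate this for every bounding surface of the current region and step the particle by the minimum distance (or to the next physics interaction, whichever is closer).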
Verhaegen, F; Liu, H H
2001-02-01
In radiation therapy, new treatment modalities employing dynamic collimation and intensity modulation increase the complexity of dose calculation because a new dimension, time, has to be incorporated into the traditional three-dimensional problem. In this work, we investigated two classes of sampling techniques to incorporate dynamic collimator motion in Monte Carlo simulation. The methods were initially evaluated for modelling enhanced dynamic wedges (EDWs) from Varian accelerators (Varian Medical Systems, Palo Alto, USA). In the position-probability-sampling (PPS) method, a cumulative probability distribution function (CPDF) was computed for the collimator position, which could then be sampled during simulations. In the static-component-simulation (SCS) method, a dynamic field is approximated by multiple static fields in a step-and-shoot fashion. The weights of the particles, or the number of particles simulated for each component field, are computed from the probability distribution function (PDF) of the collimator position. The CPDF and PDF were computed from the segmented treatment tables (STTs) for the EDWs. An output correction factor had to be applied in this calculation to account for the backscattered radiation affecting monitor chamber readings. Comparison of the phase-space data from the PPS method (with the step-and-shoot motion) with those from the SCS method showed excellent agreement. The accuracy of the PPS method was further verified by the agreement between the measured and calculated dose distributions. Compared to the SCS method, the PPS method is more automated and efficient from an operational point of view. The principle of the PPS method can be extended to simulate other dynamic motions and, in particular, intensity-modulated beams using multileaf collimators. PMID:11229715
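The PPS idea reduces to inverse-transform sampling over a discrete table of collimator positions. A minimal sketch (function names are illustrative; in the real method the weights would come from the STT with the output/backscatter correction already applied):

```python
import bisect
import random

def build_cpdf(positions, weights):
    """Build a cumulative probability distribution over discrete collimator
    positions from their (non-negative) dwell weights."""
    total = float(sum(weights))
    cum, acc = [], 0.0
    for w in weights:
        acc += w / total
        cum.append(acc)
    cum[-1] = 1.0  # guard against floating-point shortfall at the top
    return positions, cum

def sample_position(positions, cum, rng=random):
    """Inverse-transform sampling: draw u ~ U(0,1) and return the first
    position whose cumulative probability exceeds u."""
    return positions[bisect.bisect_right(cum, rng.random())]
```

During the simulation each source particle simply calls `sample_position` to pick the collimator state it is transported through, which is what makes the method automatic compared with running one simulation per static component field.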
Nielsen, Jens; D’Avezac, Mayeul; Hetherington, James; Stamatakis, Michail
2013-12-14
Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of the elementary reactions constituting the reaction mechanism, and of the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and in the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of the catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster-expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a Fortran 2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions, we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
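The rejection-free KMC loop underlying such packages can be sketched in a few lines. Below, a toy 1D adsorption/desorption lattice where a first-nearest-neighbor repulsion raises the desorption rate; all rate constants and the 1D geometry are illustrative, not the Zacros model or the NO oxidation mechanism.

```python
import math
import random

def kmc(L=20, steps=500, k_ads=1.0, k_des=0.5, eps_kT=1.0, seed=3):
    """Rejection-free (Gillespie-type) KMC on a periodic 1D lattice.
    Empty sites adsorb at rate k_ads; occupied sites desorb at rate
    k_des * exp(eps_kT * n_occupied_neighbors), i.e. repulsive lateral
    interactions destabilize crowded adsorbates. Returns final occupancy
    and the simulated physical time."""
    rng = random.Random(seed)
    occ = [0] * L
    t = 0.0
    for _ in range(steps):
        rates = []
        for i in range(L):
            if occ[i] == 0:
                rates.append(k_ads)
            else:
                n = occ[(i - 1) % L] + occ[(i + 1) % L]
                rates.append(k_des * math.exp(eps_kT * n))
        total = sum(rates)
        t += -math.log(rng.random()) / total   # exponential waiting time
        u, acc = rng.random() * total, 0.0
        for i, r in enumerate(rates):
            acc += r
            if acc >= u:
                occ[i] ^= 1                    # execute the event: toggle site i
                break
    return occ, t
```

With a cluster-expansion Hamiltonian, only the neighbor-counting line changes: the desorption barrier would be evaluated from the full set of cluster interactions touching the site, which is exactly where the long-range terms enter the rates.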
Antonelli, Maria-Rosaria; Pierangelo, Angelo; Novikova, Tatiana; Validire, Pierre; Benali, Abdelali; Gayet, Brice; De Martino, Antonello
2011-01-01
Polarimetric imaging is emerging as a viable technique for tumor detection and staging. As a preliminary step towards a thorough understanding of the observed contrasts, we present a set of numerical Monte Carlo simulations of the polarimetric response of multilayer structures representing colon samples in the backscattering geometry. In a first instance, a typical colon sample was modeled as one or two scattering “slabs” with monodisperse non absorbing scatterers representing the most superficial tissue layers (the mucosa and submucosa), above a totally depolarizing Lambertian lumping the contributions of the deeper layers (muscularis and pericolic tissue). The model parameters were the number of layers, their thicknesses and morphology, the sizes and concentrations of the scatterers, the optical index contrast between the scatterers and the surrounding medium, and the Lambertian albedo. With quite similar results for single and double layer structures, this model does not reproduce the experimentally observed stability of the relative magnitudes of the depolarizing powers for incident linear and circular polarizations. This issue was solved by considering bimodal populations including large and small scatterers in a single layer above the Lambertian, a result which shows the importance of taking into account the various types of scatterers (nuclei, collagen fibers and organelles) in the same model. PMID:21750762
Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Sohn, Ilyoup
approximately 1 % was achieved with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield - radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield - radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculation under the assumption of fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with the previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects. 
The QSS model is presented to predict the electronic state populations of radiating gas species taking
Monte Carlo simulation of x-ray scatter based on patient model from digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Liu, Bob; Wu, Tao; Moore, Richard H.; Kopans, Daniel B.
2006-03-01
We are developing a breast-specific scatter correction method for digital breast tomosynthesis (DBT). The 3D breast volume was initially reconstructed from 15 projection images acquired from a GE prototype tomosynthesis system without correction of scatter. The voxel values were mapped to tissue compositions using various segmentation schemes. This voxelized digital breast model was entered into a Monte Carlo package simulating the prototype tomosynthesis system. One billion photons were generated from the x-ray source for each projection in the simulation, and images of scattered photons were obtained. A primary-only projection image was then produced by subtracting the scatter image from the corresponding original projection image, which contains contributions from both primary and scattered photons. The scatter-free projection images were then used to reconstruct the 3D breast volume using the same algorithm. Compared with the uncorrected 3D image, the x-ray attenuation coefficients represented by the scatter-corrected 3D image are closer to those derived from the measurement data.
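The correction step itself is a per-pixel subtraction of the MC-estimated scatter image from the measured projection. A sketch (the clamping of negative pixels is an assumption to guard against noise, not a detail stated in the abstract):

```python
def scatter_correct(projection, scatter, floor=0.0):
    """Produce a primary-only image by subtracting the Monte Carlo scatter
    estimate from the measured projection, pixel by pixel. Negative values
    (possible where the scatter estimate is noisy) are clamped to `floor`."""
    return [[max(p - s, floor) for p, s in zip(prow, srow)]
            for prow, srow in zip(projection, scatter)]
```

The corrected projections then feed the unchanged reconstruction algorithm, which is why the method needs no modification of the reconstruction code itself.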
[Rapid simulation of electrode surface treatment based on Monte-Carlo model].
Hu, Zhengtian; Xu, Ying; Guo, Miao; Sun, Zhitong; Li, Yan
2014-12-01
Micro- and integrated biosensors provide a powerful means for cell electrophysiology research. Electroplating platinum black on the electrode can improve the signal-to-noise ratio and sensitivity of the sensor. To quantitatively analyze the electroplating process, this paper proposes a grid search algorithm based on a Monte-Carlo model. The paper also puts forward an operational optimization strategy that can rapidly simulate large numbers of nanoparticles with different dispersed particle sizes (20-200 nm) attaching to the electrode, shortening the average simulation time from 20 hours to 0.5 hour when the test number is 10 and the electrode radius is 100 μm. For nanoparticles in a single layer or in multiple layers, the treatment uniformity and attachment rate were analyzed using the grid search algorithm with different sizes and shapes of electrode. Simulation results showed that, under ideal conditions, when the electrode radius is less than 100 μm, increasing the electrode size has an obvious effect on the effective attachment and homogeneity of the nanoparticles, which is advantageous for the quantitative evaluation of an electrode array's repeatability. For the same electrode area, attachment is best on a circular electrode compared with square and rectangular ones. PMID:25868260
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Ren, Wei-Xin; Perera, Ricardo
2012-11-01
Stochastic model updating must be considered to quantify the uncertainties inherent in real-world engineering structures. In this way the statistical properties of structural parameters, rather than deterministic values, can be sought to indicate parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly with respect to theoretical complexity and computational cost. This study proposes a simple and cost-efficient method that decomposes a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting deterministic values of the parameters. The parameter means and variances can then be statistically estimated from the parameter predictions over all the samples. Meanwhile, the analysis-of-variance approach is employed to evaluate the significance of parameter variability. The proposed method has been demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method presents similar accuracy, while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
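The decomposition can be sketched with a scalar surrogate: for each Monte Carlo sample of the measured response, invert the response surface deterministically, then take statistics over the inverted parameters. Bisection is used here for the inverse step and a monotone surrogate is assumed; both are simplifying choices for illustration, not the paper's optimization scheme.

```python
import statistics

def stochastic_update(surrogate, y_samples, lo, hi, tol=1e-8):
    """For each sampled response y, solve surrogate(theta) = y on [lo, hi]
    by bisection (surrogate assumed monotone there), i.e. one deterministic
    inverse problem per sample. Returns (mean, variance) of the parameter."""
    thetas = []
    for y in y_samples:
        a, b = lo, hi
        fa = surrogate(a) - y
        for _ in range(200):
            m = 0.5 * (a + b)
            fm = surrogate(m) - y
            if abs(fm) < tol:
                break
            if (fa < 0) == (fm < 0):
                a, fa = m, fm     # root lies in the upper half
            else:
                b = m             # root lies in the lower half
        thetas.append(0.5 * (a + b))
    return statistics.mean(thetas), statistics.pvariance(thetas)
```

Example: with a hypothetical surrogate mapping a stiffness parameter to a frequency, `stochastic_update(lambda k: k ** 0.5, samples, 0.0, 10.0)` recovers the stiffness statistics from sampled frequencies.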
Convolution-Based Forced Detection Monte Carlo Simulation Incorporating Septal Penetration Modeling
Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.
2010-01-01
In SPECT imaging, photon transport effects such as scatter, attenuation and septal penetration can negatively affect the quality of the reconstructed image and the accuracy of quantitation estimation. As such, it is useful to model these effects as carefully as possible during the image reconstruction process. Many of these effects can be included in Monte Carlo (MC) based image reconstruction using convolution-based forced detection (CFD). With CFD Monte Carlo (CFD-MC), often only the geometric response of the collimator is modeled, thereby making the assumption that the collimator materials are thick enough to completely absorb photons. However, in order to retain high collimator sensitivity and high spatial resolution, the septa must be as thin as possible, resulting in a significant amount of septal penetration for high-energy radionuclides. A method for modeling the effects of both collimator septal penetration and geometric response using ray tracing (RT) techniques was developed and included in a CFD-MC program. Two look-up tables are pre-calculated based on the specific collimator parameters and radionuclides, and subsequently incorporated into the SIMIND MC program. One table consists of the cumulative septal thickness between any point on the collimator and the center location of the collimator. The other table gives the resultant collimator response for a point source at different distances from the collimator and for various energies. A series of RT simulations have been compared to experimental data for different radionuclides and collimators. Results of the RT technique match the experimental collimator-response data very well, producing correlation coefficients higher than 0.995. Reasonable values of the parameters in the look-up table and computation speed are discussed in order to achieve high accuracy while using minimal storage space for the look-up tables. In order to achieve noise-free projection images from MC, it
Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.
Xin, Cao; Chongshi, Gu
2016-01-01
Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. The stability failure risk ratio described jointly by probability and possibility is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. The stability of a gravity dam is viewed as a hybrid event considering both the fuzziness and the randomness of the failure criterion, design parameters and measured data. A credibility distribution function is constructed as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combining this with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to risk calculation for both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analyzing the stability failure risk of gravity dams. The risk assessment obtained can reflect the influence of both sorts of uncertainty, and is suitable as an index value. PMID:27386264
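A hybrid (random plus fuzzy) Monte Carlo risk estimate can be sketched as follows. Here the fuzzy friction coefficient is drawn from a triangular distribution as a crude stand-in for sampling from a credibility distribution, and all numbers (loads, weights, the sliding criterion) are invented for illustration; they are not the paper's dam-section data.

```python
import random

def failure_risk(n=100_000, seed=11):
    """Hybrid Monte Carlo sketch of a sliding-stability check:
    - random variables (load, dam weight) ~ Normal, representing aleatory
      uncertainty in measured data;
    - a fuzzy friction coefficient, here sampled as a triangular variate
      (min, max, mode) as a simple proxy for a credibility distribution.
    Failure is counted when the horizontal load exceeds the frictional
    resistance; the returned ratio estimates the failure risk."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        load = rng.gauss(100.0, 10.0)          # horizontal load, kN (illustrative)
        f = rng.triangular(0.6, 0.9, 0.75)     # fuzzy friction coefficient
        weight = rng.gauss(180.0, 5.0)         # effective vertical load, kN
        if load > f * weight:                  # sliding failure criterion
            fails += 1
    return fails / n
```

A faithful credibility-theory implementation would sample the fuzzy variables through their credibility inverse rather than a triangular density, but the outer counting loop is the same.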
Biscay, F; Ghoufi, A; Goujon, F; Lachet, V; Malfreyt, P
2008-11-01
The anisotropic united atoms (AUA4) model has been used for linear and branched alkanes to predict the surface tension as a function of temperature by Monte Carlo simulations. Simulations are carried out for n-alkanes ( n-C5, n-C6, n-C7, and n-C10) and for two branched C7 isomers (2,3-dimethylpentane and 2,4-dimethylpentane). Different operational expressions of the surface tension using both the thermodynamic and the mechanical definitions have been applied. The simulated surface tensions with the AUA4 model are found to be consistent within both definitions and in good agreement with experiments. PMID:18847235
Baek, I-H; Lee, B-Y; Kang, J; Kwon, K-I
2015-04-01
Ondansetron is a potent antiemetic drug that has been commonly used to treat acute and chemotherapy-induced nausea and vomiting (CINV) in dogs. The aim of this study was to perform a pharmacokinetic analysis of ondansetron in dogs following oral administration of a single dose. A single 8-mg oral dose of ondansetron (Zofran®) was administered to beagles (n = 18), and the plasma concentrations of ondansetron were measured by liquid chromatography-tandem mass spectrometry. The data were analyzed by modeling approaches using ADAPT5, and model discrimination was determined by the likelihood-ratio test. The peak plasma concentration (Cmax) was 11.5 ± 10.0 ng/mL at 1.1 ± 0.8 h. The area under the plasma concentration vs. time curve from time zero to the last measurable concentration was 15.9 ± 14.7 ng·h/mL, and the half-life calculated from the terminal phase was 1.3 ± 0.7 h. The interindividual variability of the pharmacokinetic parameters was high (coefficient of variation > 44.1%), and the one-compartment model described the pharmacokinetics of ondansetron well. The estimated plasma concentration range of the usual empirical dose from the Monte Carlo simulation was 0.1-13.2 ng/mL. These findings will facilitate determination of the optimal dose regimen for dogs with CINV. PMID:25131428
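The Monte Carlo step rests on the standard one-compartment, first-order absorption model. A sketch with log-normal between-subject variability; the typical parameter values below are illustrative guesses consistent only with the reported half-life and concentration scale, not the study's fitted estimates.

```python
import math
import random

def conc(t, dose_ug, ka, ke, v_f):
    """One-compartment oral model:
    C(t) = D*ka / (V/F * (ka - ke)) * (exp(-ke*t) - exp(-ka*t)),
    in ng/mL when D is in ug and V/F in L."""
    return dose_ug * ka / (v_f * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def mc_cmax(n=2000, dose_ug=8000.0, seed=5):
    """Monte Carlo over log-normal inter-individual variability in ka, ke
    and apparent volume V/F; returns the simulated Cmax for each subject."""
    rng = random.Random(seed)
    cmaxes = []
    for _ in range(n):
        ka = 2.0 * math.exp(rng.gauss(0.0, 0.4))    # absorption rate, 1/h (assumed)
        ke = 0.53 * math.exp(rng.gauss(0.0, 0.4))   # elimination rate, 1/h (t1/2 ~ 1.3 h)
        vf = 700.0 * math.exp(rng.gauss(0.0, 0.4))  # apparent volume V/F, L (assumed)
        if abs(ka - ke) < 1e-6:
            continue  # the closed form degenerates when ka == ke
        # crude Cmax: scan the first 6 h on a 0.05 h grid
        cmaxes.append(max(conc(0.05 * i, dose_ug, ka, ke, vf) for i in range(1, 121)))
    return cmaxes
```

Percentiles of the returned `cmaxes` list are what yield a simulated concentration range of the kind reported in the abstract.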
Ikawa, Kazuro; Morikawa, Norifumi; Ikeda, Kayo; Ohge, Hiroki; Sueda, Taijiro
2008-10-01
This study evaluated the pharmacodynamics of biapenem in peritoneal fluid (PF). Biapenem (300 or 600 mg) was administered via a 0.5-h infusion to 19 patients before abdominal surgery. Venous blood and PF samples were obtained after 0.5, 1, 2, 3, 4, 5 and 6 h. Drug concentration data (108 plasma samples and 105 PF samples) were analysed using population pharmacokinetic modelling. A three-compartment model fits the data, with creatinine clearance (CLCr) as the most significant covariate: CL (L/h) = 0.036 × CLCr + 4.88, V1 (L) = 6.95, Q2 (L/h) = 2.05, V2 (L) = 3.47, Q3 (L/h) = 13.7 and V3 (L) = 5.91, where CL is the clearance, Q2 and Q3 are the intercompartmental clearances, and V1, V2 and V3 are the volumes of distribution of the central, peripheral and peritoneal compartments, respectively. A Monte Carlo simulation using the pharmacokinetic model showed that the probabilities of attaining the bactericidal exposure target (30% of the time above the minimum inhibitory concentration (T>MIC)) in PF were greater than or equal to those in plasma. For CLCr = 90 and 60 mL/min, the site-specific pharmacodynamic-derived breakpoints (the highest MIC values at which the probabilities of target attainment in PF were ≥90%) were 2 μg/mL for 300 mg every 12 h, 4 μg/mL for 300 mg every 8 h (q8h) and 8 μg/mL for 600 mg q8h. These results should support the clinical use of biapenem for the treatment of intra-abdominal infections and facilitate the design of dosing regimens. PMID:18602798
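A probability-of-target-attainment (PTA) calculation of this kind can be sketched with a deliberately simplified kinetic model: a one-compartment bolus with log-normal clearance variability, rather than the paper's three-compartment infusion model with the CLCr covariate. All parameter values are illustrative assumptions.

```python
import math
import random

def pta(mic, n=5000, dose_mg=300.0, tau_h=8.0, v_l=10.0, target=0.30, seed=2):
    """Probability of attaining the 30% T>MIC target across a simulated
    population. One-compartment bolus sketch: C(t) = (dose/V)*exp(-CL/V * t)
    after each dose; time above MIC over a dosing interval of tau_h hours.
    Clearance varies log-normally between patients (illustrative values)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cl = 8.0 * math.exp(rng.gauss(0.0, 0.3))   # clearance, L/h (assumed)
        k = cl / v_l                                # elimination rate constant
        c0 = dose_mg / v_l                          # peak concentration, mg/L
        # time above MIC: solve c0 * exp(-k*t) = MIC
        t_above = math.log(c0 / mic) / k if c0 > mic else 0.0
        if min(t_above / tau_h, 1.0) >= target:
            hits += 1
    return hits / n
```

Scanning `pta(mic)` over doubling MIC values and reporting the highest MIC with PTA ≥ 90% is exactly how pharmacodynamic breakpoints like those in the abstract are derived.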
A bone composition model for Monte Carlo x-ray transport simulations
Zhou Hu; Keall, Paul J.; Graves, Edward E.
2009-03-15
In the megavoltage energy range although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single-bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in mass attenuation coefficients of the bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with the bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of other elements. To test this model using the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from this formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo calculations performed using this model and the single-bone model were compared, which demonstrated that at kilovoltage energies the discrepancy could be more than 100% in bony dose and 30% in soft tissue dose. Percentage terma computed with the model agrees with that calculated on the published compositions to within 2.2% for kV spectra and 1.5% for MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult human bones.
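The core of the proposed bone model is a least-squares polynomial fit of elemental weight fraction against bone density. A dependency-free sketch of a quadratic fit via the normal equations (the data below are synthetic; the paper fits published adult-bone compositions, which are not reproduced here):

```python
def polyfit2(xs, ys):
    """Least-squares quadratic fit y ~ a + b*x + c*x^2 via the normal
    equations, solved by Gauss-Jordan elimination with partial pivoting.
    Returns the coefficients (a, b, c)."""
    s = [sum(x ** k for x in xs) for k in range(5)]          # power sums
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    m = [[s[0], s[1], s[2], t[0]],
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[piv] = m[piv], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [a - f * b for a, b in zip(m[r], m[i])]
    return tuple(m[i][3] / m[i][i] for i in range(3))
```

In the model described above, one such fit per element (Ca, P) as a function of density, plus averaged fractions for the remaining elements, defines the composition assigned to each voxel before the cross-section data are generated.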
NASA Astrophysics Data System (ADS)
Nourazar, S. S.; Jahangiri, P.; Aboutalebi, A.; Ganjaei, A. A.; Nourazar, M.; Khadem, J.
2011-06-01
The effect of the new terms in the improved algorithm, the modified direct simulation Monte-Carlo (MDSMC) method, is investigated by simulating a rarefied binary gas mixture flow inside a rotating cylinder. Dalton's law for the partial pressures contributed by each species of the binary gas mixture is incorporated into our simulation using the MDSMC method and the direct simulation Monte-Carlo (DSMC) method. Moreover, the effect of the exponent of the cosine of the deflection angle (α) in the inter-molecular collision models, the variable soft sphere (VSS) and the variable hard sphere (VHS), is investigated. The improvement in the simulation results is pronounced with the MDSMC method when compared with the results of the DSMC method. Compared with the VHS model, the VSS model shows some improvement in the simulated mixture temperature at radial distances close to the cylinder wall, where the temperature reaches its maximum value.
Spatial Correlations in Monte Carlo Criticality Simulations
NASA Astrophysics Data System (ADS)
Dumonteil, E.; Malvagi, F.; Zoia, A.; Mazzolo, A.; Artusio, D.; Dieudonné, C.; De Mulatier, C.
2014-06-01
Temporal correlations arising in Monte Carlo criticality codes have focused the attention of both developers and practitioners for a long time. These correlations affect the evaluation of tallies of loosely coupled systems, where the system's typical size is very large compared to the diffusion/absorption length scale of the neutrons. The time correlations are closely related to spatial correlations, the two variables being linked by the transport equation. This paper therefore addresses the question of diagnosing spatial correlations in Monte Carlo criticality simulations. To that end, we propose a spatial correlation function well suited to Monte Carlo simulations, and demonstrate its use in simulating a fuel pin-cell. The results are discussed, modeled and interpreted using the tools of branching processes from statistical mechanics. A mechanism called "neutron clustering", which affects simulations, is discussed in this framework.
NASA Astrophysics Data System (ADS)
Schlijper, A. G.; van Bergen, A. R. D.; Smit, B.
1990-01-01
We present and demonstrate an accurate, reliable, and computationally cheap method for the calculation of free energies in Monte Carlo simulations of lattice models. Even in the critical region it yields good results with comparatively short simulation runs. The method combines upper and lower bounds on the thermodynamic limit entropy density to yield not only an accurate estimate of the free energy but a bound on the possible error as well. The method is demonstrated on the two- and three-dimensional Ising models and the three-dimensional, three-states Potts model.
Luo Xueli; Day, Christian; Haas, Horst; Varoutis, Stylianos
2011-07-15
For the torus of the nuclear fusion project ITER (originally the International Thermonuclear Experimental Reactor, but also Latin: the way), eight high-performance large-scale customized cryopumps must be designed and manufactured to accommodate the very high pumping speeds and throughputs of the fusion exhaust gas needed to maintain the plasma under stable vacuum conditions, and to comply with other criteria which cannot be met by standard commercial vacuum pumps. Under an earlier research and development program, a model pump of reduced scale, based on active cryosorption on charcoal-coated panels at 4.5 K, was manufactured and tested systematically. The present article focuses on the simulation of the true three-dimensional complex geometry of the model pump with the newly developed ProVac3D Monte Carlo code. It is shown for gas throughputs of up to 1000 sccm (≈1.69 Pa·m³/s at T = 0 °C) in the free molecular regime that the numerical simulation results are in good agreement with the measured pumping speeds. Meanwhile, the capture coefficient associated with the virtual region around the cryogenic panels and shields, which holds for higher throughputs, is calculated using this generic approach. This means that test particle Monte Carlo simulations in free molecular flow can be used not only for the optimization of the pumping system but also to supply the input parameters necessary for future direct simulation Monte Carlo in the full flow regime.
Schaefer, C.; Jansen, A. P. J.
2013-02-07
We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at the molecular scale to transport equations at the macroscopic scale. The method is applicable to steady-state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to make efficient use of a limited number of kinetic Monte Carlo simulations. In general, the stochastic kinetic Monte Carlo results do not obey mass conservation, so unphysical accumulation of mass could occur in the reactor. We have therefore developed a mass-balance correction method based on a stoichiometry matrix and a least-squares problem, reduced to a non-singular set of linear equations, that is applicable to any surface-catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation, in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interactions at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from the literature.
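The mass-balance correction can be illustrated as a least-squares projection; this generic sketch assumes a simple linear conservation constraint and is not the paper's exact stoichiometry-matrix formulation.

```python
import numpy as np

def mass_balance_correction(rates, conservation):
    """Smallest correction (in the least-squares sense) making noisy
    KMC rates obey mass balance.

    `rates` is the vector of net species production rates estimated from
    a stochastic KMC run; `conservation` is a matrix whose rows are
    linear mass-conservation constraints that the exact rates must
    satisfy (conservation @ rates == 0). The corrected rates are the
    orthogonal projection of the noisy rates onto the constraint null
    space, obtained from a small non-singular linear system.
    """
    A = np.asarray(conservation, dtype=float)
    r = np.asarray(rates, dtype=float)
    # Minimal-norm correction delta solving A @ (r + delta) = 0:
    delta = -A.T @ np.linalg.solve(A @ A.T, A @ r)
    return r + delta

# Example: total mass (sum of rates) must vanish for a closed balance.
noisy = np.array([1.02, -0.49, -0.50])
fixed = mass_balance_correction(noisy, [[1.0, 1.0, 1.0]])
print(fixed.sum())  # ~0 after correction
```

The projection distributes the stochastic error evenly over the species, which is the qualitative behavior one wants from a minimal correction.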
D. L. Kelly
2007-06-01
Markov chain Monte Carlo (MCMC) techniques represent an extremely flexible and powerful approach to Bayesian modeling. This work illustrates the application of such techniques to the time-dependent reliability of components with repair. The WinBUGS package is used to illustrate, via examples, how Bayesian techniques can be used for parametric statistical modeling of time-dependent component reliability. Additionally, the crucial, but often overlooked, subject of model validation is discussed, and summary statistics for judging the model's ability to replicate the observed data are developed, based on the posterior predictive distribution for the parameters of interest.
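A minimal Metropolis sampler conveys the MCMC idea, assuming exponential failure times with a Gamma(1, 1) prior; the article itself uses WinBUGS and richer time-dependent models, so this is only a toy stand-in.

```python
import math
import random

def metropolis_failure_rate(failure_times, n_samples=20_000, step=0.2, seed=3):
    """Posterior samples of an exponential failure rate lambda via
    random-walk Metropolis.

    Likelihood: t_i ~ Exponential(lam); prior: lam ~ Gamma(1, 1).
    """
    rng = random.Random(seed)
    n, total = len(failure_times), sum(failure_times)

    def log_post(lam):
        if lam <= 0.0:
            return -math.inf
        # n*log(lam) - lam*total (likelihood) plus -lam (Gamma(1,1) prior)
        return n * math.log(lam) - lam * total - lam

    lam, samples = 1.0, []
    for _ in range(n_samples):
        prop = lam + rng.gauss(0.0, step)
        if math.log(1.0 - rng.random()) < log_post(prop) - log_post(lam):
            lam = prop
        samples.append(lam)
    return samples

times = [0.8, 1.3, 0.5, 2.1, 0.9, 1.7, 0.4, 1.1]  # hypothetical failure data
post = metropolis_failure_rate(times)
print(sum(post) / len(post))  # posterior mean of lambda
```

With a conjugate prior the exact posterior here is Gamma(1 + n, 1 + Σt), so the sampler's posterior mean can be checked analytically; a posterior predictive check would then draw new data sets from sampled rates and compare them to the observations, as the article discusses.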
McMillan, Kyle; McNitt-Gray, Michael; Ruan, Dan
2013-01-01
Purpose: The purpose of this study is to adapt an equivalent source model, originally developed for conventional CT Monte Carlo dose quantification, to the radiation oncology context and to validate its application for evaluating the concomitant dose incurred by a kilovoltage (kV) cone-beam CT (CBCT) system integrated into a linear accelerator. Methods: In order to properly characterize beams from the integrated kV CBCT system, the authors adapted a previously developed equivalent source model consisting of an equivalent spectrum module that takes intrinsic filtration into account and an equivalent filter module characterizing the added bowtie filtration. An equivalent spectrum was generated for 80, 100, and 125 kVp beams, with beam energy characterized by half-value layer measurements. An equivalent filter description was generated from bowtie profile measurements for both the full and half bowtie. Equivalent source models for each combination of equivalent spectrum and filter were incorporated into the Monte Carlo software package MCNPX. Monte Carlo simulations were then validated against in-phantom measurements for both the radiographic and CBCT modes of operation of the kV CBCT system. Radiographic and CBCT imaging dose was measured for a variety of protocols at various locations within body (32 cm in diameter) and head (16 cm in diameter) CTDI phantoms. The in-phantom radiographic and CBCT dose was simulated at all measurement locations and converted to absolute dose using normalization factors calculated from air scan measurements and corresponding simulations. The simulated results were compared with the physical measurements and their discrepancies were assessed quantitatively. Results: Strong agreement was observed between in-phantom simulations and measurements. For the radiographic protocols, simulations uniformly underestimated measurements by 0.54%–5.14% (mean difference = −3.07%, SD = 1.60%). For the CBCT protocols, simulations uniformly underestimated
Buyukada, Musa
2016-09-01
Co-combustion of coal and peanut hull (PH) was investigated using artificial neural networks (ANN), particle swarm optimization (PSO), and Monte Carlo simulation as a function of blend ratio, heating rate, and temperature. The best prediction was achieved by the ANN61 multi-layer perceptron model, with an R² of 0.99994. A blend ratio of 90:10 (PH to coal, wt%), a temperature of 305 °C, and a heating rate of 49 °C min⁻¹ were determined as the optimum input values, and a yield of 87.4% was obtained under PSO-optimized conditions. The validation experiments resulted in yields of 87.5 ± 0.2% after three replications. Monte Carlo simulations were used for probabilistic assessments of the stochastic variability and uncertainty associated with the explanatory variables of the co-combustion process.
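Monte Carlo uncertainty propagation of the kind described can be sketched as follows; the quadratic response surface standing in for the trained ANN and the input distributions are invented for illustration.

```python
import random

def monte_carlo_yield(n_trials=50_000, seed=4):
    """Propagate uncertainty in co-combustion inputs through a response
    model by Monte Carlo sampling, returning the mean yield and a lower
    5th-percentile value for probabilistic assessment.
    """
    rng = random.Random(seed)
    yields = []
    for _ in range(n_trials):
        blend = rng.gauss(90.0, 2.0)    # peanut-hull fraction, wt% (assumed spread)
        temp = rng.gauss(305.0, 10.0)   # temperature, deg C (assumed spread)
        rate = rng.gauss(49.0, 3.0)     # heating rate, deg C/min (assumed spread)
        # hypothetical response surface peaking near the optimum inputs
        y = (87.4 - 0.01 * (blend - 90.0) ** 2
                  - 0.001 * (temp - 305.0) ** 2
                  - 0.01 * (rate - 49.0) ** 2)
        yields.append(y)
    yields.sort()
    mean = sum(yields) / len(yields)
    p05 = yields[int(0.05 * len(yields))]
    return mean, p05
```

Sampling the inputs rather than fixing them at their optima is what turns the deterministic optimum (87.4%) into a distribution whose spread quantifies the process uncertainty.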
Combining the Monotonic Lagrangian Grid with a Direct Simulation Monte Carlo Model
NASA Astrophysics Data System (ADS)
Cybyk, Bohdan Z.; Oran, Elaine S.; Boris, Jay P.; Anderson, John D.
1995-12-01
Using the monotonic Lagrangian grid (MLG) as a data structure in the direct simulation Monte Carlo (DSMC) methodology produces an approach that automatically adjusts grid resolution to time-varying densities in the flow. The MLG is an algorithm for tracking and sorting moving particles, with a monotonic data structure for indexing and storing the physical attributes of the particles. The DSMC method is a direct particle simulation technique widely used in predicting rarefied flows of dilute gases. The monotonicity features of the MLG ensure that particles close in physical space are stored in adjacent array locations, so that particle interactions may be restricted to a "template" of near neighbors. The MLG templates provide a time-varying grid network that automatically adapts to local number densities within the flowfield. Computational advantages and disadvantages of this new implementation are demonstrated by a series of test problems.
Modulated pulse bathymetric lidar Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Luo, Tao; Wang, Yabo; Wang, Rong; Du, Peng; Min, Xia
2015-10-01
A typical modulated pulse bathymetric lidar system is investigated by simulation using a modulated pulse lidar simulation system. In the simulation, the return signal is generated by the Monte Carlo method with a modulated pulse propagation model and processed with mathematical tools such as cross-correlation and digital filtering. Computer simulation results incorporating the modulation detection scheme reveal a significant suppression of the water backscattering signal and a corresponding enhancement of target contrast. Further simulation experiments are performed with various modulation and reception variables to investigate their effect on the bathymetric system performance.
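The cross-correlation step of the return-signal processing can be sketched with plain numpy; the pulse shape, delay, and noise level are illustrative.

```python
import numpy as np

def estimate_delay(reference, received):
    """Recover the round-trip delay of a modulated lidar pulse by
    cross-correlating the received waveform with the transmitted one.
    Returns the lag (in samples) of the correlation peak.
    """
    ref = np.asarray(reference, float) - np.mean(reference)
    rec = np.asarray(received, float) - np.mean(received)
    corr = np.correlate(rec, ref, mode="full")
    # lag index of the correlation peak, relative to zero lag
    return int(np.argmax(corr)) - (len(ref) - 1)

# Example: a modulated pulse delayed by 25 samples in noise
rng = np.random.default_rng(0)
t = np.arange(200)
pulse = np.exp(-((t - 30) / 8.0) ** 2) * np.cos(0.7 * t)   # modulated pulse
echo = np.roll(pulse, 25) * 0.3 + rng.normal(0.0, 0.02, t.size)
print(estimate_delay(pulse, echo))  # 25
```

Because the modulation decorrelates the pulse from slowly varying water backscatter, the correlation peak stands out even when the direct return is heavily attenuated, which is the contrast-enhancement effect described above.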
Fast Lattice Monte Carlo Simulations of Polymers
NASA Astrophysics Data System (ADS)
Wang, Qiang; Zhang, Pengfei
2014-03-01
The recently proposed fast lattice Monte Carlo (FLMC) simulations (with multiple occupancy of lattice sites (MOLS) and Kronecker δ-function interactions) give much faster and better sampling of configuration space than both off-lattice molecular simulations (with pair-potential calculations) and conventional lattice Monte Carlo simulations (with self- and mutual-avoiding walks and nearest-neighbor interactions) of polymers.[1] Quantitative coarse-graining of polymeric systems can also be performed using lattice models with MOLS.[2] Here we use several model systems, including polymer melts, solutions, and blends, as well as confined and/or grafted polymers, to demonstrate the great advantages of FLMC simulations in the study of equilibrium properties of polymers.
ERIC Educational Resources Information Center
Wood, Dean Arthur
A special purpose digital computer which utilizes the Monte Carlo integration method of obtaining simulations of chemical processes was developed and constructed. The computer, designated as the Monte Carlo Integration Computer (MCIC), was designed as an instructional model for the illustration of kinetic and equilibrium processes, and was…
NASA Astrophysics Data System (ADS)
Guo, Hui-Jun; Huang, Wei; Liu, Xi; Gao, Pan; Zhuo, Shi-Yi; Xin, Jun; Yan, Cheng-Feng; Zheng, Yan-Qing; Yang, Jian-Hua; Shi, Er-Wei
2014-09-01
Polytype stability is very important for high-quality SiC single crystal growth. However, the growth conditions for the 4H, 6H, and 15R polytypes are similar, and the mechanism of polytype stability is not clear. Kinetic aspects, such as surface-step nucleation, are important. The kinetic Monte Carlo method is a common tool for studying surface kinetics in crystal growth. However, present lattice models for kinetic Monte Carlo simulations cannot handle the competitive growth of two or more lattice structures. In this study, a competitive lattice model was developed for kinetic Monte Carlo simulation of the competitive growth of the 4H and 6H polytypes of SiC. The site positions are fixed at the perfect crystal lattice positions, without any adjustment. Surface steps on seeds and large diffusion/deposition ratios have positive effects on 4H polytype stability. The 3D polytype distribution in an SiC ingot grown by the physical vapor transport method showed that the facet preserved the 4H polytype even when the 6H polytype dominated the growth surface. The theoretical and experimental results of polytype growth in SiC suggest that retaining the step-growth mode is an important factor in maintaining a stable single 4H polytype during SiC growth.
Kis, Zoltán; Eged, Katalin; Voigt, Gabriele; Meckbach, Reinhard; Müller, Heinz
2004-02-01
External gamma exposures from radionuclides deposited on surfaces usually make the major contribution to the total dose to the public living in urban-industrial environments. The aim of the paper is to give an example of a calculation of the collective dose, and the averted collective dose, due to the contamination and decontamination of deposition surfaces in a complex environment, based on the results of Monte Carlo simulations. The shielding effects of the structures in complex and realistic industrial environments (where productive and/or commercial activity is carried out) were computed by the Monte Carlo method. Several types of deposition areas (walls, roofs, windows, streets, lawns) were considered. Moreover, the paper gives a summary of the time dependence of the source strengths relative to a reference surface and a short overview of the mechanical and chemical intervention techniques that can be applied in this area. An exposure scenario was designed based on a survey of average German and Hungarian supermarkets. In the first part of the paper, the air kermas per photon per unit area due to each specific deposition area contaminated by ¹³⁷Cs were determined at several arbitrary locations in the whole environment, relative to a reference value of 8.39 × 10⁻⁴ pGy per γ m⁻². The calculations make it possible to assess the contribution of each specific deposition area to the collective dose separately. According to the current results, the roof and the paved area contribute most (approximately 92%) of the total dose in the first year, taking into account the relative contamination of the deposition areas. When integrating over 10 or 50 y, these two surfaces remain the most important contributors, but the ratio shifts increasingly in favor of the roof. The decontamination of the roof and the paved area results in about 80-90% of the total averted collective dose in each calculated time period (1, 10, 50 y).
Analytical model of the binary multileaf collimator of tomotherapy for Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Sterpin, E.; Salvat, F.; Olivera, G. H.; Vynckier, S.
2008-02-01
Helical Tomotherapy (HT) delivers intensity-modulated radiotherapy by means of many configurations of the binary multi-leaf collimator (MLC). The aim of the present study was to devise a method, which we call the 'transfer function' (TF) method, to perform the transport of particles through the MLC much faster than a time-consuming Monte Carlo (MC) simulation, with no significant loss of accuracy. The TF method consists of calculating, for each photon in the phase-space file, the attenuation factor for each leaf (up to three) that the photon passes, assuming straight propagation through closed leaves, and storing these factors in a modified phase-space file. To account for transport through the MLC in a given configuration, the weight of a photon is simply multiplied by the attenuation factors of those leaves that are intersected by the photon ray and are closed. The TF method was combined with the PENELOPE MC code and validated against measurements for the three static field sizes available (40×5, 40×2.5, and 40×1 cm²) and for some MLC patterns. The TF method allows a large reduction in computation time without introducing appreciable deviations from the results of full MC simulations.
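The TF weighting itself is a one-liner per leaf; here is a minimal sketch, assuming a single attenuation coefficient per leaf and hypothetical leaf indices (the real method stores per-photon attenuation factors computed from the beam data, not a fixed mu).

```python
import math

def transfer_function_weight(weight, path_lengths, open_leaves, mu):
    """Re-weight one phase-space photon for a given MLC configuration.

    `path_lengths` maps leaf index -> intersection length t of the photon
    ray with that leaf (up to three leaves); `open_leaves` is the set of
    leaves open in the current configuration; `mu` is the leaf material's
    attenuation coefficient. Only closed leaves attenuate: the photon
    weight is multiplied by exp(-mu * t) for each closed leaf crossed.
    """
    for leaf, t in path_lengths.items():
        if leaf not in open_leaves:          # closed leaf attenuates
            weight *= math.exp(-mu * t)
    return weight

# Photon crossing leaves 12 (closed, 0.5 cm path) and 13 (open):
w = transfer_function_weight(1.0, {12: 0.5, 13: 0.2}, open_leaves={13}, mu=4.0)
print(w)  # exp(-4 * 0.5) ≈ 0.135
```

Because the per-leaf factors are precomputed once into the phase-space file, changing the MLC pattern costs only these multiplications instead of a new transport simulation, which is the source of the speed-up.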
Guérin, Bastien; El Fakhri, Georges
2008-01-01
We have developed and validated a realistic simulation of random coincidences, pixelated block detectors, light sharing among crystal elements, and dead-time in 2D and 3D positron emission tomography (PET) imaging, based on the SimSET Monte Carlo simulation software. Our simulation was validated by comparison to a Monte Carlo transport code widely used for PET modeling, GATE, and to measurements made on a PET scanner. Methods: We modified the SimSET software to allow independent tracking of single photons in the object and septa while taking advantage of the existing voxel-based attenuation and activity distributions and the validated importance sampling techniques implemented in SimSET. For each single photon interacting in the detector, the energy-weighted average of the interaction points was computed, a blurring model was applied to account for light sharing, and the associated crystal was identified. Detector dead-time was modeled in every block as a function of the local singles rate using a variance reduction technique. Electronic dead-time was modeled for the whole scanner as a function of the prompt coincidence rate. Energy spectra predicted by our simulation were compared to GATE. NEMA NU-2 2001 performance tests were simulated with the new simulation as well as with SimSET and compared to measurements made on a Discovery ST (DST) camera. Results: Errors in simulated spatial resolution (full width at half maximum, FWHM) were 5.5% (6.1%) in 2D (3D) with the new simulation, compared with 42.5% (38.2%) with SimSET. Simulated (measured) scatter fractions were 17.8% (21.3%) in 2D and 45.8% (45.2%) in 3D. Simulated and measured sensitivities agreed within 2.3% in 2D and 3D for all planes, and simulated and acquired count-rate curves (including NEC) were within 12.7% in 2D over the 0-80 kBq/cc range and in 3D over the 0-35 kBq/cc range. The new simulation yielded significantly more realistic singles' and coincidences' spectra, spatial resolution, global sensitivity and lesion
Simulation of the full-core pin-model by JMCT Monte Carlo neutron-photon transport code
Li, D.; Li, G.; Zhang, B.; Shu, L.; Shangguan, D.; Ma, Y.; Hu, Z.
2013-07-01
With the number of cells exceeding a million, the tallies exceeding a hundred million, and the particle histories exceeding ten billion, the simulation of the full-core pin-by-pin model has become a real challenge for computers and computational methods. Moreover, the basic memory footprint of the model exceeds that of a single CPU, so spatial domain and data decomposition must be considered. JMCT (J Monte Carlo Transport code) has successfully completed the simulation of the full-core pin-by-pin model by means of domain decomposition and nested parallel computation. The k-eff and the flux of each cell are obtained.
Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal
2014-05-12
The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We propose an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue are incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo mouse model of stroke. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for the different tissue models. Although adding random noise to the DCS data resulted in variations of αD_B, the mean errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not depend on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with the simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
Al-Subeihi, Ala' A.A.; Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert; van Bladeren, Peter J.; Rietjens, Ivonne M.C.M.; Punt, Ans
2015-03-01
The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions, or by combining kinetic constants obtained for specific isoenzymes with literature-reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in (i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, (ii) P450 2B6-catalyzed epoxidation of methyleugenol, (iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and (iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations, a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions of interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment.
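The percentile-ratio definition of the CSAF is easy to sketch with Monte Carlo sampling; the lognormal enzyme-activity distributions below are illustrative, not the study's methyleugenol kinetic constants.

```python
import random

def csaf(percentile, n=100_000, seed=5):
    """Chemical-specific adjustment factor by Monte Carlo: sample
    interindividual variation in two (hypothetical) lognormal enzyme
    activities, form a toy metabolite-formation score as their product,
    and divide the requested percentile of the population distribution
    by the median (50th percentile).
    """
    rng = random.Random(seed)
    scores = sorted(
        rng.lognormvariate(0.0, 0.5) * rng.lognormvariate(0.0, 0.4)
        for _ in range(n)
    )
    p = scores[int(percentile / 100.0 * n)]
    median = scores[n // 2]
    return p / median
```

With these toy spreads, csaf(90) converges to the analytic lognormal value exp(1.28 × sqrt(0.5² + 0.4²)) ≈ 2.3; in the study the same construction applied to the real kinetic distributions gave 3.2 at the 90th percentile.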
NASA Astrophysics Data System (ADS)
Goldner, Lori
2012-02-01
Fluorescence resonance energy transfer (FRET) is a powerful technique for understanding the structural fluctuations and transformations of RNA, DNA, and proteins. Molecular dynamics (MD) simulations provide a window into the nature of these fluctuations on a different, faster time scale. We use Monte Carlo methods to model and compare FRET data from dye-labeled RNA with what might be predicted from the MD simulation. With a few notable exceptions, the contribution of fluorophore and linker dynamics to these FRET measurements has not been investigated. We include the dynamics of the ground-state dyes and linkers in our study of a 16-mer double-stranded RNA. Water is included explicitly in the simulation. Cyanine dyes are attached at either the 3' or 5' ends with a 3-carbon linker, and differences in labeling schemes are discussed. Work done in collaboration with Peker Milas, Benjamin D. Gamari, and Louis Parrot.
NASA Astrophysics Data System (ADS)
Zheng, Na; Xu, Hai-Bo
2015-10-01
An empirical numerical model that includes nuclear absorption, multiple Coulomb scattering and energy loss is presented for the calculation of transmission through thick objects in high energy proton radiography. In this numerical model the angular distributions are treated as Gaussians in the laboratory frame. A Monte Carlo program based on the Geant4 toolkit was developed and used for high energy proton radiography experiment simulations and verification of the empirical numerical model. The two models are used to calculate the transmission fraction of carbon and lead step-wedges in proton radiography at 24 GeV/c, and to calculate radial transmission of the French Test Object in proton radiography at 24 GeV/c with different angular cuts. It is shown that the results of the two models agree with each other, and an analysis of the slight differences is given. Supported by NSAF (11176001) and Science and Technology Developing Foundation of China Academy of Engineering Physics (2012A0202006)
Björnham, Oscar; Axner, Ove; Andersson, Magnus
2008-04-01
P pili are fimbrial adhesion organelles expressed by uropathogenic Escherichia coli in the upper urinary tract. They constitute a stiff helix-like polymer consisting of a number of subunits joined by head-to-tail bonds. The elongation and retraction properties of individual P pili exposed to strain have been modeled by Monte Carlo (MC) simulations. The simulation model is based upon a three-state energy landscape that deforms under an applied force. Bond opening and closure are modeled by Bell's theory, while the elongation of the linearized part of the pilus is described by a worm-like chain model. The simulations are compared with measurements made by force-measuring optical tweezers. It was found that the simulations can reproduce pili elongation as well as retraction, under both equilibrium and dynamic conditions, including entropic effects. It is shown that the simulations allow for an assessment of various model parameters, e.g. the unfolding force, energy barrier heights, and various distances in the energy landscape, including their stochastic spread, which analytical models are unable to provide. The results demonstrate that MC simulations are useful for modeling the elongation and retraction properties of P pili, and therefore presumably also of other types of pili, exposed to strain and/or stress. MC simulations are particularly suited for the description of helix-like pili, since these have an intricate self-regulating mechanical elongation behavior that makes analytical descriptions non-trivial when dynamic processes are studied, or when additional interactions in the rod or the behavior of the adhesion tip need to be modeled.
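The Bell-theory bond-opening step at the heart of such an MC model can be sketched as follows, with illustrative (not fitted) parameter values.

```python
import math
import random

def bond_open_probability(force, dt, k0=1.0, x_b=0.5e-9, temp=300.0,
                          seed=6, n_trials=100_000):
    """Bell-theory opening step for one head-to-tail bond under force.

    The opening rate grows exponentially with the applied force F:
        k(F) = k0 * exp(F * x_b / (kB * T)),
    where x_b is the distance to the barrier along the pulling direction.
    Over a small MC time step dt the opening probability is
    1 - exp(-k(F) * dt). Returns that probability together with the
    fraction of trial bonds that opened in a stochastic check.
    """
    kB = 1.380649e-23  # J/K
    rate = k0 * math.exp(force * x_b / (kB * temp))
    p_open = 1.0 - math.exp(-rate * dt)
    rng = random.Random(seed)
    opened = sum(rng.random() < p_open for _ in range(n_trials))
    return p_open, opened / n_trials
```

A full pilus model would apply this test to every closed bond at each time step (with a corresponding closure rate for open bonds) and combine it with the worm-like chain force-extension relation for the already-linearized segment.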
NASA Astrophysics Data System (ADS)
Meng, Lingyi; Wang, Dong; Li, Qikai; Yi, Yuanping; Brédas, Jean-Luc; Shuai, Zhigang
2011-03-01
We describe a new dynamic Monte Carlo model to simulate the operation of a polymer-blend solar cell; this model provides major improvements over the one we developed earlier [J. Phys. Chem. B 114, 36 (2010)] by incorporating the Poisson equation and a charge thermoactivation mechanism. The advantage of the present approach is its capacity to deal with a nonuniform electrostatic potential that dynamically depends on the charge distribution. In this way, the imbalance in electron and hole mobilities and the space-charge-induced potential distribution can be treated explicitly. Simulations reproduce well the experimental I-V curve in the dark and the open-circuit voltage under illumination of a polymer-blend solar cell. The dependence of the photovoltaic performance on the difference between electron and hole mobilities is discussed.
On the use of Monte Carlo simulations to model transport of positrons in gases and liquids.
Petrović, Zoran Lj; Marjanović, Srdjan; Dujko, Saša; Banković, Ana; Malović, Gordana; Buckman, Stephen; Garcia, Gustavo; White, Ron; Brunger, Michael
2014-01-01
In this paper we draw a parallel between the swarm method in the physics of ionized gases and the modeling of positrons in radiation therapy and diagnostics. The basic idea is to take advantage of the experience gained in the past with electron swarms and to use it in establishing procedures for modeling positron diagnostics and therapy based on well-established experimental binary collision data. In doing so, we discuss the application of the Monte Carlo technique to positrons in the same manner as used previously for electron swarms; the role of complete cross-section sets (complete in terms of number, momentum, and energy balance, and tested against measured swarm parameters); and the role of benchmarks and how to choose benchmarks for electrons that may be subject to experimental verification. Finally, we show some samples of positron trajectories, together with secondary electrons, that were established solely on the basis of accurate binary cross sections, and also how these may be used in modeling both gas-filled traps and living organisms.
NASA Astrophysics Data System (ADS)
Arifin, P.; Goldys, E.; Tansley, T. L.
1995-08-01
We present a method of simulating electron transport in low-temperature-grown GaAs by the Monte Carlo method. Low-temperature-grown GaAs contains microscopic inclusions of As, and these inhomogeneities render standard Monte Carlo mobility simulations impossible. Our method overcomes this difficulty and allows the quantitative prediction of electron transport on the basis of the principal microscopic material parameters, including the impurity and precipitate concentrations and the precipitate size. The adopted approach involves simulating a single electron trajectory in real space, while the influence of As precipitates on the GaAs matrix is treated in the framework of a Schottky-barrier model. The validity of this approach is verified by evaluating the drift velocity in homogeneous GaAs, where excellent agreement with other workers' results is reached. The drift velocity as a function of electric field in low-temperature-grown GaAs is calculated for a range of As precipitate concentrations. The effect of the compensation ratio on the drift-velocity characteristics is also investigated. It is found that the drift velocity is reduced, and the electric field at which the onset of negative differential mobility occurs increases, as the precipitate concentration increases. Both effects are related to the reduced electron mean free path in the presence of precipitates. Additionally, the comparatively high low-field electron mobilities in this material are explained theoretically.
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
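The core idea above — studying a statistic by drawing repeated random samples from a population whose characteristics are known — can be sketched in a few lines of Python. This is a hypothetical illustration (a check of the Type I error rate of a simple two-sided z-test), not an example taken from the cited work:

```python
import random
import statistics

def empirical_type_i_error(n=30, trials=2000, z_crit=1.96, seed=1):
    """Estimate the Type I error rate of a two-sided z-test for a zero mean
    by repeatedly sampling from a population where H0 is actually true."""
    random.seed(seed)
    rejections = 0
    for _ in range(trials):
        sample = [random.gauss(0.0, 1.0) for _ in range(n)]
        se = statistics.stdev(sample) / n ** 0.5  # standard error of the mean
        if abs(statistics.mean(sample) / se) > z_crit:
            rejections += 1
    return rejections / trials

rate = empirical_type_i_error()
```

Because the population is known exactly, the simulated rejection rate can be compared directly with the nominal significance level, which is the kind of comparison an analytical treatment may not provide in closed form.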
Al-Subeihi, Ala A A; Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert; van Bladeren, Peter J; Rietjens, Ivonne M C M; Punt, Ans
2015-03-01
The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1'-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1'-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1'-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1'-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1'-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1'-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. PMID:25549870
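The percentile-ratio construction of a CSAF described above can be sketched as follows. The lognormal form of the inter-individual variation and its width are illustrative assumptions, not the study's fitted kinetic constants:

```python
import random

def simulate_csaf(trials=10000, seed=42):
    """Monte Carlo sketch of a chemical-specific adjustment factor (CSAF):
    simulate inter-individual variation in metabolite formation, then divide
    upper percentiles by the population median (50th percentile)."""
    random.seed(seed)
    # Hypothetical lognormal variation in net bioactivation; sigma = 0.6
    # is an assumed spread, standing in for the combined enzyme kinetics.
    outcomes = sorted(random.lognormvariate(0.0, 0.6) for _ in range(trials))

    def pct(p):
        # p-th percentile of the sorted simulated population outcomes
        return outcomes[p * trials // 100]

    return pct(90) / pct(50), pct(99) / pct(50)

csaf90, csaf99 = simulate_csaf()
```

As in the study, widening the coverage from 90% to 99% of the population necessarily yields a larger adjustment factor, because the ratio is taken against the same median.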
A relativistic Monte Carlo binary collision model for use in plasma particle simulation codes
Procassini, R.J.; Birdsall, C.K.; Morse, E.C.; Cohen, B.I.
1987-05-14
Particle simulations of plasma physics phenomena employ far fewer particles than the systems being simulated, owing to the limited speed and memory capacity of even the most powerful supercomputers. If the simulation consists of point particles in a gridless domain, the combination of the small number of particles in a Debye sphere and the possibility of zero-impact-parameter, large-angle scattering results in a significant enhancement of fluctuation phenomena such as collisions. Collisional processes in a simulation may also be difficult to resolve because of disparate time scales: a comparison of the relevant physical time scales of the system being simulated usually yields a large range of values. For instance, the grid-cell transit time is usually several orders of magnitude smaller than the 90° scattering time. Much of the physical phenomena of interest in the simulation is due to these long-time-scale collisional processes, but short-time-scale processes (such as particle bounce times in a mirror or tokamak) must be adequately resolved if the plasma dielectric response and the plasma potential are to be accurately determined. This paper outlines the physics and operation of the binary collision model within the electrostatic code and presents the results of computer simulations of velocity-space transport that were run to test the accuracy of the model. Also discussed are the timing statistics for the collision package relative to the other major physics packages in the code, as well as recommendations on the frequency of use of the collision package within the simulation sequence.
Ródenas, J; Burgos, M C; Zarza, I; Gallardo, S
2005-01-01
Simulation of detector calibration using the Monte Carlo method is very convenient. The computational calibration procedure using the MCNP code was validated by comparing results of the simulation with laboratory measurements. The standard source used for this validation was a disc-shaped filter where fission and activation products were deposited. Some discrepancies between the MCNP results and laboratory measurements were attributed to the point source model adopted. In this paper, the standard source has been simulated using both point and surface source models. Results from both models are compared with each other as well as with experimental measurements. Two variables, namely, the collimator diameter and detector-source distance have been considered in the comparison analysis. The disc model is seen to be a better model as expected. However, the point source model is good for large collimator diameter and also when the distance from detector to source increases, although for smaller sizes of the collimator and lower distances a surface source model is necessary. PMID:16604596
Monte Carlo simulated coronary angiograms of realistic anatomy and pathology models
NASA Astrophysics Data System (ADS)
Kyprianou, Iacovos S.; Badal, Andreu; Badano, Aldo; Banh, Diemphuc; Freed, Melanie; Myers, Kyle J.; Thompson, Laura
2007-03-01
We have constructed a fourth generation anthropomorphic phantom which, in addition to the realistic description of the human anatomy, includes a coronary artery disease model. A watertight version of the NURBS-based Cardiac-Torso (NCAT) phantom was generated by converting the individual NURBS surfaces of each organ into closed, manifold and non-self-intersecting tessellated surfaces. The resulting 330 surfaces of the phantom organs and tissues are now comprised of ~5×10 6 triangles whose size depends on the individual organ surface normals. A database of the elemental composition of each organ was generated, and material properties such as density and scattering cross-sections were defined using PENELOPE. A 300 μm resolution model of a heart with 55 coronary vessel segments was constructed by fitting smooth triangular meshes to a high resolution cardiac CT scan we have segmented, and was consequently registered inside the torso model. A coronary artery disease model that uses hemodynamic properties such as blood viscosity and resistivity was used to randomly place plaque within the artery tree. To generate x-ray images of the aforementioned phantom, our group has developed an efficient Monte Carlo radiation transport code based on the subroutine package PENELOPE, which employs an octree spatial data-structure that stores and traverses the phantom triangles. X-ray angiography images were generated under realistic imaging conditions (90 kVp, 10° Wanode spectra with 3 mm Al filtration, ~5×10 11 x-ray source photons, and 10% per volume iodine contrast in the coronaries). The images will be used in an optimization algorithm to select the optimal technique parameters for a variety of imaging tasks.
Saloranta, Tuomo M; Armitage, James M; Haario, Heikki; Naes, Kristoffer; Cousins, Ian T; Barton, David N
2008-01-01
Multimedia environmental fate models are useful tools to investigate the long-term impacts of remediation measures designed to alleviate potential ecological and human health concerns in contaminated areas. Estimating and communicating the uncertainties associated with the model simulations is a critical task for demonstrating the transparency and reliability of the results. The Extended Fourier Amplitude Sensitivity Test(Extended FAST) method for sensitivity analysis and Bayesian Markov chain Monte Carlo (MCMC) method for uncertainty analysis and model calibration have several advantages over methods typically applied for multimedia environmental fate models. Most importantly, the simulation results and their uncertainties can be anchored to the available observations and their uncertainties. We apply these techniques for simulating the historical fate of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the Grenland fjords, Norway, and for predicting the effects of different contaminated sediment remediation (capping) scenarios on the future levels of PCDD/Fs in cod and crab therein. The remediation scenario simulations show that a significant remediation effect can first be seen when significant portions of the contaminated sediment areas are cleaned up, and that increase in capping area leads to both earlier achievement of good fjord status and narrower uncertainty in the predicted timing for this. PMID:18350897
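A minimal sketch of the MCMC calibration idea — anchoring a model parameter and its uncertainty to the available observations — might look like the following. The one-parameter Gaussian observation model is an assumption chosen for illustration, far simpler than the multimedia fate model used in the study:

```python
import math
import random

def metropolis_calibrate(observations, n_steps=5000, step=0.5, seed=0):
    """Minimal Metropolis MCMC sketch: calibrate a single model parameter
    (the mean of a Gaussian observation model with unit error) against data,
    so posterior uncertainty reflects the observations."""
    random.seed(seed)

    def log_post(theta):
        # Flat prior; Gaussian likelihood with unit observation error.
        return -0.5 * sum((y - theta) ** 2 for y in observations)

    theta = 0.0
    chain = []
    for _ in range(n_steps):
        proposal = theta + random.gauss(0.0, step)
        # Accept with probability min(1, posterior ratio)
        if math.log(random.random()) < log_post(proposal) - log_post(theta):
            theta = proposal
        chain.append(theta)
    return chain

chain = metropolis_calibrate([1.8, 2.2, 2.0, 1.9, 2.1])
posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

The same accept/reject loop generalizes to the many-parameter fate-model case; what changes is only the likelihood evaluation, which there requires running the environmental model.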
Morton, April M; McManamay, Ryan A; Nagle, Nicholas N; Piburn, Jesse O; Stewart, Robert N; Surendran Nair, Sujithkumar
2016-01-01
As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for high-resolution, spatially explicit estimates of energy and water demand has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy and water consumption, many are provided at a coarse spatial resolution or rely on techniques which depend on detailed region-specific data sources that are not publicly available for many parts of the U.S. Furthermore, many existing methods do not account for errors in input data sources and may therefore not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more flexible Monte Carlo simulation approach to high-resolution residential and commercial electricity and water consumption modeling that relies primarily on publicly available data sources. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and reflective of uncertainties in model results. Key words: Energy Modeling, Water Modeling, Monte-Carlo Simulation, Uncertainty Quantification Acknowledgment This manuscript has been authored by employees of UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the U.S. Department of Energy. Accordingly, the United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Monte Carlo simulations of two-dimensional Hubbard models with string bond tensor-network states
NASA Astrophysics Data System (ADS)
Song, Jeong-Pil; Wee, Daehyun; Clay, R. T.
2015-03-01
We study charge- and spin-ordered states in the two-dimensional extended Hubbard model on a triangular lattice at 1/3 filling. While the nearest-neighbor Coulomb repulsion V induces charge-ordered states, the competition between the on-site U and nearest-neighbor V interactions leads to quantum phase transitions to an antiferromagnetic spin-ordered phase with honeycomb charge order. To avoid the fermion sign problem and handle frustration, we use quantum Monte Carlo methods with the string-bond tensor-network ansatz for fermionic systems in two dimensions. We determine the phase boundaries of the several spin- and charge-ordered states and show a phase diagram in the plane of the on-site U and the nearest-neighbor V. The numerical accuracy of the method is compared with exact diagonalization results in terms of the size D of the matrices. We also test the use of lattice symmetries to improve the string-bond ansatz. Work at Mississippi State University was supported by the US Department of Energy grant DE-FG02-06ER46315.
Reliability of Monte Carlo simulations in modeling neutron yields from a shielded fission source
NASA Astrophysics Data System (ADS)
McArthur, Matthew S.; Rees, Lawrence B.; Czirr, J. Bart
2016-08-01
Using the combination of a neutron-sensitive 6Li glass scintillator detector with a neutron-insensitive 7Li glass scintillator detector, we are able to make an accurate measurement of the capture rate of fission neutrons on 6Li. We used this detector with a 252Cf neutron source to measure the effects of both non-borated polyethylene and 5% borated polyethylene shielding on detection rates over a range of shielding thicknesses. Both of these measurements were compared with MCNP calculations to determine how well the calculations reproduced the measurements. When the source is highly shielded, the number of interactions experienced by each neutron prior to arriving at the detector is large, so it is important to compare Monte Carlo modeling with actual experimental measurements. MCNP reproduces the data fairly well, but it does generally underestimate detector efficiency both with and without polyethylene shielding. For non-borated polyethylene it underestimates the measured value by an average of 8%. This increases to an average of 11% for borated polyethylene.
Assessment of high-fidelity collision models in the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Weaver, Andrew B.
Advances in computer technology over the decades have allowed more complex physics to be modeled in the DSMC method. Beginning with the first paper on DSMC in 1963, 30,000 collision events per hour were simulated using a simple hard-sphere model. Today, more than 10 billion collision events can be simulated per hour for the same problem. Many new and more physically realistic collision models, such as the Lennard-Jones potential and the forced harmonic oscillator model, have been introduced into DSMC. However, the fact that computer resources are more readily available and that higher-fidelity models have been developed does not necessitate their usage. It is important to understand how such high-fidelity models affect the output quantities of interest in engineering applications. The effects of elastic and inelastic collision models on compressible Couette flow, ground-state atomic oxygen transport properties, and normal shock waves have therefore been investigated. Recommendations for variable soft sphere and Lennard-Jones model parameters are made based on a critical review of recent ab initio calculations and experimental measurements of transport properties.
NASA Astrophysics Data System (ADS)
Rapini, M.; Dias, R. A.; Costa, B. V.
2007-01-01
Ultrathin magnetic films can be modeled as an anisotropic Heisenberg model with long-range dipolar interactions. It is believed that the phase diagram presents three phases: An ordered ferromagnetic phase (I), a phase characterized by a change from out-of-plane to in-plane in the magnetization (II), and a high-temperature paramagnetic phase (III). It is claimed that the border lines from phase I to III and II to III are of second order and from I to II is first order. In the present work we have performed a very careful Monte Carlo simulation of the model. Our results strongly support that the line separating phases II and III is of the BKT type.
NASA Astrophysics Data System (ADS)
Makeev, Alexei G.; Kurkina, Elena S.; Kevrekidis, Ioannis G.
2012-06-01
Kinetic Monte Carlo simulations are used to study the stochastic two-species Lotka-Volterra model on a square lattice. For certain values of the model parameters, the system constitutes an excitable medium: travelling pulses and rotating spiral waves can be excited. Stable solitary pulses travel with constant (modulo stochastic fluctuations) shape and speed along a periodic lattice. The spiral waves observed sometimes persist for hundreds of rotations, but they are ultimately unstable and break up (because of fluctuations and interactions between neighboring fronts), giving rise to complex dynamic behavior in which numerous small spiral waves rotate and interact with each other. Interestingly, travelling pulses and spiral waves can be exhibited by the model even for completely immobile species, owing to the non-local reaction kinetics.
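The event-selection loop underlying such kinetic Monte Carlo simulations can be sketched in its well-mixed (non-spatial) Gillespie form; the lattice version studied above adds spatial structure on top of the same propensity/selection/time-advance cycle. The rate constants below are illustrative assumptions:

```python
import math
import random

def gillespie_lotka_volterra(prey=100, pred=50, t_max=5.0, seed=7):
    """Well-mixed kinetic Monte Carlo (Gillespie) sketch of the stochastic
    Lotka-Volterra model: pick each reaction with probability proportional
    to its rate, and advance time by an exponentially distributed step."""
    random.seed(seed)
    k_birth, k_pred, k_death = 1.0, 0.005, 0.5  # illustrative rate constants
    t = 0.0
    while t < t_max and prey > 0 and pred > 0:
        rates = [k_birth * prey, k_pred * prey * pred, k_death * pred]
        total = sum(rates)
        t += -math.log(random.random()) / total  # exponential waiting time
        r = random.random() * total
        if r < rates[0]:
            prey += 1                 # prey reproduction
        elif r < rates[0] + rates[1]:
            prey -= 1; pred += 1      # predation event
        else:
            pred -= 1                 # predator death
    return prey, pred

prey, pred = gillespie_lotka_volterra()
```

The spatial lattice model replaces the global populations with per-site occupancies and restricts predation to neighboring sites, which is what allows pulses and spiral waves to form.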
Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419
Hulett, David T.; Nosbisch, Michael R.
2012-07-01
- Good-quality risk data, usually collected in risk interviews of the project team, management and others knowledgeable in the risks of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method, which rests on the fundamental principle that identifiable risks drive overall cost and schedule risk.
- A Monte Carlo simulation software program that can simulate schedule risk, burn-rate risk and time-independent resource risk.
The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. We can also derive the probabilistic cash flow for comparison with the time-phased project budget. Finally, the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus risk mitigation efforts. If the cost and schedule estimates, including contingency reserves, are not acceptable to the project stakeholders, the project team should conduct risk mitigation workshops and studies, decide which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. Finally, it is recommended that the contingency reserves of cost and time, calculated at a level that represents an acceptable degree of certainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserves of time and cost that are the main results of this analysis apply if that plan is to be followed.
Of course project
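The simulation loop behind the cost-time scatter ('football chart') and the percentile-based contingency can be sketched as follows; the single risk driver, its probability and its impact range are illustrative assumptions, not values from any real project:

```python
import random

def simulate_project(trials=5000, seed=3):
    """Risk-driver Monte Carlo sketch: an identified risk multiplies the
    duration and cost of the work it drives; simulating cost and time
    together yields the cost-time pairs behind the 'football chart',
    and sorting the costs yields P-50/P-80 contingency levels."""
    random.seed(seed)
    base_duration, base_cost = 100.0, 1000.0  # days, k$ (illustrative)
    pairs = []
    for _ in range(trials):
        factor = 1.0
        # Risk driver: 40% chance of occurring, impact multiplier 1.0-1.3
        if random.random() < 0.4:
            factor *= random.uniform(1.0, 1.3)
        duration = base_duration * factor
        cost = base_cost * factor + random.uniform(-50, 50)  # burn-rate noise
        pairs.append((duration, cost))
    costs = sorted(c for _, c in pairs)
    p50, p80 = costs[int(0.5 * trials)], costs[int(0.8 * trials)]
    return p50, p80

p50, p80 = simulate_project()
```

The gap between the P-80 cost and the deterministic estimate is the contingency reserve this method recommends; re-running the simulation after removing or mitigating a risk driver quantifies the value of that mitigation.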
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1992-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite-differencing schemes. A grid-dependent Monte Carlo scheme following J. Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely α_j + γ_j = α_(j-1) + γ_(j+1). A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme reduces the error considerably.
Monte Carlo simulation of the enantioseparation process
NASA Astrophysics Data System (ADS)
Bustos, V. A.; Acosta, G.; Gomez, M. R.; Pereyra, V. D.
2012-09-01
By means of Monte Carlo simulation, a study of enantioseparation by capillary electrophoresis has been carried out. A simplified system was considered, consisting of two enantiomers R and S and a chiral selector C, which reacts with the enantiomers to form complexes RC (SC). The dependence of Δμ (enantioseparation) on the concentration of the chiral selector and on temperature has been analyzed by simulation. The effects of the binding constant and the charge of the complexes are also analyzed. The results are qualitatively satisfactory, despite the simplicity of the model.
Leasing policy and the rate of petroleum development: analysis with a Monte Carlo simulation model
Abbey, D; Bivins, R
1982-03-01
The study has two objectives: first, to consider whether alternative leasing systems are desirable to speed the rate of oil and gas exploration and development in frontier basins; second, to evaluate the Petroleum Activity and Decision Simulation model developed by the US Department of the Interior for economic and land-use planning and for policy analysis. Analysis of the model involved structural variation of the geology, exploration, and discovery submodels, as well as a formal sensitivity analysis using the Latin hypercube sampling method. We report the rate of exploration, discovery, and petroleum output under a variety of price, leasing policy, and tax regimes.
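The Latin hypercube sampling used in such sensitivity analyses can be sketched as follows. This is a generic implementation on the unit hypercube, not the simulation model's actual inputs: each variable's range is stratified into as many bins as samples, one point is drawn per bin, and the bins are shuffled independently per variable so every sample is a distinct combination of strata:

```python
import random

def latin_hypercube(n_samples, n_vars, seed=11):
    """Latin hypercube sampling sketch on [0, 1)^n_vars: stratified draws
    guarantee that each variable's full range is covered even with few
    samples, unlike plain random sampling."""
    random.seed(seed)
    samples = [[0.0] * n_vars for _ in range(n_samples)]
    for v in range(n_vars):
        bins = list(range(n_samples))
        random.shuffle(bins)           # independent bin permutation per variable
        for i in range(n_samples):
            # one uniform draw inside each of the n_samples equal-width bins
            samples[i][v] = (bins[i] + random.random()) / n_samples
    return samples

lhs = latin_hypercube(10, 3)
```

In practice each unit-interval coordinate would then be mapped through the inverse CDF of the corresponding input's distribution before running the model.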
Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Hematiyan, Mohammad Rahim; Koontz, Craig; Meigooni, Ali S
2015-12-01
Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during irradiation techniques such as AccuBoost(®) brachytherapy. However, there is no systematic method for determining the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between compressed and uncompressed states was investigated, and a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six women volunteers. The mechanical properties were modeled using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 Newton compression force (43% contraction in the loading direction and 37% expansion in the orthogonal direction) were determined. Finally, a comparison of the experimental and simulated data showed agreement within 11.5% ± 5.9%. PMID:26572554
Chen, Dongsheng; Zeng, Nan; Wang, Yunfei; He, Honghui; Tuchin, Valery V; Ma, Hui
2016-08-01
We conducted Monte Carlo simulations based on anisotropic sclera-mimicking models to examine the polarization features in Mueller matrix polar decomposition (MMPD) parameters during the refractive index matching process, which is one of the major mechanisms of optical clearing. In a preliminary attempt, by changing the parameters of the models, wavelengths, and detection geometries, we demonstrate how the depolarization coefficient and retardance vary during the refractive index matching process and explain the polarization features using the average value and standard deviation of scattering numbers of the detected photons. We also study the depth-resolved polarization features during the gradual progression of the refractive index matching process. The results above indicate that the refractive index matching process increases the depth of polarization measurements and may lead to higher contrast between tissues of different anisotropies in deeper layers. MMPD-derived polarization parameters can characterize the refractive index matching process qualitatively. PMID:27240298
Weijs, Liesbeth; Roach, Anthony C; Yang, Raymond S H; McDougall, Robin; Lyons, Michael; Housand, Conrad; Tibax, Detlef; Manning, Therese; Chapman, John; Edge, Katelyn; Covaci, Adrian; Blust, Ronny
2014-01-01
Physiologically based pharmacokinetic (PBPK) models for wild animal populations such as marine mammals typically have a high degree of model uncertainty and variability due to the scarcity of information and the embryonic nature of this field. Parameter values used in marine mammal models are usually taken from other mammalian species (e.g. rats or mice) and might not be entirely suitable to properly explain the kinetics of pollutants in marine mammals. Therefore, several parameters for a PBPK model for the bioaccumulation and pharmacokinetics of PCB 153 in long-finned pilot whales were estimated in the present study using the Bayesian approach executed with Markov chain Monte Carlo (MCMC) simulations. This method uses 'prior' information on the parameters, either from the literature or from previous model runs. The advantage is that this method uses such 'prior' parameters to calculate probability distributions and thereby determine 'posterior' values that best explain the field observations. Those field observations, or datasets, were PCB 153 concentrations in blubber of long-finned pilot whales from Sandy Cape and Stanley, Tasmania, Australia. The model predictions showed an overall decrease in PCB 153 levels in blubber over the lifetime of the pilot whales. All parameters from the Sandy Cape model were updated using the Stanley dataset, except for the concentration of PCB 153 in the milk. The model presented here is a promising preliminary step toward PBPK modeling in long-finned pilot whales and provides a basis for non-invasive studies of these protected marine mammals. PMID:24080004
NASA Astrophysics Data System (ADS)
Prasanth, P. S.; Kakkassery, Jose K.; Vijayakumar, R.
2012-04-01
A modified phenomenological model is constructed for the simulation of rarefied flows of polyatomic non-polar gas molecules by the direct simulation Monte Carlo (DSMC) method. This variable hard sphere-based model employs a constant rotational collision number, but all its collisions are inelastic in nature, while the correct macroscopic relaxation rate is maintained. In equilibrium conditions, there is equipartition of energy between the rotational and translational modes, and the model satisfies the principle of reciprocity, or detailed balancing. The present model is applicable for moderate temperatures at which the molecules are in their vibrational ground state. For verification, the model is applied to the DSMC simulation of the translational and rotational energy distributions in nitrogen gas at equilibrium, and the results are compared with their corresponding Maxwellian distributions. Next, the Couette flow, the temperature jump and the Rayleigh flow are simulated; the viscosity and thermal conductivity coefficients of nitrogen are numerically estimated and compared with experimentally measured values. The model is further applied to the simulation of the rotational relaxation of nitrogen through low- and high-Mach-number normal shock waves in a novel way. In all cases, the results are found to be in good agreement with theoretically expected and experimentally observed values. It is concluded that the inelastic collision of polyatomic molecules can be predicted well by employing the constructed variable hard sphere (VHS)-based collision model.
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. In addition, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.
Development of Monte Carlo Capability for Orion Parachute Simulations
NASA Technical Reports Server (NTRS)
Moore, James W.
2011-01-01
Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. The Monte Carlo tool described in
Cragnell, Carolina; Durand, Dominique; Cabane, Bernard; Skepö, Marie
2016-06-01
Monte Carlo simulations and coarse-grained modeling have been used to analyze Histatin 5, an unstructured short cationic salivary peptide known to have anticandidal properties. The calculated scattering functions have been compared with intensity curves and the distance distribution function P(r) obtained from small angle X-ray scattering (SAXS), at both high and low salt concentrations. The aim was to achieve a molecular understanding and physico-chemical insight into the obtained SAXS results and to gain information about the conformational changes of Histatin 5 due to altering salt content, charge distribution, and net charge. From a modeling perspective, the accuracy of the electrostatic interactions is of special interest. The coarse-grained model used was based on the primitive model, in which charged hard spheres differing in charge and in size represent the ionic particles, and the solvent enters the model only through its relative permittivity. The Hamiltonian of the model comprises three different contributions: (i) excluded volume, (ii) electrostatic, and (iii) van der Waals interactions. Even though the model can be considered crude, omitting all atomistic details, good correspondence is obtained with the experimental results. Proteins 2016; 84:777-791. © 2016 Wiley Periodicals, Inc. PMID:26914439
McCreddin, A; Alam, M S; McNabola, A
2015-01-01
An experimental assessment of personal exposure to PM10 in 59 office workers was carried out in Dublin, Ireland. 255 samples of 24-h personal exposure were collected in real time over a 28 month period. A series of modelling techniques were subsequently assessed for their ability to predict 24-h personal exposure to PM10. Artificial neural network modelling, Monte Carlo simulation and time-activity based models were developed and compared. The results of the investigation showed that using the Monte Carlo technique to randomly select concentrations from statistical distributions of exposure concentrations in typical microenvironments encountered by office workers produced the most accurate results, based on 3 statistical measures of model performance. The Monte Carlo simulation technique was also shown to have the greatest potential utility over the other techniques, in terms of predicting personal exposure without the need for further monitoring data. Over the 28 month period only a very weak correlation was found between background air quality and personal exposure measurements, highlighting the need for accurate models of personal exposure in epidemiological studies. PMID:25260856
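The microenvironment-sampling technique that performed best in the study above can be sketched in a few lines; the microenvironments, time budgets, and lognormal concentration parameters below are illustrative assumptions, not values from the study.

```python
import random
import statistics

random.seed(42)

# Hypothetical microenvironments: hours per day spent there, and lognormal
# parameters (mu, sigma) of the PM10 concentration in ug/m^3.  All values
# are invented for illustration.
MICROENVS = [
    ("home",    14.0, 3.0, 0.5),
    ("office",   8.0, 2.7, 0.4),
    ("commute",  2.0, 3.5, 0.6),
]

def simulate_day():
    """One simulated 24-h time-weighted-average personal PM10 exposure."""
    total = 0.0
    for _name, hours, mu, sigma in MICROENVS:
        conc = random.lognormvariate(mu, sigma)  # random concentration draw
        total += conc * hours
    return total / 24.0

exposures = [simulate_day() for _ in range(10000)]
print(statistics.mean(exposures))
```

Each simulated day draws one concentration per microenvironment and time-weights the draws into a 24-h average; repeating this many times yields the predicted exposure distribution without further monitoring data.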
Okamoto, Y
1994-04-01
Monte Carlo simulated annealing is applied to the tertiary structure prediction of a 17-residue synthetic peptide, which is known by experiment to exhibit high helical content at low pH. Two dielectric models are considered: a sigmoidal distance-dependent dielectric function and a constant dielectric function (epsilon = 2). Starting from completely random initial conformations, our simulations for both dielectric models at low pH gave many helical conformations. The obtained low-energy conformations are compared with the nuclear Overhauser effect spectroscopy cross-peak data for both main chain and side chains, and it is shown that the results for the sigmoidal dielectric function are in remarkable agreement with the experimental data. The results predict the existence of two disjoint helices around residues 5-9 and 11-16, while NMR experiments imply significant alpha-helix content between residues 5 and 14. Simulations at high pH, on the other hand, hardly gave any helical conformations, which is also in accord with the experiment. These findings indicate that when side chains are charged, electrostatic interactions due to these charges play a major role in helix stability. Our results are compared with previous 500 ps molecular dynamics simulations of the same peptide. It is argued that simulated annealing is superior to molecular dynamics in two respects: (1) direct folding of an alpha-helix from completely random initial conformations is possible for the former, whereas only unfolding of an alpha-helix can be studied by the latter; (2) while both methods predict high helix content for low pH, the results for high pH agree with experiment (low helix content) only for the former method. PMID:8186363
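As a minimal sketch of Monte Carlo simulated annealing, the following uses a toy one-dimensional energy function in place of the real peptide force field; the energy function, cooling schedule, and step size are all assumptions for illustration.

```python
import math
import random

random.seed(0)

def energy(phi):
    # Toy 1-D "conformational" energy with several local minima; a stand-in
    # for the atomistic peptide energy used in the actual study.
    return math.cos(3.0 * phi) + 0.5 * math.cos(phi) + 0.1 * phi ** 2

def simulated_annealing(steps=20000, t_start=5.0, t_end=0.01):
    phi = random.uniform(-math.pi, math.pi)   # random initial "conformation"
    e = energy(phi)
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / steps)  # exponential cooling
        trial = phi + random.gauss(0.0, 0.3)            # local trial move
        e_trial = energy(trial)
        # Metropolis criterion at the current annealing temperature
        if e_trial < e or random.random() < math.exp(-(e_trial - e) / t):
            phi, e = trial, e_trial
    return phi, e

phi_min, e_min = simulated_annealing()
print(phi_min, e_min)
```

Gradually lowering the temperature lets the Metropolis walk first escape local minima and then settle into a low-energy conformation, which is why direct folding from random starts is feasible.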
NASA Astrophysics Data System (ADS)
Sinha, Indrajit; Mukherjee, Ashim K.
2014-03-01
The oxidation of CO on Pt-group metal surfaces has attracted widespread attention for a long time due to its interesting oscillatory kinetics and spatiotemporal behavior. The use of STM in conjunction with other experimental data has confirmed the validity of the surface reconstruction (SR) model under low pressure and of the more recent surface oxide (SO) model, which is possible under sub-atmospheric pressure conditions [1]. In the SR model the surface is periodically reconstructed below a certain low critical CO coverage and this reconstruction is lifted above a second, higher critical CO coverage. Alternatively, the SO model proposes periodic switching between a low-reactivity metallic surface and a high-reactivity oxide surface. Here we present an overview of our recent kinetic Monte Carlo (KMC) simulation studies on the oscillatory kinetics of surface-catalyzed CO oxidation. Different modifications of the lattice-gas Ziff-Gulari-Barshad (ZGB) model have been utilized or proposed for this purpose. First we present the effect of desorption on the ZGB reactive-to-poisoned irreversible phase transition in the SR model. Next we discuss our recent research on KMC simulation of the SO model. The ZGB framework is utilized to propose a new model incorporating not only the standard Langmuir-Hinshelwood (LH) mechanism, but also introducing the Mars-van Krevelen (MvK) mechanism for the surface oxide phase [5]. Phase diagrams, which are plots of long-time averages of various oscillating quantities against the normalized CO pressure, show two or three transitions depending on the CO-coverage critical threshold (CT) value beyond which all adsorbed oxygen atoms are converted to surface oxide.
Lee, C; Lin, H; Chao, T; Hsiao, I; Chuang, K
2015-06-15
Purpose: A predicted-PET-image approach based on analytical filtering for proton range verification has been successfully developed and validated using FLUKA Monte Carlo (MC) codes and phantom measurements. The purpose of this study is to validate the effectiveness of the analytical filtering model for proton range verification against GATE/GEANT4 Monte Carlo simulation codes. Methods: We performed two experiments to validate the β+-isotope predictions of the analytical model against GATE/GEANT4 simulations. The first experiment evaluates the accuracy of the predicted β+-yields as a function of irradiated proton energy. In the second experiment, we simulate homogeneous phantoms of different materials irradiated by a mono-energetic pencil-like proton beam. The filtered β+-yield distributions from the analytical model are compared with the MC-simulated β+-yields in the proximal and distal fall-off regions. Results: We compared the filtered and MC-simulated β+-yield distributions under different conditions. First, we found that the analytical filtering can be applied over the whole range of therapeutic energies. Second, the range differences between the filtered and MC-simulated β+-yields in the distal fall-off region are within 1.5 mm for all materials used. These findings validate the usefulness of the analytical filtering model for range verification of proton therapy in GATE Monte Carlo simulations. In addition, there is a larger discrepancy between the filtered prediction and the MC-simulated β+-yields using the GATE code, especially in the proximal region. This discrepancy might result from the absence of well-established theoretical models for predicting the nuclear interactions. Conclusion: Despite the large discrepancies observed between the MC-simulated and predicted β+-yield distributions, the study proves the effectiveness of the analytical filtering model for proton range verification using
Baran, Timothy M.; Foster, Thomas H.
2011-01-01
We present a new Monte Carlo model of cylindrical diffusing fibers that is implemented with a graphics processing unit. Unlike previously published models that approximate the diffuser as a linear array of point sources, this model is based on the construction of these fibers. This allows for accurate determination of fluence distributions and modeling of fluorescence generation and collection. We demonstrate that our model generates fluence profiles similar to a linear array of point sources, but reveals axially heterogeneous fluorescence detection. With axially homogeneous excitation fluence, approximately 90% of detected fluorescence is collected by the proximal third of the diffuser for μs'/μa = 8 in the tissue and 70 to 88% is collected in this region for μs'/μa = 80. Increased fluorescence detection by the distal end of the diffuser relative to the center section is also demonstrated. Validation of these results was performed by creating phantoms consisting of layered fluorescent regions. Diffusers were inserted into these layered phantoms and fluorescence spectra were collected. Fits to these spectra show quantitative agreement between simulated fluorescence collection sensitivities and experimental results. These results will be applicable to the use of diffusers as detectors for dosimetry in interstitial photodynamic therapy. PMID:21895311
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
Li, Jun; Calo, Victor M.
2013-09-15
We present a single-particle Lennard–Jones (L-J) model for CO{sub 2} and N{sub 2}. Simplified L-J models for other small polyatomic molecules can be obtained following the methodology described herein. The phase-coexistence diagrams of single-component systems computed using the proposed single-particle models for CO{sub 2} and N{sub 2} agree well with experimental data over a wide range of temperatures. These diagrams are computed using the Markov chain Monte Carlo method based on the Gibbs-NVT ensemble. This good agreement validates the proposed simplified models. That is, with properly selected parameters, the single-particle models have similar accuracy in predicting gas-phase properties as more complex, state-of-the-art molecular models. To further test these single-particle models, three binary mixtures of CH{sub 4}, CO{sub 2} and N{sub 2} are studied using a Gibbs-NPT ensemble. These results are compared against experimental data over a wide range of pressures. The single-particle model has similar accuracy in the gas phase as traditional models, although its deviation in the liquid phase is greater. Since the single-particle model reduces the particle number and avoids the time-consuming Ewald summation used to evaluate Coulomb interactions, the proposed model improves the computational efficiency significantly, particularly in the case of high liquid density, where the acceptance rate of the particle-swap trial move increases. We compare, at constant temperature and pressure, the Gibbs-NPT and Gibbs-NVT ensembles to analyze their performance differences and the consistency of their results. As theoretically predicted, the agreement between the simulations implies that Gibbs-NVT can be used to validate Gibbs-NPT predictions when experimental data are not available.
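For reference, the Lennard–Jones pair energy that underlies such single-particle models, in reduced units; in a single-particle model one such L-J site stands in for the whole molecule:

```python
import math

def lj_energy(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The potential well sits at r = 2**(1/6)*sigma with depth -epsilon.
r_min = 2.0 ** (1.0 / 6.0)
print(lj_energy(r_min))  # approximately -1.0
```

Fitting epsilon and sigma to reproduce phase-coexistence data is what gives the single-site model its accuracy without Coulomb terms or Ewald summation.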
Jones, Stuart; Lacey, Peter; Walshe, Terry
2009-04-01
Lake Toolibin, an ephemeral lake in the agricultural zone of Western Australia, is under threat from secondary salinity due to land clearance throughout the catchment. The lake is extensively covered with native vegetation and is a Ramsar listed wetland, being one of the few remaining significant migratory bird habitats in the region. Currently, inflow with salinity greater than 1000 mg/L TDS is diverted from the lake in an effort to protect sensitive lakebed vegetation. However, this conservative threshold compromises the frequency and extent of lake inundation, which is essential for bird breeding. It is speculated that relaxing the threshold to 5000 mg/L may pose negligible additional risk to the condition of lakebed vegetation. To characterise the magnitude of improvement in the provision of bird breeding habitat that might be generated by relaxing the threshold, a dynamic water and salt balance model of the lake was developed and implemented using Monte Carlo simulation. Results from best estimate model inputs indicate that relaxation of the threshold increases the likelihood of satisfying habitat requirements by a factor of 9.7. A second-order Monte Carlo analysis incorporating incertitude generated plausible bounds of [2.6, 37.5] around the best estimate for the relative likelihood of satisfying habitat requirements. Parameter-specific sensitivity analyses suggest the availability of habitat is most sensitive to pan evaporation, lower than expected inflow volume, and higher than expected inflow salt concentration. The characterisation of uncertainty associated with environmental variation and incertitude allows managers to make informed risk-weighted decisions. PMID:19114292
NASA Astrophysics Data System (ADS)
Stamenkovic, Dragan D.; Popovic, Vladimir M.
2015-02-01
Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper a combination free-replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on data from tests of light bulbs under various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in Monte Carlo simulation to predict the times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In this way, the manufacturer can lower the costs and increase the profit.
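The Monte Carlo step of such an analysis can be sketched as follows: sample times to failure from a fitted reliability distribution and average the policy's per-unit cost. The Weibull parameters, warranty periods, and price below are invented for illustration (the paper obtains its reliability parameters from the neural network model).

```python
import random

random.seed(1)

# Illustrative assumptions only; none of these numbers come from the paper.
SHAPE, SCALE = 1.8, 1500.0        # Weibull shape and scale, in hours
W_FREE, W_TOTAL = 500.0, 1500.0   # free-replacement period / total warranty
PRICE = 10.0                      # unit price

def warranty_cost(t):
    """Manufacturer's cost for a unit failing at time t under a combined
    free-replacement (before W_FREE) and pro-rata (before W_TOTAL) policy."""
    if t < W_FREE:
        return PRICE
    if t < W_TOTAL:
        return PRICE * (W_TOTAL - t) / (W_TOTAL - W_FREE)
    return 0.0

n = 100_000
# random.weibullvariate(alpha, beta): alpha is the scale, beta the shape.
costs = [warranty_cost(random.weibullvariate(SCALE, SHAPE)) for _ in range(n)]
print(sum(costs) / n)  # expected warranty cost per unit sold
```

Re-running this with reliability parameters predicted for different operating conditions gives the cost comparison on which the policy choice rests.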
Zhou, X. W.; Yang, N. Y. C.
2014-03-14
Electronic properties of semiconductor devices are sensitive to defects such as second-phase precipitates, grain sizes, and voids. These defects can evolve over time, especially in oxidizing environments, and it is therefore important to understand the resulting aging behavior for the reliable application of devices. In this paper, we propose a kinetic Monte Carlo framework capable of simultaneously simulating the evolution of second-phase precipitates, grain sizes, and voids in complicated systems involving many species, including oxygen. This kinetic Monte Carlo model calculates the energy barriers of various events based directly on experimental data. As a first step of our model implementation, we incorporate the second-phase formation module in the parallel kinetic Monte Carlo code SPPARKS. Selected aging simulations are performed to examine the formation of second-phase precipitates at the electroplated Au/Bi{sub 2}Te{sub 3} interface under oxygen and oxygen-free environments, and the results are compared with the corresponding experiments.
A Monte Carlo pencil beam scanning model for proton treatment plan simulation using GATE/GEANT4
NASA Astrophysics Data System (ADS)
Grevillot, L.; Bertrand, D.; Dessy, F.; Freud, N.; Sarrut, D.
2011-08-01
This work proposes a generic method for modeling scanned ion beam delivery systems, without simulation of the treatment nozzle and based exclusively on beam data library (BDL) measurements required for treatment planning systems (TPS). To this aim, new tools dedicated to treatment plan simulation were implemented in the Gate Monte Carlo platform. The method was applied to a dedicated nozzle from IBA for proton pencil beam scanning delivery. Optical and energy parameters of the system were modeled using a set of proton depth-dose profiles and spot sizes measured at 27 therapeutic energies. For further validation of the beam model, specific 2D and 3D plans were produced and then measured with appropriate dosimetric tools. Dose contributions from secondary particles produced by nuclear interactions were also investigated using field size factor experiments. Pristine Bragg peaks were reproduced with 0.7 mm range and 0.2 mm spot size accuracy. A 32 cm range spread-out Bragg peak with 10 cm modulation was reproduced with 0.8 mm range accuracy and a maximum point-to-point dose difference of less than 2%. A 2D test pattern consisting of a combination of homogeneous and high-gradient dose regions passed a 2%/2 mm gamma index comparison for 97% of the points. In conclusion, the generic modeling method proposed for scanned ion beam delivery systems was applicable to an IBA proton therapy system. The key advantage of the method is that it only requires BDL measurements of the system. The validation tests performed so far demonstrated that the beam model achieves clinical performance, paving the way for further studies toward TPS benchmarking. The method involves new sources that are available in the new Gate release V6.1 and could be further applied to other particle therapy systems delivering protons or other types of ions like carbon.
NASA Astrophysics Data System (ADS)
Priezzhev, Alexander V.; Kirillin, Mikhail Y.; Lopatin, Vladimir V.
2002-05-01
Using angle-resolved Monte-Carlo simulation we obtained indicatrices of light scattering from a whole blood layer (a suspension of erythrocytes at physiological hematocrit) for different wavelengths (514 and 633 nm) of incident light. We considered the problems of conformity between the parameters describing the model medium and the real investigated medium under experimental conditions (shape and size of particles, refractive indices of particles and suspension, hematocrit). The anisotropy factor was numerically determined for various shapes of model erythrocytes: equivolume spheres, chaotically oriented spheroids with axial ratio (epsilon) = 0.25, and bi-concave disks. For the calculations we used different approaches: the exact Mie theory for spheres; a hybrid approximation, based on the anomalous diffraction approach, for spheroids; and the geometrical optics approximation for bi-concave disks and spheroids. The best fit to the experimental results obtained at (lambda) = 514 nm was provided by the calculations for the layer of erythrocytes modeled by chaotically oriented spheroids, whose phase functions were calculated by the geometrical optics approximation (taking into account Fraunhofer diffraction). We therefore conclude that the average shape of an erythrocyte in suspension is closest to a spheroid with axial ratio 0.25.
Titt, U; Sahoo, N; Ding, X; Zheng, Y; Newhauser, W D; Zhu, X R; Polf, J C; Gillin, M T; Mohan, R
2014-01-01
In recent years, the Monte Carlo method has been used in a large number of research studies in radiation therapy. For applications such as treatment planning, it is essential to validate the dosimetric accuracy of the Monte Carlo simulations in heterogeneous media. The AAPM Report No. 105 addresses issues concerning clinical implementation of Monte Carlo based treatment planning for photon and electron beams; however, for proton-therapy planning, such guidance is not yet available. Here we present the results of our validation of the Monte Carlo model of the double scattering system used at our Proton Therapy Center in Houston. In this study, we compared Monte Carlo simulated depth doses and lateral profiles to measured data for a multitude of beam parameters. We varied simulated proton energies and widths of the spread-out Bragg peaks, and compared them to measurements obtained during the commissioning phase of the Proton Therapy Center in Houston. Of 191 simulated data sets, 189 agreed with measured data sets to within 3% of the maximum dose difference and within 3 mm of the maximum range or penumbra size difference. The two simulated data sets that did not agree with the measured data sets were in the distal falloff of the measured dose distribution, where large dose gradients potentially produce large differences on the basis of minute changes in the beam steering. Hence, the Monte Carlo models of medium- and large-size double scattering proton-therapy nozzles were valid for proton beams in the 100 MeV–250 MeV interval. PMID:18670050
NASA Astrophysics Data System (ADS)
Antanasijević, Davor; Pocajt, Viktor; Perić-Grujić, Aleksandra; Ristić, Mirjana
2014-11-01
This paper describes the training, validation, testing and uncertainty analysis of general regression neural network (GRNN) models for the forecasting of dissolved oxygen (DO) in the Danube River. The main objectives of this work were to determine the optimum data normalization and input selection techniques, the relative importance of uncertainty in different input variables, and the uncertainty of the model results using the Monte Carlo simulation (MCS) technique. Min-max, median, z-score, sigmoid and tanh were validated as normalization techniques, whilst the variance inflation factor, correlation analysis and a genetic algorithm were tested as input selection techniques. As inputs, the GRNN models used 19 water quality variables, measured in the river water each month at 17 different sites over a period of 9 years. The best results were obtained using min-max normalized data and input selection based on the correlation between DO and the dependent variables, which provided the most accurate GRNN model with the smallest number of inputs: temperature, pH, HCO3-, SO42-, NO3-N, hardness, Na, Cl-, conductivity and alkalinity. The results show that the correlation coefficient between measured and predicted DO values is 0.85. The inputs with the greatest effect on the GRNN model (arranged in descending order) were T, pH, HCO3-, SO42- and NO3-N. Of all inputs, the variability of temperature had the greatest influence on the variability of DO content in the river body, with the DO decreasing at a rate similar to the theoretical DO decrease rate relating to temperature. The uncertainty analysis of the model results demonstrates that the GRNN can effectively forecast the DO content, since the distribution of the model results is very similar to the corresponding distribution of the real data.
NASA Astrophysics Data System (ADS)
Koda, Jun; Shapiro, P. R.
2007-12-01
Self-interacting dark matter (SIDM) has been proposed to solve the cuspy core problem of dark matter halos in standard CDM. There are two ways to investigate the effect of the 2-body, non-gravitational, elastic collisions of SIDM: Monte Carlo N-body simulation and a conducting fluid model. The former is a gravitational N-body simulation with a Monte Carlo algorithm for the SIDM scattering that changes the direction of N-body particles randomly according to a given scattering cross section. The latter is a system of fluid conservation equations with a thermal conduction term that describes the collisional effect, originally invented to describe the gravothermal collapse of globular clusters. Our previous work found a significant disagreement in the strength of collisionality required to solve the cuspy core problem. However, the two methods have not been properly tested against each other. Here, we make direct comparisons between Monte Carlo N-body simulations and analytic and numerical solutions of the conducting fluid (gaseous) model, for various isolated self-interacting dark matter halos. The N-body simulations reproduce the analytical self-similar solution of gravothermal collapse in the fluid model when one free parameter, the coefficient of heat conduction C, is chosen to be 0.75. The gravothermal collapse results of the simulations agree with our 1D numerical hydro solutions of the fluid model to within 20% for other initial conditions, including the Plummer model, Hernquist profile and NFW profile. In conclusion, the conducting fluid model is in reasonably good agreement with the Monte Carlo simulations for isolated halos. We will pursue the origin of the reported disagreement between the two methods in a cosmological environment by comparing new N-body simulations with fully cosmological initial conditions.
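The Monte Carlo scattering step can be sketched as below. This simplified version redirects one particle's velocity isotropically with probability rho*(sigma/m)*v*dt per step; a full SIDM implementation scatters pairs of neighboring particles so that momentum is conserved. All names and parameter values here are illustrative.

```python
import math
import random

random.seed(3)

def scatter_probability(rho, sigma_m, v_rel, dt):
    """P = rho * (sigma/m) * v_rel * dt: probability that a particle scatters
    during one time step (rho: local density, sigma/m: cross section per mass)."""
    return rho * sigma_m * v_rel * dt

def random_unit_vector():
    """Isotropic random direction on the unit sphere."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def maybe_scatter(v, rho, sigma_m, dt):
    """Elastically redirect the velocity with the SIDM collision probability;
    the speed is preserved, only the direction is randomized."""
    speed = math.sqrt(sum(c * c for c in v))
    if random.random() < scatter_probability(rho, sigma_m, speed, dt):
        return tuple(speed * c for c in random_unit_vector())
    return v
```

Calling `maybe_scatter` once per particle per step, between ordinary gravity kicks, is the essential structure of the Monte Carlo N-body approach compared against the fluid model above.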
Lindoy, Lachlan P; Kolmann, Stephen J; D'Arcy, Jordan H; Crittenden, Deborah L; Jordan, Meredith J T
2015-11-21
Finite temperature quantum and anharmonic effects are studied in H2-Li(+)-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H2. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H2 molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔUads, and enthalpy, ΔHads, for H2 adsorption onto Li(+)-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling: coupling between the intermolecular degrees of freedom becomes less important as temperature increases, whereas anharmonicity becomes more important. The most anharmonic motions in H2-Li(+)-benzene are the "helicopter" and "ferris wheel" H2 rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔUads and ΔHads are -13.3 ± 0.1 and -14.5 ± 0.1 kJ mol(-1), respectively. PMID:26590532
NASA Astrophysics Data System (ADS)
Fischer, R.; Richardi, J.; Fries, P. H.; Krienke, H.
2002-11-01
Structural properties and energies of solvation are simulated for alkali and halide ions. The solvation structure is discussed in terms of various site-site distribution functions, of solvation numbers, and of orientational correlation functions of the solvent molecules around the ions. The solvent polarizability has notable effects which cannot be intuitively predicted. In particular, accounting for the polarizability is necessary to reproduce the experimental solvation numbers of small ions. The changes of solvation properties are investigated along the alkali and halide series. By comparing the solvation of ions in acetone to that in acetonitrile, it is shown that the spatial correlations among the solvent molecules around an ion result in a strong screening of the ion-solvent direct intermolecular potential and are essential to understand the changes in the solvation structures and energies between different solvents. The solvation properties derived from the simulations are compared to earlier predictions of the hypernetted chain (HNC) approximation of the molecular Ornstein-Zernike (MOZ) theory [J. Richardi, P. H. Fries, and H. Krienke, J. Chem. Phys. 108, 4079 (1998)]. The MOZ(HNC) formalism gives an overall qualitatively correct picture of the solvation and its various unexpected findings are corroborated. For the larger ions, its predictions become quantitative. The MOZ approach allows the calculation of solvent-solvent and ion-solvent potentials of mean force, which shed light on the 3D labile molecular and ionic architectures in the solution. These potentials of mean force convey unique information which is necessary to fully interpret the angle-averaged structural functions computed from the simulations. Finally, simulations of solutions at finite concentrations show that the solvent-solvent and ion-solvent spatial correlations at infinite dilution are marginally altered by the introduction of fair amounts of ions.
NASA Astrophysics Data System (ADS)
Yang, Shihai; Pandey, Ras B.
2007-04-01
Using a bond fluctuation model (BFM), Monte Carlo simulations are performed to study film growth in a mixture of reactive hydrophobic (H) and hydrophilic (P) groups in a simultaneously reacting and evaporating aqueous (A) solution on a simple three-dimensional lattice. In addition to the excluded volume, short-range phenomenological interactions among the constituents and kinetic functionalities are used to capture their major characteristics. The simulation involves thermodynamic equilibration via stochastic movement of each constituent by the Metropolis algorithm, as well as cross-linking reactions among constituents as the aqueous component evaporates. The film thickness (h) and its interface width (W) are examined with a reactive aqueous solvent for a range of temperatures (T). Results are compared with a previous study [Yang et al., Macromol. Theory Simul. 15, 263 (2006)] with an effective bond fluctuation model (EBFM). Simulation data show a much slower power-law growth for h and W with BFM than with EBFM. With BFM, growth of the film thickness can be described by h ∝ tγ, with a typical value γ1≈0.97 in the initial time regime followed by γ2≈0.77 at T=5, for example. Growth of the interface width can also be described by a power law, W ∝ tβ, with β1≈0.40 initially and β2≈0.25 at a later stage. The corresponding values of the exponents with EBFM are much higher, i.e., γ1≈1.84, γ2≈1.34 and β1≈1.05, β2≈0.60 at T=5. Correct restrictions on the bond length with the excluded volume used with BFM are found to have a greater effect on the steady-state film thickness (hs) and interface width (Ws) at low temperatures than at high temperatures. The relaxation patterns of the interface width with BFM change noticeably from those with EBFM. A better-relaxed film with a smoother surface is thus achieved by the improved cross-linking covalent bond fluctuation model, which is more realistic in capturing appropriate details of systems such as
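The growth exponents quoted above come from power-law fits of thickness against time. A minimal sketch of such a fit on a log-log scale, using synthetic data generated with the BFM initial-regime value γ≈0.97 (the data here are illustrative, not from the simulation):

```python
import numpy as np

# Synthetic film-thickness data following h ~ t^gamma, with gamma = 0.97
# (the BFM initial-regime value quoted in the abstract).
t = np.logspace(0, 3, 50)
gamma_true = 0.97
h = t ** gamma_true

# Estimate the growth exponent as the slope of log h versus log t.
gamma_est, _ = np.polyfit(np.log(t), np.log(h), 1)
print(round(gamma_est, 2))  # -> 0.97
```

In practice one fits the initial and late regimes separately, which is how distinct values such as γ1 and γ2 are extracted.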
Monte Carlo simulation of an expanding gas
NASA Technical Reports Server (NTRS)
Boyd, Iain D.
1989-01-01
By application of simple computer graphics techniques, the statistical performance of two Monte Carlo methods used in the simulation of rarefied gas flows is assessed. Specifically, two direct simulation Monte Carlo (DSMC) methods developed by Bird and Nanbu are considered. The graphics techniques are found to be of great benefit in the reduction and interpretation of the large volume of data generated, enabling important conclusions to be drawn about the simulation results. In particular, it is found that the method of Nanbu suffers from increased statistical fluctuations, prohibiting its use in the solution of practical problems.
Structural Reliability and Monte Carlo Simulation.
ERIC Educational Resources Information Center
Laumakis, P. J.; Harlow, G.
2002-01-01
Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)
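The comparison the abstract describes can be reproduced in miniature: take a hypothetical limit state g = R − S with normally distributed capacity R and load S, compute the exact failure probability Φ(−β), and check it against a direct Monte Carlo estimate. All numbers below are illustrative assumptions, not values from the article:

```python
import math, random

random.seed(1)

# Hypothetical limit state for a simple structure: capacity R and load S,
# both normal; failure when g = R - S < 0.  The exact failure probability
# is Phi(-beta) with beta = (muR - muS) / sqrt(sR^2 + sS^2).
muR, sR = 10.0, 1.0
muS, sS = 6.0, 1.5
beta = (muR - muS) / math.sqrt(sR**2 + sS**2)
p_exact = 0.5 * math.erfc(beta / math.sqrt(2))

# Monte Carlo estimate: count failures over many random draws.
n = 200_000
fails = sum(random.gauss(muR, sR) - random.gauss(muS, sS) < 0 for _ in range(n))
p_mc = fails / n
print(p_exact, p_mc)  # the two estimates agree closely
```

The agreement between the simulated and theoretical failure probabilities is exactly the pedagogical point the article makes.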
Yu Maolin; Du, R.
2005-08-05
Sheet metal stamping is one of the most commonly used manufacturing processes, and hence much research has been carried out on it for economic gain. Searching through the literature, however, it is found that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same die set, product quality may vary owing to a number of factors, such as the inhomogeneity of the workpiece material, loading errors, and lubrication. At present, few methods can predict the quality variation, let alone identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict the product quality variation and identify the sensitive design/process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires a much smaller computational load than the usual incremental FEM and hence can be used to predict the quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has a clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented, including drawing a rectangular box and drawing a two-step rectangular box.
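Latin Hypercube Sampling, the sampling variant named above, stratifies each input dimension into n equal-probability bins and places exactly one sample in each bin, giving better-spread samples than plain random draws. A generic sketch, not tied to the paper's FEM inputs:

```python
import random

def latin_hypercube(n, d, rng=random.Random(0)):
    """n samples in d dimensions on [0,1); each dimension is stratified
    into n bins with exactly one point per bin, bin order shuffled."""
    cols = []
    for _ in range(d):
        col = [(i + rng.random()) / n for i in range(n)]  # one point per stratum
        rng.shuffle(col)                                  # decorrelate dimensions
        cols.append(col)
    return [tuple(c[i] for c in cols) for i in range(n)]

pts = latin_hypercube(8, 2)
# Marginal stratification: exactly one sample in each 1/8-wide bin per dim.
for dim in range(2):
    bins = sorted(int(p[dim] * 8) for p in pts)
    print(bins)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Each sample point would then be mapped to physical parameter ranges (material properties, friction, load) before being fed to the one-step FEM run.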
Modelling dose distribution in tubing and cable using CYLTRAN and ACCEPT Monte Carlo simulation code
Weiss, D.E.; Kensek, R.P.
1993-12-31
One of the difficulties in the irradiation of non-slab geometries, such as a tube, is the uneven penetration of the electrons. A simple model of the distribution of dose in a tube or cable in relation to voltage, composition, wall thickness and diameter can be mapped using the cylinder geometry provided for in the ITS/CYLTRAN code, complete with automatic subzoning. More complex 3D geometry, including the effects of the window foil, backscattering fixtures and beam scanning angles, can be more completely accounted for by using the ITS/ACCEPT code with a line-source update and a system of intersecting wedges to define input zones for mapping dose distributions in a tube. Thus, all of the variables that affect dose distribution can be modelled without the need to run time-consuming and costly factory experiments. The effects of composition changes on dose distribution can also be anticipated.
Monte Carlo simulation of intercalated carbon nanotubes.
Mykhailenko, Oleksiy; Matsui, Denis; Prylutskyy, Yuriy; Le Normand, Francois; Eklund, Peter; Scharff, Peter
2007-01-01
Monte Carlo simulations of the single- and double-walled carbon nanotubes (CNT) intercalated with different metals have been carried out. The interrelation between the length of a CNT, the number and type of metal atoms has also been established. This research is aimed at studying intercalated systems based on CNTs and d-metals such as Fe and Co. Factors influencing the stability of these composites have been determined theoretically by the Monte Carlo method with the Tersoff potential. The modeling of CNTs intercalated with metals by the Monte Carlo method has proved that there is a correlation between the length of a CNT and the number of endo-atoms of specific type. Thus, in the case of a metallic CNT (9,0) with length 17 bands (3.60 nm), in contrast to Co atoms, Fe atoms are extruded out of the CNT if the number of atoms in the CNT is not less than eight. Thus, this paper shows that a CNT of a certain size can be intercalated with no more than eight Fe atoms. The systems investigated are stabilized by coordination of 3d-atoms close to the CNT wall with a radius-vector of (0.18-0.20) nm. Another characteristic feature is that, within the temperature range of (400-700) K, small systems exhibit ground-state stabilization which is not characteristic of the higher ones. The behavior of Fe and Co endo-atoms between the walls of a double-walled carbon nanotube (DW CNT) is explained by a dominating van der Waals interaction between the Co atoms themselves, which is not true for the Fe atoms. PMID:17033783
NASA Astrophysics Data System (ADS)
Lazaro, D.; Buvat, I.; Loudos, G.; Strul, D.; Santin, G.; Giokaris, N.; Donnarieix, D.; Maigne, L.; Spanoudaki, V.; Styliaris, S.; Staelens, S.; Breton, V.
2004-01-01
Monte Carlo simulations are increasingly used in scintigraphic imaging to model imaging systems and to develop and assess tomographic reconstruction algorithms and correction methods for improved image quantitation. GATE (GEANT4 application for tomographic emission) is a new Monte Carlo simulation platform based on GEANT4 dedicated to nuclear imaging applications. This paper describes the GATE simulation of a prototype scintillation camera dedicated to small-animal imaging and consisting of a CsI(Tl) crystal array coupled to a position-sensitive photomultiplier tube. The relevance of GATE to model the camera prototype was assessed by comparing simulated 99mTc point spread functions, energy spectra, sensitivities, scatter fractions and images of a capillary phantom with the corresponding experimental measurements. Results showed an excellent agreement between simulated and experimental data: experimental spatial resolutions were predicted with an error of less than 100 µm. The difference between experimental and simulated system sensitivities for different source-to-collimator distances was within 2%. Simulated and experimental scatter fractions in a [98-182 keV] energy window differed by less than 2% for sources located in water. Simulated and experimental energy spectra agreed very well between 40 and 180 keV. These results demonstrate the ability and flexibility of GATE for simulating original detector designs. The main weakness of GATE concerns the long computation time it requires: this issue is currently under investigation by the GEANT4 and the GATE collaborations.
NASA Astrophysics Data System (ADS)
Jalayer, Fatemeh; Ebrahimian, Hossein
2014-05-01
The first few days after the occurrence of a strong earthquake, in the presence of an ongoing aftershock sequence, are quite critical for emergency decision-making purposes. Epidemic Type Aftershock Sequence (ETAS) models are used frequently for forecasting the spatio-temporal evolution of seismicity in the short term (Ogata, 1988). The ETAS models are epidemic stochastic point-process models in which every earthquake is a potential triggering event for subsequent earthquakes. The ETAS model parameters are usually calibrated a priori, based on a set of events that do not belong to the on-going seismic sequence (Marzocchi and Lombardi 2009). However, adaptive model parameter estimation, based on the events in the on-going sequence, may have several advantages, such as tuning the model to the specific sequence characteristics and capturing possible variations in time of the model parameters. Simulation-based methods can be employed in order to provide a robust estimate of the spatio-temporal seismicity forecast in a prescribed forecasting time interval (i.e., a day) within a post-main shock environment. This robust estimate takes into account the uncertainty in the model parameters, expressed as the posterior joint probability distribution for the model parameters conditioned on the events that have already occurred (i.e., before the beginning of the forecasting interval) in the on-going seismic sequence. A Markov Chain Monte Carlo simulation scheme is used herein in order to sample directly from the posterior probability distribution for the ETAS model parameters. Moreover, the sequence of events that is going to occur during the forecasting interval (and hence affect the seismicity in an epidemic-type model like ETAS) is also generated through a stochastic procedure. The procedure leads to two spatio-temporal outcomes: (1) the probability distribution for the forecasted number of events, and (2) the uncertainty in estimating the
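The Markov Chain Monte Carlo step described above can be sketched with a deliberately simplified stand-in: a single event-rate parameter with a Poisson likelihood and a Gamma prior, sampled by random-walk Metropolis. The real application samples the full ETAS parameter vector under the ETAS intensity; everything below is illustrative:

```python
import math, random

random.seed(2)

# Hypothetical data: 40 events observed over 10 days, with a Gamma(a, b)
# prior on the daily rate mu (a one-parameter stand-in for ETAS).
k, T = 40, 10.0
a, b = 1.0, 1.0

def log_post(mu):
    """Log posterior up to a constant: Poisson(k; mu*T) x Gamma(a, b)."""
    if mu <= 0:
        return -math.inf
    return k * math.log(mu * T) - mu * T + (a - 1) * math.log(mu) - b * mu

mu, chain = 1.0, []
for step in range(60_000):
    prop = mu + random.gauss(0.0, 0.5)       # symmetric random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop                            # Metropolis accept
    if step >= 10_000:                       # discard burn-in
        chain.append(mu)

# Conjugacy gives the exact posterior Gamma(a + k, b + T),
# whose mean is 41/11 ~ 3.73; the chain mean should land close to it.
print(round(sum(chain) / len(chain), 2))
```

Forecasts then become robust in the paper's sense by averaging the simulated seismicity over the sampled parameter values rather than using a single point estimate.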
Monte Carlo simulation framework for TMT
NASA Astrophysics Data System (ADS)
Vogiatzis, Konstantinos; Angeli, George Z.
2008-07-01
This presentation describes a strategy for assessing the performance of the Thirty Meter Telescope (TMT). A Monte Carlo Simulation Framework has been developed to combine optical modeling with Computational Fluid Dynamics (CFD) simulations, Finite Element Analysis (FEA) and controls to model the overall performance of TMT. The framework is driven by a two-year record of observed environmental parameters such as atmospheric seeing, site wind speed and direction, ambient temperature and local sunset and sunrise times, along with telescope azimuth and elevation, at a given sampling rate. The modeled optical, dynamic and thermal seeing aberrations are available in matrix form for distinct values within the range of influencing parameters. These parameters are either part of the framework parameter set or can be derived from them at each time step. As time advances, the aberrations are interpolated and combined based on the current values of their parameters. Different scenarios can be generated based on operating parameters such as venting strategy, optical calibration frequency and heat source control. Performance probability distributions are obtained and provide design guidance. The sensitivity of the system to design, operating and environmental parameters can be assessed in order to maximize the percentage of time the system meets its performance specifications.
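The interpolate-and-combine step can be sketched as a table lookup over a recorded parameter time series, then reduced to the fraction of time within a specification. All grids, values and the 100 nm requirement below are hypothetical placeholders, not TMT numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical precomputed aberration table: RMS wavefront error at a few
# discrete wind speeds, standing in for the framework's aberration matrices.
wind_grid = np.array([0.0, 5.0, 10.0, 15.0])    # m/s
wfe_grid = np.array([50.0, 60.0, 90.0, 140.0])  # nm RMS, illustrative

# Stand-in for the recorded environmental time series.
wind_series = rng.uniform(0.0, 15.0, size=10_000)

# Interpolate the tabulated aberration at each recorded time step.
wfe_series = np.interp(wind_series, wind_grid, wfe_grid)

# Reduce to a performance statistic: fraction of time within spec.
spec = 100.0  # nm, hypothetical requirement
frac_ok = np.mean(wfe_series <= spec)
print(f"fraction of time within spec: {frac_ok:.2f}")
```

A histogram of `wfe_series` would be the one-parameter analogue of the framework's performance probability distributions.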
Monte Carlo simulations of phosphate polyhedron connectivity in glasses
ALAM,TODD M.
2000-01-01
Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest neighbor connectivities between phosphate polyhedron for random, alternating and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments of phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.
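Under the random bonding scenario referred to above, next-nearest-neighbor connectivity statistics reduce to simple products of species fractions. A toy check with two polyhedron species A and B, whose fractions below are illustrative rather than taken from the glasses studied:

```python
import random

random.seed(3)

# Random-bonding sketch: each bond links two polyhedra whose species are
# drawn independently (fractions fA and 1 - fA), so the A-A : A-B : B-B
# connectivity statistics follow fA^2 : 2 fA (1-fA) : (1-fA)^2.
fA = 0.3
n_bonds = 100_000
counts = {"AA": 0, "AB": 0, "BB": 0}
for _ in range(n_bonds):
    pair = sorted(("A" if random.random() < fA else "B") for _ in range(2))
    counts["".join(pair)] += 1

print({k: round(v / n_bonds, 2) for k, v in counts.items()})
# random-scenario expectation: AA 0.09, AB 0.42, BB 0.49
```

Alternating and clustering scenarios would bias the draw of the second species toward the opposite or the same type, respectively, which is the comparison the simulations evaluate against the NMR-derived connectivities.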
Monte Carlo simulation of chromatin stretching.
Aumann, Frank; Lankas, Filip; Caudron, Maïwen; Langowski, Jörg
2006-04-01
We present Monte Carlo (MC) simulations of the stretching of a single chromatin fiber. The model approximates the DNA by a flexible polymer chain with Debye-Hückel electrostatics and uses a two-angle zigzag model for the geometry of the linker DNA connecting the nucleosomes. The latter are represented by flat disks interacting via an attractive Gay-Berne potential. Our results show that the stiffness of the chromatin fiber strongly depends on the linker DNA length. Furthermore, changing the twisting angle between nucleosomes from 90 degrees to 130 degrees increases the stiffness significantly. An increase in the opening angle from 22 degrees to 34 degrees leads to softer fibers for small linker lengths. We observe that fibers containing a linker histone at each nucleosome are stiffer compared to those without the linker histone. The simulated persistence lengths and elastic moduli agree with experimental data. Finally, we show that the chromatin fiber does not behave as an isotropic elastic rod, but its rigidity depends on the direction of deformation: Chromatin is much more resistant to stretching than to bending. PMID:16711856
Combinatorial geometry domain decomposition strategies for Monte Carlo simulations
Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.
2013-07-01
Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)
Monte Carlo Simulation of Critical Casimir Forces
NASA Astrophysics Data System (ADS)
Vasilyev, Oleg A.
2015-03-01
In the vicinity of a second-order phase transition point, long-range critical fluctuations of the order parameter appear. The second-order phase transition in a critical binary mixture in the vicinity of the demixing point belongs to the universality class of the Ising model. The superfluid transition in liquid He belongs to the universality class of the XY model. The confinement of long-range fluctuations causes critical Casimir forces acting on confining surfaces or particles immersed in the critical substance. Over the last decade, critical Casimir forces in binary mixtures and liquid helium have been studied experimentally. The critical Casimir force in a film of a given thickness scales as a universal scaling function of the ratio of the film thickness to the bulk correlation length, divided by the cube of the film thickness. Using Monte Carlo simulations we can compute critical Casimir forces and their scaling functions for lattice Ising and XY models, which correspond to experimental results for the binary mixture and liquid helium, respectively. This chapter provides a description of numerical methods for the computation of critical Casimir interactions for lattice models in plane-plane, plane-particle, and particle-particle geometries.
A multicomb variance reduction scheme for Monte Carlo semiconductor simulators
Gray, M.G.; Booth, T.E.; Kwan, T.J.T.; Snell, C.M.
1998-04-01
The authors adapt a multicomb variance reduction technique used in neutral particle transport to Monte Carlo microelectronic device modeling. They implement the method in a two-dimensional (2-D) MOSFET device simulator and demonstrate its effectiveness in the study of hot electron effects. The simulations show that the statistical variance of hot electrons is significantly reduced with minimal computational cost. The method is efficient, versatile, and easy to implement in existing device simulators.
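A comb is a standard resampling device in particle transport: m equally spaced "teeth" laid over the cumulative weight of a population select m survivors, each carrying equal weight, so the total weight is preserved while the population size is controlled. A generic sketch, not the authors' device-simulator implementation:

```python
import random

random.seed(4)

def comb(particles, m, rng=random):
    """Comb resampling sketch: reduce a weighted particle population to m
    equal-weight particles while preserving total weight.  Teeth at
    tooth + i*spacing select survivors in proportion to their weight."""
    total = sum(w for _, w in particles)
    spacing = total / m
    tooth = rng.uniform(0.0, spacing)      # one random offset for all m teeth
    out, acc, i = [], 0.0, 0
    for state, w in particles:
        acc += w
        while i < m and tooth + i * spacing < acc:
            out.append((state, spacing))
            i += 1
    while i < m:                           # guard against float rounding
        out.append((particles[-1][0], spacing))
        i += 1
    return out

# 1000 particles with exponentially distributed weights -> 100 survivors.
pop = [(k, random.expovariate(1.0)) for k in range(1000)]
reduced = comb(pop, 100)
print(len(reduced))  # -> 100
print(abs(sum(w for _, w in reduced) - sum(w for _, w in pop)) < 1e-6)  # -> True
```

Because heavy particles can be selected by several teeth and light ones by none, the comb concentrates simulation effort where the weight is, which is the mechanism behind the reported variance reduction for rare hot electrons.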
Lattice Monte Carlo simulations of polymer melts
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping
2014-12-01
We use Monte Carlo simulations to study polymer melts consisting of fully flexible and moderately stiff chains in the bond fluctuation model at a volume fraction of 0.5. In order to reduce the local density fluctuations, we test a pre-packing process for the preparation of the initial configurations of the polymer melts, before the excluded volume interaction is switched on completely. This process leads to a significantly faster decrease of the number of overlapping monomers on the lattice. This is useful for simulating very large systems, where the statistical properties of the model with a marginally incomplete elimination of excluded volume violations are the same as those of the model with strictly enforced excluded volume. We find that the internal mean square end-to-end distance for moderately stiff chains in a melt can be very well described by a freely rotating chain model with a precise estimate of the bond-bond orientational correlation between two successive bond vectors in equilibrium. The plot of the probability distributions of the reduced end-to-end distance of chains of different stiffness also shows that the data collapse is excellent and described very well by the Gaussian distribution for ideal chains. However, while our results confirm the systematic deviations from Gaussian statistics in the chain structure factor Sc(q) [minimum in the Kratky plot] found by Wittmer et al. [EPL 77, 56003 (2007)] for fully flexible chains in a melt, we show that for the available chain length these deviations are no longer visible when the chain stiffness is included. The mean square bond length and the compressibility estimated from collective structure factors depend slightly on the stiffness of the chains.
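The freely rotating chain description invoked above has a closed-form mean-square end-to-end distance in terms of the bond-bond orientational correlation c = ⟨cos θ⟩. A quick numerical check of the exact finite-N formula against its familiar large-N limit N b² (1+c)/(1−c); the parameter values here are illustrative, not the paper's measured correlation:

```python
def fr_chain_r2(N, b, c):
    """Exact mean-square end-to-end distance of a freely rotating chain:
    N bonds of length b, successive-bond correlation c = <cos theta>."""
    return N * b**2 * ((1 + c) / (1 - c)
                       - (2 * c * (1 - c**N)) / (N * (1 - c)**2))

# For long chains the exact result approaches N b^2 (1+c)/(1-c), the
# ideal-chain form the melt data are fitted to.
b, c = 1.0, 0.4                     # illustrative values
r2 = fr_chain_r2(10_000, b, c)
ratio = r2 / (10_000 * b**2 * (1 + c) / (1 - c))
print(round(ratio, 3))  # -> 1.0
```

In the paper's procedure, c is estimated from the simulated equilibrium bond-bond correlation and inserted into this form, rather than fitted freely.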
Pothoczki, Szilvia; Temleitner, László; Pusztai, László
2014-02-07
Synchrotron X-ray diffraction measurements have been conducted on liquid phosphorus trichloride, tribromide, and triiodide. Molecular Dynamics simulations for these molecular liquids were performed with a dual purpose: (1) to establish whether existing intermolecular potential functions can provide a picture that is consistent with diffraction data and (2) to generate reliable starting configurations for subsequent Reverse Monte Carlo modelling. Structural models (i.e., sets of coordinates of thousands of atoms) that were fully consistent with experimental diffraction information, within errors, have been prepared by means of the Reverse Monte Carlo method. Comparison with reference systems, generated by hard sphere-like Monte Carlo simulations, was also carried out to demonstrate the extent to which simple space filling effects determine the structure of the liquids (and thus, also estimating the information content of measured data). Total scattering structure factors, partial radial distribution functions and orientational correlations as a function of distances between the molecular centres have been calculated from the models. In general, more or less antiparallel arrangements of the primary molecular axes are found to be the most favourable orientation of two neighbouring molecules. In liquid PBr3 electrostatic interactions seem to play a more important role in determining intermolecular correlations than in the other two liquids; molecular arrangements in both PCl3 and PI3 are largely driven by steric effects.
Liang, Shuhua; Alvarez, Gonzalo; Şen, Cengiz; Moreo, Adriana; Dagotto, Elbio
2012-07-27
An undoped three-orbital spin-fermion model for the Fe-based superconductors is studied via Monte Carlo techniques in two-dimensional clusters. At low temperatures, the magnetic and one-particle spectral properties are in agreement with neutron and photoemission experiments. Our main results are the resistance versus temperature curves that display the same features observed in BaFe(2)As(2) detwinned single crystals (under uniaxial stress), including a low-temperature anisotropy between the two directions followed by a peak at the magnetic ordering temperature, that qualitatively appears related to short-range spin order and concomitant Fermi surface orbital order. PMID:23006104
NASA Astrophysics Data System (ADS)
Cassidy, Jeffrey; Betz, Vaughn; Lilge, Lothar
2015-02-01
Monte Carlo (MC) simulation is recognized as the “gold standard” for biophotonic simulation, capturing all relevant physics and material properties at the perceived cost of high computing demands. Tetrahedral-mesh-based MC simulations are particularly attractive due to the ability to refine the mesh at will to conform to complicated geometries or user-defined resolution requirements. Since no approximations of material or light-source properties are required, MC methods are applicable to the broadest set of biophotonic simulation problems. MC methods also offer attractive implementation features, including inherent parallelism, and permit a continuously variable quality-runtime tradeoff. We demonstrate here a complete MC-based prospective fluence dose evaluation system for interstitial PDT to generate dose-volume histograms on a tetrahedral mesh geometry description. To our knowledge, this is the first such system for general interstitial photodynamic therapy employing MC methods and is therefore applicable to a very broad cross-section of anatomy and material properties. We demonstrate that evaluation of dose-volume histograms is an effective variance-reduction scheme in its own right, which greatly reduces the number of packets, and hence the runtime, required to achieve acceptable result confidence. We conclude that MC methods are feasible for general PDT treatment evaluation and planning, and considerably less costly than widely believed.
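A dose-volume histogram on a tetrahedral mesh reduces per-element dose and volume to the fraction of total volume receiving at least each dose level. A minimal sketch with synthetic dose scores and volumes (illustrative stand-ins, not the authors' PDT system):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic mesh data: each tetrahedron carries a volume and an
# MC-scored dose (distributions below are purely illustrative).
volumes = rng.uniform(0.5, 2.0, size=5_000)          # mm^3 per element
doses = rng.gamma(shape=4.0, scale=5.0, size=5_000)  # Gy per element

def dvh(doses, volumes, d_grid):
    """Cumulative DVH: fraction of total volume with dose >= d."""
    total = volumes.sum()
    return np.array([volumes[doses >= d].sum() / total for d in d_grid])

d_grid = np.linspace(0.0, 60.0, 7)
curve = dvh(doses, volumes, d_grid)
print(curve[0])  # -> 1.0: all volume receives at least 0 Gy
```

Because the DVH aggregates over many elements, its statistical uncertainty is far smaller than that of any single-element fluence estimate, which is the variance-reduction observation the abstract makes.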
Monte Carlo simulations of lattice gauge theories
Rebbi, C
1980-02-01
Monte Carlo simulations done for four-dimensional lattice gauge systems are described, where the gauge group is one of the following: U(1); SU(2); Z_N, i.e., the subgroup of U(1) consisting of the elements e^(2πin/N) with integer n; the eight-element group of quaternions, Q; the 24- and 48-element subgroups of SU(2), denoted by T and O, which reduce to the rotation groups of the tetrahedron and the octahedron when their centers, Z_2, are factored out. All of these groups can be considered subgroups of SU(2) and a common normalization was used for the action. The following types of Monte Carlo experiments are considered: simulations of a thermal cycle, where the temperature of the system is varied slightly every few Monte Carlo iterations and the internal energy is measured; mixed-phase runs, where several Monte Carlo iterations are done at a few temperatures near a phase transition starting with a lattice which is half ordered and half disordered; measurements of averages of Wilson factors for loops of different shape. 5 figures, 1 table. (RWR)
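The thermal-cycle experiment described above can be sketched on a much cheaper stand-in, a 16×16 Ising model rather than a 4D lattice gauge system: ramp the temperature in small steps, do a few Metropolis sweeps at each step, and record the internal energy along the cycle:

```python
import math, random

random.seed(6)

L = 16
spins = [[1] * L for _ in range(L)]   # start fully ordered

def sweep(beta):
    """One Metropolis sweep (L*L single-spin update attempts)."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        nn = (spins[(i+1) % L][j] + spins[(i-1) % L][j]
              + spins[i][(j+1) % L] + spins[i][(j-1) % L])
        dE = 2 * spins[i][j] * nn
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] *= -1

def energy():
    """Internal energy per site (each bond counted once)."""
    e = 0
    for i in range(L):
        for j in range(L):
            e -= spins[i][j] * (spins[(i+1) % L][j] + spins[i][(j+1) % L])
    return e / (L * L)

# Thermal cycle: beta ramped down (heating) then back up (cooling),
# a few sweeps and one energy measurement per step.
betas = [0.6 - 0.02 * k for k in range(16)] + [0.28 + 0.02 * k for k in range(16)]
cycle = []
for beta in betas:
    for _ in range(5):
        sweep(beta)
    cycle.append((beta, energy()))

# Energy per site rises toward zero as the system is heated.
print(cycle[0][1] < cycle[15][1])  # -> True
```

In the gauge-theory case the single-site update becomes an update of a link variable in the chosen group, and hysteresis in the cycle signals a first-order transition, which is what such thermal cycles are used to detect.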
Bernal, M A; Bordage, M C; Brown, J M C; Davídková, M; Delage, E; El Bitar, Z; Enger, S A; Francis, Z; Guatelli, S; Ivanchenko, V N; Karamitros, M; Kyriakou, I; Maigne, L; Meylan, S; Murakami, K; Okada, S; Payno, H; Perrot, Y; Petrovic, I; Pham, Q T; Ristic-Fira, A; Sasaki, T; Štěpán, V; Tran, H N; Villagrasa, C; Incerti, S
2015-12-01
Understanding the fundamental mechanisms involved in the induction of biological damage by ionizing radiation remains a major challenge of today's radiobiology research. The Monte Carlo simulation of physical, physicochemical and chemical processes involved may provide a powerful tool for the simulation of early damage induction. The Geant4-DNA extension of the general purpose Monte Carlo Geant4 simulation toolkit aims to provide the scientific community with an open source access platform for the mechanistic simulation of such early damage. This paper presents the most recent review of the Geant4-DNA extension, as available to Geant4 users since June 2015 (release 10.2 Beta). In particular, the review includes the description of new physical models for the description of electron elastic and inelastic interactions in liquid water, as well as new examples dedicated to the simulation of physicochemical and chemical stages of water radiolysis. Several implementations of geometrical models of biological targets are presented as well, and the list of Geant4-DNA examples is described. PMID:26653251
Kanematsu, Nobuyuki; Inaniwa, Taku; Nakao, Minoru
2016-07-01
In the conventional procedure for accurate Monte Carlo simulation of radiotherapy, the CT number given to each pixel of a patient image is directly converted to mass density and elemental composition using functions that have been calibrated specifically for the relevant x-ray CT system. We propose an alternative approach that performs the conversion in two steps: the first from CT number to density and the second from density to composition. Based on the latest compilation of standard tissues for reference adult male and female phantoms, we sorted the standard tissues into groups by mass density and defined representative tissues by averaging the material properties per group. With these representative tissues, we formulated polyline relations between mass density and each of the following: electron density, stopping-power ratio and elemental densities. We also revised a procedure of stoichiometric calibration for CT-number conversion and demonstrated the two-step conversion method for a theoretically emulated CT system with hypothetical 80 keV photons. For the standard tissues, high correlation was generally observed between mass density and the other densities, excluding those of C and O for the light spongiosa tissues between 1.0 g cm^-3 and 1.1 g cm^-3, which occupy 1% of the human body mass. The polylines fitted to the dominant tissues were generally consistent with similar formulations in the literature. The two-step conversion procedure was demonstrated to be practical and will potentially facilitate Monte Carlo simulation for treatment planning and for retrospective analysis of treatment plans with little impact on the management of planning CT systems. PMID:27300449
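The two-step polyline conversion described above can be sketched with simple piecewise-linear interpolation. The breakpoint tables below are purely illustrative placeholders, not the calibrated relations from the paper; in a real workflow they would come from the stoichiometric calibration and representative-tissue fits.

```python
import numpy as np

# Hypothetical calibration polylines (illustrative placeholders only).
# Step 1: CT number (HU) -> mass density (g/cm^3).
hu_pts      = np.array([-1000.0, 0.0, 100.0, 1600.0])
density_pts = np.array([0.001, 1.0, 1.1, 1.9])
# Step 2: mass density -> electron density relative to water.
rho_pts   = np.array([0.001, 1.0, 1.1, 1.9])
edens_pts = np.array([0.001, 1.0, 1.09, 1.78])

def hu_to_density(hu):
    """First step: piecewise-linear CT-number-to-density conversion."""
    return float(np.interp(hu, hu_pts, density_pts))

def density_to_electron_density(rho):
    """Second step: piecewise-linear density-to-composition conversion."""
    return float(np.interp(rho, rho_pts, edens_pts))

def hu_to_electron_density(hu):
    return density_to_electron_density(hu_to_density(hu))

print(hu_to_electron_density(0.0))  # a water-like voxel maps to 1.0
```

Because the two steps are decoupled, only the first table needs recalibrating when the CT system changes, which is the practical advantage the abstract points to.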
Bostani, Maryam McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; DeMarco, John J.
2014-11-01
Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions that result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built on the MCNPX platform, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scans, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing ranged from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms, including a rectangular homogeneous water-equivalent phantom, an elliptical phantom with three sections (each section homogeneous, but of a different material), and a heterogeneous, complex-geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector-row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom. Simulations using each phantom were performed for FTC, detailed x-y-z TCM, and z-axis-only TCM to obtain
NASA Astrophysics Data System (ADS)
Lin, J. Y. Y.; Aczel, A. A.; Abernathy, D. L.; Nagler, S. E.; Buyers, W. J. L.; Granroth, G. E.
2014-04-01
Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A. A. Aczel et al., Nat. Commun. 3, 1124 (2012), 10.1038/ncomms2117]. These modes are well described by three-dimensional isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states, and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximate Q-independent background to the spectrum at the oscillator mode positions. Temperature-dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T dependence of the scattering from these modes is strongly influenced by the uranium lattice.
Monte Carlo simulation of Alaska wolf survival
NASA Astrophysics Data System (ADS)
Feingold, S. J.
1996-02-01
Alaskan wolves live in a harsh climate and are hunted intensively. Penna's biological aging code, using Monte Carlo methods, has been adapted to simulate wolf survival. It was run on the case in which hunting causes the disruption of wolves' social structure. Social disruption was shown to increase the number of deaths occurring at a given level of hunting. For high levels of social disruption, the population did not survive.
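A minimal sketch of the Penna bit-string aging model referred to above is given below. Each individual carries a genome in which a set bit at position i denotes a deleterious mutation that activates at age i, and death occurs when the number of active mutations reaches a threshold. The hunting mortality, carrying capacity and all other parameter values here are invented for illustration and are not those of the study.

```python
import random

random.seed(1)

GENOME_BITS = 32    # maximum age in time steps
THRESHOLD   = 3     # lethal number of active deleterious mutations
REPRO_AGE   = 8     # minimum age for reproduction
MUT_RATE    = 1     # new deleterious mutations per offspring
HUNT_PROB   = 0.02  # assumed per-step hunting mortality (illustrative)
N_MAX       = 1000  # carrying capacity for the Verhulst crowding factor

def step(population):
    """Advance the Penna bit-string model one time step."""
    n = len(population)
    survivors = []
    for age, genome in population:
        age += 1
        if age >= GENOME_BITS:                 # death from old age
            continue
        if random.random() < HUNT_PROB:        # death from hunting
            continue
        if random.random() < n / N_MAX:        # death from crowding
            continue
        # deleterious mutations whose activation age has been reached
        if bin(genome & ((1 << age) - 1)).count("1") >= THRESHOLD:
            continue
        survivors.append((age, genome))
        if age >= REPRO_AGE:                   # birth with fresh mutations
            child = genome
            for _ in range(MUT_RATE):
                child |= 1 << random.randrange(GENOME_BITS)
            survivors.append((0, child))
    return survivors

population = [(0, 0) for _ in range(200)]
for _ in range(100):
    population = step(population)
print(len(population))  # surviving wolves after 100 time steps
```

Social disruption of the kind studied in the paper could be modeled on top of this by making mortality or reproduction depend on pack membership, which is beyond this sketch.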
Numerical reproducibility for implicit Monte Carlo simulations
Cleveland, M.; Brunner, T.; Gentile, N.
2013-07-01
We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double-precision arithmetic is not associative. In [1], a way of eliminating this roundoff error using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy, by rounding double-precision numbers to fewer significant digits. This integer approach, and other extended-precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations; the non-arbitrary-precision approaches required a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double-precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time step.
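The core numerical issue, and the integer-tally remedy sketched in the abstract, can be illustrated in a few lines; the fixed-point scale factor below is an arbitrary choice for the sketch.

```python
import random

random.seed(0)
values = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(-8, 8)
          for _ in range(10000)]

# Double-precision addition is not associative, so a parallel reduction
# that changes the summation order can change the last bits of the result.
forward  = sum(values)
backward = sum(reversed(values))   # may differ from `forward` in the ulps

# Integer-tally remedy: round every contribution to a fixed-point integer
# before summing.  Integer addition IS associative, so any summation order
# gives an identical tally, at the cost of the discarded low-order digits.
SCALE = 10 ** 6   # arbitrary fixed-point resolution chosen for this sketch

def integer_tally(xs):
    return sum(int(round(x * SCALE)) for x in xs)

assert integer_tally(values) == integer_tally(list(reversed(values)))
reproducible_sum = integer_tally(values) / SCALE
print(reproducible_sum)
```

This is the trade-off the paper describes: exact order-independence in exchange for precision limited by the chosen fixed-point resolution.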
Monte Carlo simulations in Nuclear Medicine
Loudos, George K.
2007-11-26
Molecular imaging technologies provide unique abilities to localise signs of disease before symptoms appear, assist in drug testing, optimize and personalize therapy, and assess the efficacy of treatment regimes for different types of cancer. Monte Carlo simulation packages are used as an important tool for the optimal design of detector systems. In addition they have demonstrated potential to improve image quality and acquisition protocols. Many general-purpose (MCNP, Geant4, etc.) or dedicated codes (SimSET, etc.) have been developed aiming to provide accurate and fast results. Special emphasis will be given to the GATE toolkit. The GATE code, currently under development by the OpenGATE collaboration, is the most accurate and promising code for performing realistic simulations. The purpose of this article is to introduce the non-expert reader to the current status of MC simulations in nuclear medicine, briefly provide examples of currently simulated systems, and present future challenges, including the simulation of clinical studies and dosimetry applications.
Monte Carlo simulation for the transport beamline
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.
Representation and simulation for pyrochlore lattice via Monte Carlo technique
NASA Astrophysics Data System (ADS)
Passos, André Luis; de Albuquerque, Douglas F.; Filho, João Batista Santos
2016-05-01
This work presents a representation of the Kagome and pyrochlore lattices using Monte Carlo simulation, as well as some results on their critical properties. These lattices are composed of corner-sharing triangles and tetrahedra, respectively. The simulation was performed employing the Wolff cluster algorithm for the spin updates within the standard ferromagnetic Ising model. The determination of the critical temperature and exponents was based on the histogram technique and finite-size scaling theory.
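A minimal sketch of a Wolff cluster update for the ferromagnetic Ising model is shown below, on a small square lattice for simplicity rather than the Kagome or pyrochlore geometries of the paper; temperature and lattice size are arbitrary illustrative choices.

```python
import math
import random

random.seed(42)
L = 16                               # square lattice with periodic boundaries
T = 2.0                              # temperature in J/k_B (T_c is about 2.269)
P_ADD = 1.0 - math.exp(-2.0 / T)     # Wolff bond-activation probability

spins = [[1] * L for _ in range(L)]  # start fully ordered

def wolff_step(s):
    """Grow one Wolff cluster of aligned spins and flip it as a whole."""
    i, j = random.randrange(L), random.randrange(L)
    seed = s[i][j]
    cluster = {(i, j)}
    frontier = [(i, j)]
    while frontier:
        x, y = frontier.pop()
        neighbours = (((x + 1) % L, y), ((x - 1) % L, y),
                      (x, (y + 1) % L), (x, (y - 1) % L))
        for nx, ny in neighbours:
            if (nx, ny) not in cluster and s[nx][ny] == seed \
                    and random.random() < P_ADD:
                cluster.add((nx, ny))
                frontier.append((nx, ny))
    for x, y in cluster:
        s[x][y] = -seed

for _ in range(200):
    wolff_step(spins)
m = abs(sum(sum(row) for row in spins)) / (L * L)
print(m)  # magnetization per spin stays sizable below T_c
```

Cluster updates like this avoid the critical slowing-down of single-spin flips near the transition, which is why they pair well with the histogram and finite-size-scaling analyses mentioned above.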
Chang, Qiang; Herbst, Eric
2014-06-01
We have designed an improved algorithm that enables us to simulate the chemistry of cold dense interstellar clouds with a full gas-grain reaction network. The chemistry is treated by a unified microscopic-macroscopic Monte Carlo approach that includes photon penetration and bulk diffusion. To determine the significance of these two processes, we simulate the chemistry with three different models. In Model 1, we use an exponential treatment to follow how photons penetrate and photodissociate ice species throughout the grain mantle. Moreover, the products of photodissociation are allowed to diffuse via bulk diffusion and react within the ice mantle. Model 2 is similar to Model 1 but with a slower bulk diffusion rate. A reference Model 0, which only allows photodissociation reactions to occur on the top two layers, is also simulated. Photodesorption is assumed to occur from the top two layers in all three models. We found that the abundances of major stable species in grain mantles do not differ much among these three models, and the results of our simulation for the abundances of these species agree well with observations. Likewise, the abundances of gas-phase species in the three models do not vary. However, the abundances of radicals in grain mantles can differ by up to two orders of magnitude depending upon the degree of photon penetration and the bulk diffusion of photodissociation products. We also found that complex molecules can be formed at temperatures as low as 10 K in all three models.
Shell model Monte Carlo methods
Koonin, S.E.; Dean, D.J.
1996-10-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such shell model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground-state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
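The weighted Beer-Lambert scaling step can be sketched as follows, assuming a two-layer model with invented reflectance values, time-in-layer fractions, and optical coefficients; in the paper, the layer fractions come from the closed-form average-classical-path expression rather than being tabulated by hand.

```python
import math

# Hypothetical zero-absorption time-resolved reflectance R0(t) from an
# initial MC run (illustrative values, arbitrary units), sampled at times t_ps.
t_ps = [50.0, 100.0, 200.0, 400.0]    # time bins (ps)
r0   = [1.0, 0.6, 0.25, 0.05]         # zero-absorption reflectance

# Assumed fraction of the photon's time spent in layer 1 per time bin.
frac_layer1 = [0.8, 0.7, 0.6, 0.5]
mu_a = (0.01, 0.03)   # absorption coefficients of layers 1 and 2 (1/mm)
v = 0.214             # speed of light in tissue, mm/ps (refractive index ~1.4)

def scaled_reflectance(k):
    """Weighted Beer-Lambert scaling of the zero-absorption curve at bin k."""
    path = v * t_ps[k]                  # total classical path length
    l1 = frac_layer1[k] * path          # path length spent in layer 1
    l2 = (1.0 - frac_layer1[k]) * path  # path length spent in layer 2
    return r0[k] * math.exp(-mu_a[0] * l1 - mu_a[1] * l2)

curve = [scaled_reflectance(k) for k in range(len(t_ps))]
print(curve)
```

The point of the technique is visible in the structure: one zero-absorption simulation can be rescaled for any combination of layer absorption coefficients without storing per-photon path lengths.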
Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy
NASA Astrophysics Data System (ADS)
Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James
2012-03-01
Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). The skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis), as well as laterally asymmetric features (e.g., melanocytic invasion), were modeled in an inhomogeneous Monte Carlo model.
Maier, Thomas A; Alvarez, Gonzalo; Summers, Michael Stuart; Schulthess, Thomas C
2010-01-01
Using dynamic cluster quantum Monte Carlo simulations, we study the superconducting behavior of a 1/8-doped two-dimensional Hubbard model with imposed unidirectional stripelike charge-density-wave modulation. We find a significant increase of the pairing correlations and critical temperature relative to the homogeneous system when the modulation length scale is sufficiently large. With a separable form of the irreducible particle-particle vertex, we show that optimized superconductivity is obtained for a moderate modulation strength due to a delicate balance between the modulation-enhanced pairing interaction and a concomitant suppression of the bare particle-particle excitations by a modulation reduction of the quasiparticle weight.
Monte-Carlo simulation of Callisto's exosphere
NASA Astrophysics Data System (ADS)
Vorburger, A.; Wurz, P.; Lammer, H.; Barabash, S.; Mousis, O.
2015-12-01
We model Callisto's exosphere based on its ice as well as its non-ice surface via a Monte Carlo exosphere model. For the ice component we implement two putative compositions that have been computed from two possible extreme formation scenarios of the satellite. One composition represents the oxidizing state and is based on the assumption that the building blocks of Callisto were formed in the protosolar nebula, and the other represents the reducing state of the gas, based on the assumption that the satellite accreted from solids condensed in the jovian sub-nebula. For the non-ice component we implemented the compositions of typical CI as well as L-type chondrites. Both chondrite types have been suggested to best represent Callisto's non-ice composition. As release processes we consider surface sublimation, ion sputtering and photon-stimulated desorption. Particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction, return to the surface, are ionized, or are fragmented. Our density profiles show that whereas the sublimated species dominate close to the surface on the sunlit side, their density profiles (with the exception of H and H2) decrease much more rapidly than those of the sputtered particles. The Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP), will investigate Callisto's exosphere during the JUICE mission. Our simulations show that NIM will be able to detect sublimated and sputtered particles from both the ice and non-ice surface. NIM's measured chemical composition will allow us to distinguish between different formation scenarios.
[Monte Carlo simulation of FCS in a laser gradient field].
Chen, B; Meng, F; Ma, H; Ding, Y; Jin, L; Chen, D
2001-06-01
Fluorescence correlation spectroscopy (FCS) is a powerful tool for probing biological processes inside living cells. It measures the fluorescence fluctuations of a small number of molecules and derives information on molecular kinetics and reactions. We have developed a Monte Carlo model to simulate the Brownian motion of Rayleigh particles in a laser gradient field. The simulation reveals relations between the laser field strength and parameters measured by FCS, such as the diffusion coefficient and number density of the particles. The simulated results agree qualitatively with experimental results obtained using fluorescent spheres. Empirical relations from the simulation are also discussed. PMID:12947641
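A minimal sketch of the underlying idea uses an overdamped Langevin update, with the laser gradient force approximated as a harmonic trap; all parameter values are illustrative, not those of the paper.

```python
import math
import random

random.seed(7)

# Overdamped Brownian motion in a harmonic approximation of the optical
# gradient force F = -k_trap * x (all parameter values are illustrative).
D      = 1.0       # diffusion coefficient
k_trap = 0.5       # trap stiffness over friction (inverse time units)
dt     = 1e-3      # time step
steps  = 200000

x = 0.0
second_moment = 0.0
noise_amplitude = math.sqrt(2.0 * D * dt)
for _ in range(steps):
    x += -k_trap * x * dt + noise_amplitude * random.gauss(0.0, 1.0)
    second_moment += x * x

var = second_moment / steps
print(var)  # should approach the equilibrium variance D / k_trap
```

In a full FCS simulation, many such trajectories through the detection volume would be converted to intensity traces and autocorrelated to extract diffusion coefficients and particle numbers.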
Coherent Scattering Imaging Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Hassan, Laila Abdulgalil Rafik
Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing, and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low-dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal-to-noise ratio (SNR). Also, contrast increased as the source voltage increased. Increasing the grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to the noise level. The optimal source-to-sample distance was such that the source is located at the focal distance of the grid. A carcinoma lump of 0.5 × 0.5 × 0.5 cm³ was detectable, which is reasonable considering the high noise due to the use of a relatively small number of incident photons for computational reasons. Further study is needed to assess the effects of breast density and breast thickness.
Zhang, Minhua; Chen, Lihang; Yang, Huaming; Sha, Xijiang; Ma, Jing
2016-07-01
Gibbs ensemble Monte Carlo simulation with configurational bias was employed to study the vapor-liquid equilibrium (VLE) for pure acetic acid and for a mixture of acetic acid and ethylene. An improved united-atom force field for acetic acid based on a Lennard-Jones functional form was proposed. The Lennard-Jones well depth and size parameters for the carboxyl oxygen and hydroxyl oxygen were determined by fitting the interaction energies of acetic acid dimers to the Lennard-Jones potential function. Four different acetic acid dimers and their proportions were considered when the force field was optimized. It was found that the new optimized force field provides a reasonable description of the vapor-liquid phase equilibrium for pure acetic acid and for the mixture of acetic acid and ethylene. Accurate values were obtained for the saturated liquid density of the pure compound (average deviation: 0.84%) and for the critical points. The new optimized force field demonstrated greater accuracy and reliability in calculations of the solubility of the mixture of acetic acid and ethylene as compared with the results obtained with the original TraPPE-UA force field. PMID:27324633
NASA Astrophysics Data System (ADS)
WöHling, Thomas; Vrugt, Jasper A.
2011-04-01
In the past two decades significant progress has been made toward the application of inverse modeling to estimate the water retention and hydraulic conductivity functions of the vadose zone at different spatial scales. Many of these contributions have focused on estimating only a few soil hydraulic parameters, without recourse to appropriately capturing and addressing spatial variability. The assumption of a homogeneous medium significantly simplifies the complexity of the resulting inverse problem, allowing the use of classical parameter estimation algorithms. Here we present an inverse modeling study with a high degree of vertical complexity that involves calibration of a 25-parameter Richards-based HYDRUS-1D model using in situ measurements of volumetric water content and pressure head from multiple depths in a heterogeneous vadose zone in New Zealand. We first determine the trade-off in the fitting of both data types using the AMALGAM multiple-objective evolutionary search algorithm. Then we adopt a Bayesian framework and derive posterior probability density functions of parameter and model predictive uncertainty using the recently developed differential evolution adaptive Metropolis (DREAM(ZS)) Markov chain Monte Carlo scheme. We use four different formulations of the likelihood function, each differing in its underlying assumption about the statistical properties of the error residual and the data used for calibration. We show that AMALGAM and DREAM(ZS) can solve for the 25 hydraulic parameters describing the water retention and hydraulic conductivity functions of the multilayer heterogeneous vadose zone. Our study clearly highlights that multiple data types are simultaneously required in the likelihood function to result in an accurate soil hydraulic characterization of the vadose zone of interest. Remaining error residuals are most likely caused by model deficiencies that are not encapsulated by the multilayer model and cannot be accessed by the
Monte Carlo simulations of medical imaging modalities
Estes, G.P.
1998-09-01
Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.
Monte-Carlo Simulation Balancing in Practice
NASA Astrophysics Data System (ADS)
Huang, Shih-Chieh; Coulom, Rémi; Lin, Shun-Shii
Simulation balancing is a new technique to tune the parameters of a playout policy for a Monte-Carlo game-playing program. So far, this algorithm has only been tested in a very artificial setting: it was limited to 5×5 and 6×6 Go, and required a stronger external program that served as a supervisor. In this paper, the effectiveness of simulation balancing is demonstrated in a more realistic setting. A state-of-the-art program, Erica, learned an improved playout policy on the 9×9 board without requiring any external expert to provide position evaluations. The evaluations were collected by letting the program analyze positions by itself. The previous version of Erica learned pattern weights with the minorization-maximization algorithm. Thanks to simulation balancing, its winning rate against Fuego 0.4 was improved from 69% to 78%.
ERIC Educational Resources Information Center
Hannan, Peter J.; Murray, David M.
1996-01-01
A Monte Carlo study compared performance of linear and logistic mixed-model analyses of simulated community trials having specific event rates, intraclass correlations, and degrees of freedom. Results indicate that in studies with adequate denominator degrees of freedom, the researcher may use either method of analysis, with certain cautions. (SLD)
NASA Astrophysics Data System (ADS)
Eising, G.; Kooi, B. J.
2012-06-01
Growth and decay of clusters at temperatures below Tc have been studied for a two-dimensional Ising model for both square and triangular lattices using Monte Carlo (MC) simulations and the enumeration of lattice animals. For the lattice animals, all unique cluster configurations with their internal bonds were identified up to 25 spins for the triangular lattice and up to 29 spins for the square lattice. From these configurations, the critical cluster sizes for nucleation have been determined based on two (thermodynamic) definitions. From the Monte Carlo simulations, the critical cluster size is also obtained by studying the decay and growth of inserted, most compact clusters of different sizes. A good agreement is found between the results from the MC simulations and one of the definitions of critical size used for the lattice animals at temperatures T > ~0.4 Tc for the square lattice and T > ~0.2 Tc for the triangular lattice (for the range of external fields H considered). At low temperatures (T ≈ 0.2 Tc for the square lattice and T ≈ 0.1 Tc for the triangular lattice), magic numbers are found in the size distributions during the MC simulations. However, these numbers are not present in the critical cluster sizes based on the MC simulations, as they are present for the lattice animal data. In order to achieve these magic numbers in the critical cluster sizes based on the MC simulation, the temperature has to be reduced further to T ≈ 0.15 Tc for the square lattice. The observed evolution of magic numbers as a function of temperature is rationalized in the present work.
Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.
Leigh, Jessica W; Bryant, David
2015-09-01
Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which the Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments, rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters gets too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology. PMID:26012871
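The key idea of sampling parameter combinations rather than enumerating them can be sketched with a toy six-parameter design; the grid sizes and number of sampled runs are arbitrary illustrative choices.

```python
import random

random.seed(3)

# A toy simulation experiment with 6 parameters, each on a grid of 10 values.
# The full factorial design needs 10**6 runs, while Monte Carlo sampling of
# parameter combinations keeps the cost fixed regardless of dimension.
levels = [[i / 9.0 for i in range(10)] for _ in range(6)]

exhaustive_runs = 1
for axis in levels:
    exhaustive_runs *= len(axis)

def sample_design(n_runs):
    """Draw n_runs random parameter combinations instead of iterating all."""
    return [tuple(random.choice(axis) for axis in levels)
            for _ in range(n_runs)]

design = sample_design(500)
print(exhaustive_runs, len(design))  # 1000000 500
```

Each sampled tuple would then be fed to the simulation in place of one cell of the exhaustive grid, so the budget is set by the analyst rather than by the dimensionality of the parameter space.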
A tetrahedron-based inhomogeneous Monte Carlo optical simulator
Shen, H; Wang, G
2010-01-01
Optical imaging has been widely applied in preclinical and clinical applications. Fifteen years ago, an efficient Monte Carlo program, ‘MCML’, was developed for use with multi-layered turbid media and has gained popularity in the field of biophotonics. Currently, there is an increasingly pressing need for simulation tools more powerful than MCML in order to study light propagation phenomena in complex inhomogeneous objects, such as the mouse. Here we report a tetrahedron-based inhomogeneous Monte Carlo optical simulator (TIM-OS) to address this issue. By modeling an object as a tetrahedron-based inhomogeneous finite-element mesh, TIM-OS can determine the photon–triangle interaction recursively and rapidly. In numerical simulation, we have demonstrated the correctness and efficiency of TIM-OS. PMID:20090182
Monte Carlo simulations within avalanche rescue
NASA Astrophysics Data System (ADS)
Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg
2016-04-01
Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.
2014-05-29
We present a multilevel Monte Carlo numerical method, new to plasma physics, that is highly efficient for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^{–2}) or O(ε^{–2}(ln ε)^{2}), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^{–3}) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε=10^{–5}. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
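The multilevel idea of combining solutions with varying numbers of timesteps can be sketched for a toy Langevin-type SDE (geometric Brownian motion under Euler–Maruyama, not the Landau–Fokker–Planck problem of the paper). Coupling the coarse and fine paths through shared Brownian increments is the essential ingredient:

```python
import math
import random

def mlmc_level(l, n, rng, T=1.0, x0=1.0, mu=0.05, sigma=0.2, M=2):
    """Mean of the fine-minus-coarse payoff difference at level l.
    The coarse path reuses the summed Brownian increments of the fine
    path, which is what keeps the level variance small."""
    hf = T / M ** l               # fine timestep at this level
    total = 0.0
    for _ in range(n):
        xf = xc = x0
        dwc = 0.0
        for step in range(M ** l):
            dw = rng.gauss(0.0, math.sqrt(hf))
            xf += mu * xf * hf + sigma * xf * dw  # Euler-Maruyama, fine grid
            dwc += dw
            if l > 0 and (step + 1) % M == 0:
                xc += mu * xc * (M * hf) + sigma * xc * dwc  # coarse grid
                dwc = 0.0
        total += xf - (xc if l > 0 else 0.0)
    return total / n

def mlmc_estimate(levels=5, n0=2000, seed=42):
    """Telescoping sum: E[X_L] = E[X_0] + sum of level corrections,
    with fewer samples allocated to the finer (costlier) levels."""
    rng = random.Random(seed)
    return sum(mlmc_level(l, max(n0 // 2 ** l, 50), rng) for l in range(levels))
```

For these parameters the exact answer is x0·e^{μT} ≈ 1.051; the point is that the correction levels need far fewer samples than a single-level estimator run entirely on the finest grid.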
Coherent scatter imaging Monte Carlo simulation.
Hassan, Laila; MacDonald, Carolyn A
2016-07-01
Conventional mammography can suffer from poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter slot scan imaging is an imaging technique which provides additional information and is compatible with conventional mammography. A Monte Carlo simulation of coherent scatter slot scan imaging was performed to assess its performance and provide system optimization. Coherent scatter could be exploited using a system similar to a conventional slot scan mammography system with antiscatter grids tilted at the characteristic angle of cancerous tissues. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The simulated carcinomas were detectable for tumors as small as 5 mm in diameter, so coherent scatter analysis using a wide-slot setup could be promising as an enhancement to screening mammography. Employing coherent scatter information simultaneously with conventional mammography could yield a conventional high-spatial-resolution image with additional coherent scatter information. PMID:27610397
Kinetic Monte Carlo simulations of proton conductivity
NASA Astrophysics Data System (ADS)
Masłowski, T.; Drzewiński, A.; Ulner, J.; Wojtkiewicz, J.; Zdanowska-Frączek, M.; Nordlund, K.; Kuronen, A.
2014-07-01
The kinetic Monte Carlo method is used to model the dynamic properties of proton diffusion in anhydrous proton conductors. The results are discussed with reference to the two-step process called the Grotthuss mechanism, which is widely believed to be responsible for fast proton mobility. We show in detail that the relative frequency of reorientation and diffusion processes is crucial for the conductivity. Moreover, the dependence of the current on proton concentration is analyzed. In order to test our microscopic model, proton transport in polymer electrolyte membranes based on benzimidazole (C7H6N2) molecules is studied.
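A minimal kinetic Monte Carlo sketch of the two-step Grotthuss picture (with illustrative rates, not the model of the paper) shows how the relative frequency of reorientation and transfer events sets the effective transport rate:

```python
import math
import random

def grotthuss_kmc(k_reor=1.0, k_trans=5.0, n_events=10000, seed=3):
    """Gillespie-style KMC of a proton that must wait for a molecular
    reorientation (rate k_reor) before each transfer hop (rate k_trans).
    Returns the effective hop rate, which approaches
    k_reor*k_trans/(k_reor + k_trans): the slower step dominates."""
    rng = random.Random(seed)
    t, hops, ready = 0.0, 0, False
    for _ in range(n_events):
        rate = k_trans if ready else k_reor        # the one active process
        t += -math.log(1.0 - rng.random()) / rate  # exponential waiting time
        if ready:
            hops += 1       # proton transfers to the next site
            ready = False   # and must wait for the next reorientation
        else:
            ready = True    # molecule reorients; transfer now possible
    return hops / t
```

With the rates above the effective rate is 1/(1/1 + 1/5) ≈ 0.83 hops per unit time; making either rate much smaller pins the conductivity to that step, which is the sensitivity the abstract describes.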
Monte Carlo Simulations and Generation of the SPI Response
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.
2003-01-01
In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and in-flight calibration data with MGEANT simulations.
Papadimitroulas, P; Kagadis, GC; Loudos, G
2014-06-15
Purpose: Our purpose is to evaluate the absorbed dose administered in pediatric nuclear imaging studies. Monte Carlo simulations incorporating pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the “IT'IS Foundation”. The series of phantoms used in our work includes 6 models in the range of 5–14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms in GATE simulations. The resolution of the phantoms was set to 2 mm^3. The most important organ densities were simulated according to the GATE “Materials Database”. Several radiopharmaceuticals used in SPECT and PET applications are tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl after whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept lower than 5%. The S-factors for each target organ are calculated in Gy/(MBq*sec), with the highest dose being absorbed in kidneys and pancreas (9.29×10^10 and 0.15×10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and
Mohammadyari, P; Faghihi, R; Shirazi, M Mosleh; Lotfi, M; Meigooni, A
2014-06-01
Purpose: AccuBoost is a modern method of breast brachytherapy in which a boost dose is delivered to tissue compressed by a mammography unit. The dose distribution in uncompressed tissue, as well as in compressed tissue, is important and should be characterized. Methods: In this study, the mechanical behavior of the breast under mammography loading, the displacement of breast tissue, and the dose distribution in compressed and uncompressed tissue are investigated. Dosimetry was performed by two methods: Monte Carlo simulation using the MCNP5 code and thermoluminescence dosimeters. For the Monte Carlo simulations, the dose values in a cubical lattice were calculated using tally F6. The displacement of the breast elements was simulated by a finite element model and calculated using ABAQUS software, from which the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from MR images of 6 volunteers. Experimental dosimetry was performed by placing the thermoluminescence dosimeters into a polyvinyl alcohol breast-equivalent phantom and on the proximal edge of the compression plates near the chest. Results: The results indicate that the cone applicators deliver more than 95% of the dose to a depth of 5 to 17 mm, while the round applicator increases the skin dose. Nodal displacement under gravity and a 60 N force, i.e. under mammography compression, was determined, with 43% contraction in the loading direction and 37% expansion in the orthogonal orientation. Finally, the thermoluminescence dosimeter results are consistent with MCNP5 in the breast phantom and in the chest skin, with average percentage differences of 13.7±5.7 and 7.7±2.3, respectively. Conclusion: The major advantage of this kind of dosimetry is the ability to calculate 3D dose by FE modeling. Finally, polyvinyl alcohol is a reliable breast-tissue-equivalent material for a dosimetric phantom that provides the ability of TLD dosimetry
Monte Carlo simulation of electron swarm parameters in O2
NASA Astrophysics Data System (ADS)
Settaouti, A.; Settaouti, L.
2007-03-01
Oxygen plasmas have found numerous applications in plasma processing, such as reactive sputtering, dry etching of polymers, oxidation, and resist removal of semiconductors. Swarm and transport coefficients are essential for better understanding and modelling of these gas discharge processes. The electron swarms in a gas under the influence of an electric field can be simulated with the help of a Monte Carlo method. The swarm parameters evaluated are compared with experimental results.
Choi, Myunghee; Chan, Vincent S.
2014-02-28
This final report describes the work performed under U.S. Department of Energy Cooperative Agreement DE-FC02-08ER54954 for the period April 1, 2011 through March 31, 2013. The goal of this project was to perform iterated finite-orbit Monte Carlo simulations with full-wave fields for modeling tokamak ICRF wave heating experiments. In year 1, the finite-orbit Monte Carlo code ORBIT-RF and its iteration algorithms with the full-wave code AORSA were improved to enable systematic study of the factors responsible for the discrepancy between the simulated and the measured fast-ion FIDA signals in the DIII-D and NSTX ICRF fast-wave (FW) experiments. In year 2, ORBIT-RF was coupled to the TORIC full-wave code for a comparative study of ORBIT-RF/TORIC and ORBIT-RF/AORSA results in FW experiments.
McNally, Kevin; Cotton, Richard; Cocker, John; Jones, Kate; Bartels, Mike; Rick, David; Price, Paul; Loizou, George
2012-01-01
There are numerous biomonitoring programs, both recent and ongoing, to evaluate environmental exposure of humans to chemicals. Due to the lack of exposure and kinetic data, the correlation of biomarker levels with exposure concentrations leads to difficulty in utilizing biomonitoring data for biological guidance values. Exposure reconstruction or reverse dosimetry is the retrospective interpretation of external exposure consistent with biomonitoring data. We investigated the integration of physiologically based pharmacokinetic modelling, global sensitivity analysis, Bayesian inference, and Markov chain Monte Carlo simulation to obtain a population estimate of inhalation exposure to m-xylene. We used exhaled breath and venous blood m-xylene and urinary 3-methylhippuric acid measurements from a controlled human volunteer study in order to evaluate the ability of our computational framework to predict known inhalation exposures. We also investigated the importance of model structure and dimensionality with respect to its ability to reconstruct exposure. PMID:22719759
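The Bayesian/MCMC step can be illustrated with a deliberately simplified sketch: a one-parameter linear "kinetic" model, biomarker = k × exposure, with Gaussian measurement error, sampled by random-walk Metropolis. The real framework uses a full PBPK model; the values of k, sigma, and the data below are made up for illustration:

```python
import math
import random

def mh_exposure(biomarkers, k=0.8, sigma=0.1, n_iter=20000, seed=11):
    """Metropolis sampler for the posterior of an exposure concentration
    given biomarker measurements, with a flat prior on exposure > 0."""
    rng = random.Random(seed)

    def log_post(e):
        if e <= 0:
            return -math.inf  # prior support: positive exposures only
        return -sum((y - k * e) ** 2 for y in biomarkers) / (2 * sigma ** 2)

    e, lp = 1.0, log_post(1.0)
    samples = []
    for i in range(n_iter):
        prop = e + rng.gauss(0.0, 0.05)   # random-walk proposal
        lpp = log_post(prop)
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            e, lp = prop, lpp             # accept
        if i >= n_iter // 4:              # discard burn-in
            samples.append(e)
    return sum(samples) / len(samples)    # posterior-mean exposure
```

For biomarker readings clustered near 1.6, the reconstructed exposure concentrates near 1.6/k = 2.0, which is the reverse-dosimetry inference in miniature.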
NASA Astrophysics Data System (ADS)
Wilson, J. A.; Richardson, J. A.
2015-12-01
Traditional methods used to calculate the recurrence rate of volcanism, such as linear regression, maximum likelihood and Weibull-Poisson distributions, are effective at estimating recurrence rate and confidence level, but these methods are unable to estimate the uncertainty in recurrence rate through time. We propose a new model for estimating recurrence rate and uncertainty, the Volcanic Event Recurrence Rate Model (VERRM). VERRM is an algorithm that incorporates radiometric ages, volcanic stratigraphy and paleomagnetic data into a Monte Carlo simulation, generating acceptable ages for each event. Each model run is used to calculate a recurrence rate using a moving average window. These rates are binned into discrete time intervals and plotted using the 5th, 50th and 95th percentiles. We present recurrence rates from Cima Volcanic Field (CA), Yucca Mountain (NV) and Arsia Mons (Mars). Results from Cima Volcanic Field illustrate how several K-Ar ages with large uncertainties obscure three well-documented volcanic episodes. Yucca Mountain results are similar to published rates and illustrate the effect of using the same radiometric age for multiple events in a spatially defined cluster. Arsia Mons results show a clear waxing/waning of volcanism through time. VERRM output may be used for a spatio-temporal model or to plot uncertainty in quantifiable parameters such as eruption volume or geochemistry. Alternatively, the algorithm may be reworked to constrain geomagnetic chrons. VERRM is implemented in Python 2.7 and takes advantage of the NumPy, SciPy and matplotlib libraries for optimization and quality plot presentation. A typical Monte Carlo simulation of 40 volcanic events takes a few minutes to a couple of hours to complete, depending on the bin size used to assign ages.
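The core of the approach can be sketched as follows: a hypothetical stdlib-only reimplementation (not the authors' code) that draws an age for each event from its radiometric uncertainty, computes a rate per run, and reports percentiles across runs:

```python
import random
import statistics

def recurrence_percentiles(ages, sigmas, n_runs=2000, seed=7):
    """Monte Carlo recurrence-rate estimate: each run resamples every
    event age (in years) from a Gaussian defined by its radiometric
    uncertainty, with ordering imposed here simply by sorting. The
    5th/50th/95th percentiles summarize the rate distribution."""
    rng = random.Random(seed)
    rates = []
    for _ in range(n_runs):
        draw = sorted(rng.gauss(a, s) for a, s in zip(ages, sigmas))
        span = draw[-1] - draw[0]          # oldest-to-youngest interval
        if span > 0:
            rates.append((len(draw) - 1) / span * 1e6)  # events per Myr
    cuts = statistics.quantiles(rates, n=20)            # 5% steps
    return cuts[0], statistics.median(rates), cuts[-1]  # 5th, 50th, 95th
```

For four events at 100, 250, 400 and 700 ka with 20 ka uncertainties, the median rate lands near 5 events/Myr, with the spread between the 5th and 95th percentiles reflecting the age uncertainties, which is the quantity the traditional estimators cannot provide.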
Wiebe, J; Ploquin, N
2014-08-15
Monte Carlo (MC) simulation is accepted as the most accurate method of predicting dose deposition in radiation treatment planning. Current dose calculation algorithms used for treatment planning can become inaccurate when small radiation fields and tissue inhomogeneities are present. At our centre the Novalis Classic linear accelerator (linac) is used for Stereotactic Radiosurgery (SRS). The first MC model to date of the Novalis Classic linac was developed at our centre using the Geant4 Application for Tomographic Emission (GATE) simulation platform. GATE is relatively new, open-source MC software built from CERN's Geometry and Tracking 4 (Geant4) toolkit. The linac geometry was modeled using manufacturer specifications, as well as in-house measurements of the micro-MLCs. Among multiple model parameters, the initial electron beam was adjusted so that calculated depth dose curves agreed with measured values. Simulations were run on the European Grid Infrastructure through GateLab. Simulation time is approximately 8 hours on GateLab for a complete head model simulation to acquire a phase space file. Current results have a majority of points within 3% of the measured dose values for square field sizes ranging from 6×6 mm^2 to 98×98 mm^2 (the maximum field size on the Novalis Classic linac) at 100 cm SSD. The x-ray spectrum was determined from the MC data as well. The model provides an investigation into GATE's capabilities and has the potential to be used as a research tool and an independent dose calculation engine for clinical treatment plans.
NASA Technical Reports Server (NTRS)
Karakoylu, E.; Franz, B.
2016-01-01
This work is a first attempt at quantifying uncertainties in ocean remote sensing reflectance satellite measurements, based on 1000 Monte Carlo iterations. The data source is a SeaWiFS 4-day composite from 2003. The uncertainty is computed for remote sensing reflectance (Rrs) at 443 nm.
Quantum Monte Carlo simulations in novel geometries
NASA Astrophysics Data System (ADS)
Iglovikov, Vladimir
Quantum Monte Carlo simulations are giving increasing insight into the physics of strongly interacting bosons, spins, and fermions. Initial work focused on the simplest geometries, like a 2D square lattice. Increasingly, modern research is turning to richer structures such as the honeycomb lattice of graphene, the Lieb lattice of the CuO2 planes of cuprate superconductors, the triangular lattice, and coupled layers. These new geometries possess unique features which affect the physics in profound ways, e.g., the vanishing density of states and relativistic dispersion ("Dirac point") of a honeycomb lattice, frustration on a triangular lattice, and flat bands on a Lieb lattice. This thesis concerns both exploring the performance of QMC algorithms on different geometries (primarily via the "sign problem") and applying those algorithms to several interesting open problems.
Resist develop prediction by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Sohn, Dong-Soo; Jeon, Kyoung-Ah; Sohn, Young-Soo; Oh, Hye-Keun
2002-07-01
Various resist development models have been suggested to describe the phenomena, from Dill's pioneering model in 1975 to Shipley's recent enhanced notch model. The statistical Monte Carlo method can be applied to processes such as development and post-exposure bake. The motions of the developer during the development process were traced using this method. We have assumed that the surface edge roughness of the resist depends on the weight percentage of protected and de-protected polymer in the resist. The results agree well with other published reports. This study can be helpful for the development of new photoresists and developers for patterning device features smaller than 100 nm.
Atomistic Monte Carlo Simulation of Lipid Membranes
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules, the analysis of which demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion-angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314
Leonhard, Kai; Prausnitz, John M.; Radke, Clayton J.
2004-01-01
Amino acid residue–solvent interactions are required for lattice Monte Carlo simulations of model proteins in water. In this study, we propose an interaction-energy scale that is based on the interaction scale by Miyazawa and Jernigan. It permits systematic variation of the amino acid–solvent interactions by introducing a contrast parameter for the hydrophobicity, Cs, and a mean attraction parameter for the amino acids, ω. Changes in the interaction energies strongly affect many protein properties. We present an optimized energy parameter set for best representing realistic behavior typical for many proteins (fast folding and high cooperativity for single chains). Our optimal parameters feature a much weaker hydrophobicity contrast and mean attraction than does the original interaction scale. The proposed interaction scale is designed for calculating the behavior of proteins in bulk and at interfaces as a function of solvent characteristics, as well as protein size and sequence. PMID:14739322
NASA Astrophysics Data System (ADS)
Shi, Feng; Wang, Dezhen; Ren, Chunsheng
2008-06-01
Atmospheric-pressure nonequilibrium discharge plasmas are widely applied in modern plasma processing technology. Simulations of discharge in pure Ar and pure He gases at atmospheric pressure, driven by a high-voltage trapezoidal nanosecond pulse, have been performed using a one-dimensional particle-in-cell Monte Carlo collision (PIC-MCC) model coupled with a renormalization and weighting procedure (mapping algorithm). Numerical results show that the characteristics of discharge in both inert gases are very similar. There exist the effects of a local reverse field and double-peak distributions of the charged-particle densities. The electron and ion energy distribution functions are also observed, and the discharge is interpreted in terms of ionization avalanche. Furthermore, the total current density is found to be a function of time but not of position.
NASA Astrophysics Data System (ADS)
Gratiy, Sergey L.; Walker, Andrew C.; Levin, Deborah A.; Goldstein, David B.; Varghese, Philip L.; Trafton, Laurence M.; Moore, Chris H.
2010-05-01
Conflicting observations regarding the dominance of either sublimation or volcanism as the source of the atmosphere on Io and disparate reports on the extent of its spatial distribution and the absolute column abundance invite the development of detailed computational models capable of improving our understanding of Io's unique atmospheric structure and origin. Improving upon previous models, Walker et al. (Walker, A.C., Gratiy, S.L., Levin, D.A., Goldstein, D.B., Varghese, P.L., Trafton, L.M., Moore, C.H., Stewart, B. [2009]. Icarus) developed a fully 3-D global rarefied gas dynamics model of Io's atmosphere including both sublimation and volcanic sources of SO2 gas. The fidelity of the model is tested by simulating remote observations at selected wavelength bands and comparing them to the corresponding astronomical observations of Io's atmosphere. The simulations are performed with a new 3-D spherical-shell radiative transfer code utilizing a backward Monte Carlo method. We present: (1) simulations of the mid-infrared disk-integrated spectra of Io's sunlit hemisphere at 19 μm, obtained with TEXES during 2001-2004; (2) simulations of disk-resolved images at Lyman-α obtained with the Hubble Space Telescope (HST) Space Telescope Imaging Spectrograph (STIS) during 1997-2001; and (3) disk-integrated simulations of emission line profiles in the millimeter wavelength range obtained with the IRAM-30 m telescope in October-November 1999. We found that the atmospheric model generally reproduces the longitudinal variation in band depth from the mid-infrared data; however, the best match is obtained when our simulation results are shifted ~30° toward lower orbital longitudes. The simulations of Lyman-α images do not reproduce the mid-to-high latitude bright patches seen in the observations, suggesting that the model atmosphere sustains columns that are too high at those latitudes. The simulations of emission line profiles in the millimeter spectral region support
Zourari, K.; Pantelis, E.; Moutsatsos, A.; Sakelliou, L.; Georgiou, E.; Karaiskos, P.; Papagiannis, P.
2013-01-15
Purpose: To compare TG43-based and Acuros deterministic radiation transport-based calculations of the BrachyVision treatment planning system (TPS) with corresponding Monte Carlo (MC) simulation results in heterogeneous patient geometries, in order to validate Acuros and quantify the accuracy improvement it marks relative to TG43. Methods: Dosimetric comparisons in the form of isodose lines, percentage dose difference maps, and dose volume histogram results were performed for two voxelized mathematical models resembling an esophageal and a breast brachytherapy patient, as well as an actual breast brachytherapy patient model. The mathematical models were converted to digital imaging and communications in medicine (DICOM) image series for input to the TPS. The MCNP5 v.1.40 general-purpose simulation code input files for each model were prepared using information derived from the corresponding DICOM RT exports from the TPS. Results: Comparisons of MC and TG43 results in all models showed significant differences, as reported previously in the literature and expected from the inability of the TG43-based algorithm to account for heterogeneities and model-specific scatter conditions. A close agreement was observed between MC and Acuros results in all models except for a limited number of points that lay in the penumbra of perfectly shaped structures in the esophageal model, or at distances very close to the catheters in all models. Conclusions: Acuros marks a significant dosimetry improvement relative to TG43. The assessment of the clinical significance of this accuracy improvement requires further work. Mathematical patient-equivalent models and models prepared from actual patient CT series are useful complementary tools in the methodology outlined in this series of works for the benchmarking of any advanced dose calculation algorithm beyond TG43.
NASA Astrophysics Data System (ADS)
Pilorget, C.; Vincendon, M.; Poulet, F.
2013-12-01
A new radiative transfer model to simulate light scattering in a compact granular medium using a Monte Carlo approach is presented. The physical and compositional properties of the sample can be specified at the grain scale, thus allowing simulation of different kinds of heterogeneities/mixtures within the sample. The radiative transfer is then calculated using a ray-tracing approach between the grains, and probabilistic physical parameters such as a single scattering albedo and a phase function at the grain level. The reflectance and the albedo can be computed at different scales and for different geometries, from the grain scale to that of the whole sample. The photometric behavior of the model is validated by comparing the bidirectional reflectance obtained for various media and geometries with that of semi-infinite multilayer models, and a few first applications are presented. This model will be used to refine our understanding of visible/NIR remote sensing data of planetary surfaces, as well as future measurements by hyperspectral microscopes, which may be able to resolve spatial compositional heterogeneities within a given sample.
Dong, Jing; Xiong, Wei; Chen, Yuancheng; Zhao, Yunfeng; Lu, Yang; Zhao, Di; Li, Wenyan; Liu, Yanhui; Chen, Xijing
2016-03-01
In this study, a population pharmacokinetic (PPK) model of biapenem in Chinese patients with lower respiratory tract infections (LRTIs) was developed and optimal dosage regimens based on Monte Carlo simulation were proposed. A total of 297 plasma samples from 124 Chinese patients were assayed chromatographically in a prospective, single-centre, open-label study, and pharmacokinetic parameters were analysed using NONMEM. Creatinine clearance (CLCr) was found to be the most significant covariate affecting drug clearance. The final PPK model was: CL (L/h)=9.89+(CLCr-66.56)×0.049; Vc (L)=13; Q (L/h)=8.74; and Vp (L)=4.09. Monte Carlo simulation indicated that for a target of ≥40% T>MIC (the fraction of time that the plasma level exceeds the causative pathogen's MIC), the biapenem pharmacokinetic/pharmacodynamic (PK/PD) breakpoint was 4μg/mL for doses of 0.3g every 6h (3-h infusion) and 1.2g (24-h continuous infusion). For a target of ≥80% T>MIC, the PK/PD breakpoint was 4μg/mL for a dose of 1.2g (24-h continuous infusion). The probability of target attainment (PTA) could not reach ≥90% at the usual biapenem dosage regimen (0.3g every 12h, 0.5-h infusion) when the MIC of the pathogenic bacteria was 4μg/mL, which would most likely result in unsatisfactory clinical outcomes in Chinese patients with LRTIs. Higher doses and longer infusion times would be appropriate for empirical therapy. When the patient's symptoms indicate a strong suspicion of Pseudomonas aeruginosa or Acinetobacter baumannii infection, combination therapy with other antibacterial agents may be more appropriate. PMID:26895604
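The probability-of-target-attainment logic can be sketched for the simplest case reported, the 24-h continuous infusion, where %T>MIC is 100% exactly when the steady-state concentration Css = R0/CL exceeds the MIC. The between-patient clearance standard deviation below is an assumed value for illustration, not a parameter from the study:

```python
import math
import random

def pta_continuous_infusion(daily_dose_mg=1200, cl_mean=9.89, cl_sd=2.0,
                            mic=4.0, n_patients=10000, seed=5):
    """Monte Carlo PTA for a 24-h continuous infusion: draw each virtual
    patient's clearance from a lognormal matched to the population mean
    and sd, then count how often Css = R0/CL meets or exceeds the MIC."""
    rng = random.Random(seed)
    r0 = daily_dose_mg / 24.0                  # mg/h infusion rate
    # lognormal parameters reproducing the given arithmetic mean and sd
    var = math.log(1.0 + (cl_sd / cl_mean) ** 2)
    mu = math.log(cl_mean) - var / 2.0
    hits = 0
    for _ in range(n_patients):
        cl = math.exp(rng.gauss(mu, math.sqrt(var)))  # clearance, L/h
        css = r0 / cl                                 # mg/L, i.e. µg/mL
        hits += css >= mic                            # 100% T>MIC achieved
    return hits / n_patients
```

At the published mean clearance of 9.89 L/h, a 1.2 g/24 h infusion gives Css ≈ 5 µg/mL, so at an MIC of 4 µg/mL the PTA is high but falls short of certainty once between-patient variability is included.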
Kameoka, S; Amako, K; Iwai, G; Murakami, K; Sasaki, T; Toshito, T; Yamashita, T; Aso, T; Kimura, A; Kanai, T; Kanematsu, N; Komori, M; Takei, Y; Yonai, S; Tashiro, M; Koikegami, H; Tomita, H; Koi, T
2008-07-01
We tested the ability of two separate nuclear reaction models, the binary cascade and JQMD (JAERI version of Quantum Molecular Dynamics), to predict the dose distribution in carbon-ion radiotherapy. This was done through a realistic simulation of the experimental irradiation of a water target. Comparison with measurement shows that the binary cascade model reproduces well the spread-out Bragg peak in depth-dose distributions in water irradiated with a 290 MeV/u (per nucleon) beam. However, it significantly overestimates the peak dose for a 400 MeV/u beam. JQMD underestimates the overall dose because it tends to break nuclei into lower-Z fragments than the binary cascade model does. As far as the shape of the dose distribution is concerned, JQMD shows fairly good agreement with measurement at both beam energies of 290 and 400 MeV/u, which favors JQMD over the binary cascade model for the calculation of the relative dose distribution in treatment planning. PMID:20821145
Neutron stimulated emission computed tomography: a Monte Carlo simulation approach.
Sharma, A C; Harrawood, B P; Bender, J E; Tourassi, G D; Kapadia, A J
2007-10-21
A Monte Carlo simulation has been developed for neutron stimulated emission computed tomography (NSECT) using the GEANT4 toolkit. NSECT is a new approach to biomedical imaging that allows spectral analysis of the elements present within the sample. In NSECT, a beam of high-energy neutrons interrogates a sample and the nuclei in the sample are stimulated to an excited state by inelastic scattering of the neutrons. The characteristic gammas emitted by the excited nuclei are captured in a spectrometer to form multi-energy spectra. Currently, a tomographic image is formed using a collimated neutron beam to define the line-integral paths for the tomographic projections. These projection data are reconstructed to form a representation of the distribution of individual elements in the sample. To facilitate the development of this technique, a Monte Carlo simulation model has been constructed from the GEANT4 toolkit. This simulation includes modeling of the neutron beam source and collimation, the samples, the neutron interactions within the samples, the emission of characteristic gammas, and the detection of these gammas in a germanium crystal. In addition, the model allows the absorbed radiation dose to be calculated for internal components of the sample. NSECT presents challenges not typically addressed in Monte Carlo modeling of high-energy physics applications. In order to address issues critical to the clinical development of NSECT, this paper will describe the GEANT4 simulation environment and three separate simulations performed to accomplish three specific aims. First, comparison of a simulation to a tomographic experiment will verify the accuracy of both the gamma energy spectra produced and the positioning of the beam relative to the sample. Second, parametric analysis of simulations performed with different user-defined variables will determine the best way to effectively model low-energy neutrons in tissue, which is a concern with the high hydrogen content in
Monte Carlo beam capture and charge breeding simulation
Kim, J.S.; Liu, C.; Edgell, D.H.; Pardo, R.
2006-03-15
A full six-dimensional (6D) phase-space Monte Carlo beam-capture and charge-breeding simulation code examines the capture of singly charged ion beams injected into an electron cyclotron resonance (ECR) charge breeder, from entry to exit. The code traces injected beam ions in an ECR ion source (ECRIS) plasma, including Coulomb collisions, ionization, and charge exchange. The background ECRIS plasma is modeled within the current framework of the generalized ECR ion source model. A simple sample case of an oxygen background plasma with an injected Ar1+ ion beam produces lower charge-breeding efficiencies than obtained experimentally. Possible reasons for the discrepancies are discussed.
Morphological evolution of growing crystals - A Monte Carlo simulation
NASA Technical Reports Server (NTRS)
Xiao, Rong-Fu; Alexander, J. Iwan D.; Rosenberger, Franz
1988-01-01
The combined effects of nutrient diffusion and surface kinetics on the crystal morphology were investigated using a Monte Carlo model to simulate the evolving morphology of a crystal growing from a two-component gaseous nutrient phase. The model combines nutrient diffusion, based on a modified diffusion-limited aggregation process, with anisotropic surface-attachment kinetics and surface diffusion. A variety of conditions, ranging from kinetic-controlled to diffusion-controlled growth, were examined. Successive transitions from compact faceted (dominant surface kinetics) to open dendritic morphologies (dominant volume diffusion) were obtained.
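The diffusion-limited end of this spectrum can be illustrated with a minimal on-lattice aggregation sketch in the spirit of the modified DLA process mentioned above; the anisotropic surface-attachment kinetics and surface diffusion that the actual model includes are deliberately omitted here, and all parameters are illustrative:

```python
import numpy as np

def grow_dla(n_sites=200, size=101, seed=1):
    """Minimal on-lattice diffusion-limited aggregation.

    Walkers are launched on a circle just outside the cluster and
    random-walk until they step into the cluster (then stick at their
    previous, empty site) or wander too far (then are discarded).
    Stops when the cluster, including the seed, holds n_sites sites.
    """
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=bool)
    c = size // 2
    grid[c, c] = True                          # seed crystal
    r_max = 1
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    stuck = 1
    while stuck < n_sites:
        ang = rng.uniform(0.0, 2.0 * np.pi)    # launch position
        x = c + int(round((r_max + 2) * np.cos(ang)))
        y = c + int(round((r_max + 2) * np.sin(ang)))
        while True:
            dx, dy = steps[rng.integers(4)]
            nx, ny = x + dx, y + dy
            if (not (0 < nx < size - 1 and 0 < ny < size - 1)
                    or (nx - c) ** 2 + (ny - c) ** 2 > (r_max + 10) ** 2):
                break                          # wandered off: relaunch
            if grid[nx, ny]:                   # hit the cluster: stick
                grid[x, y] = True
                stuck += 1
                r_max = max(r_max, int(np.hypot(x - c, y - c)) + 1)
                break
            x, y = nx, ny
    return grid

cluster = grow_dla()
print("cluster sites:", int(cluster.sum()))
```

Because every walker sticks with probability one on first contact, this sketch reproduces the open, branched morphology of the diffusion-dominated limit; attachment probabilities below one would shift growth toward the compact faceted, kinetics-dominated regime described above.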
Benchmarking of Proton Transport in Super Monte Carlo Simulation Program
NASA Astrophysics Data System (ADS)
Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican
2014-06-01
The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy range can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamical 3D datasets and geometry models is supported by RVIS, a nuclear radiation virtual simulation and assessment system. Continuous-energy cross-section data from the hybrid evaluated nuclear data library HENDL are utilized to support the simulations. Neutronics fixed-source and criticality calculations of design parameters for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in the former version of SuperMC. Recently, proton transport has also been integrated into SuperMC for energies up to 10 GeV. The physical processes considered for proton transport include electromagnetic and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes. In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a pre-equilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat the intermediate energy nuclear
NASA Astrophysics Data System (ADS)
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-01
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range while reducing the number of time loops as far as possible. Three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid a double loop over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processing with a single loop over all particles, while the mean time step between coagulation events is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially weighted PBMC simulations (based on the Markov jump model) is thereby reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the many cores of a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup (compared with CPU computation, the speedup of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range while reducing the number of time loops as far as possible. Three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid a double loop over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processing with a single loop over all particles, while the mean time step between coagulation events is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially weighted PBMC simulations (based on the Markov jump model) is thereby reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the many cores of a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup (compared with CPU computation, the speedup of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
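The single-loop majorant idea can be illustrated in a stripped-down, equal-weighted, single-cell form. The sketch below uses the additive kernel K(u, v) = u + v with majorant K_max = 2·max(v): candidate pairs are generated at the majorant rate and thinned by K/K_max, so no double loop over pairs is needed. The differential weighting, coagulation-rule matrix and GPU parallelism of the paper are not reproduced; with unit mass concentration the mean-field prediction N(t) ≈ N0·exp(−t) provides a check:

```python
import numpy as np

rng = np.random.default_rng(2)

def coagulate_additive(n0=2000, t_end=1.0):
    """Acceptance-rejection (thinning) MC for coagulation with the
    additive kernel K(u, v) = u + v.

    Candidate pairs are generated at the rate implied by the majorant
    K_max = 2 * max(v) and accepted with probability K/K_max, so only
    a single pass over particles (for the max) is needed per event
    instead of a double loop over all pairs.  The system volume is
    chosen so that the mass concentration is 1, for which mean-field
    theory predicts N(t) ~= N0 * exp(-t).
    """
    vol = float(n0)                # mass concentration M/V = 1
    v = np.ones(n0)                # monodisperse initial volumes
    t = 0.0
    while len(v) > 1:
        n = len(v)
        k_max = 2.0 * v.max()                        # majorant kernel
        rate_max = 0.5 * n * (n - 1) * k_max / vol   # majorant total rate
        t += rng.exponential(1.0 / rate_max)         # next candidate event
        if t > t_end:
            break
        i, j = rng.choice(n, size=2, replace=False)
        if rng.random() < (v[i] + v[j]) / k_max:     # thinning: accept?
            v[i] += v[j]                             # merge j into i
            v = np.delete(v, j)
    return len(v)

n_final = coagulate_additive()
print("particles left:", n_final, "(mean-field ~", round(2000 / np.e), ")")
```

Note that time advances at every candidate event whether or not it is accepted; that is what makes the thinning statistically exact for any kernel bounded by the majorant.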
Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds
NASA Astrophysics Data System (ADS)
Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.
2012-11-01
A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper-atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on the fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.
NASA Technical Reports Server (NTRS)
Combi, Michael R.
2004-01-01
In order to understand the global structure, dynamics, and physical and chemical processes occurring in the upper atmospheres, exospheres, and ionospheres of the Earth, the other planets, comets and planetary satellites and their interactions with their outer particles-and-fields environs, it is often necessary to address the fundamentally non-equilibrium aspects of the physical environment. These are regions where complex chemistry, energetics, and electromagnetic field influences are important. Traditional approaches are based largely on hydrodynamic or magnetohydrodynamic (MHD) formulations and are very important and highly useful. However, these methods often have limitations in rarefied physical regimes where the molecular collision rates and ion gyrofrequencies are small and where interactions with ionospheres and upper neutral atmospheres are important. At the University of Michigan we have an established base of experience and expertise in numerical simulations based on particle codes which address these physical regimes. The Principal Investigator, Dr. Michael Combi, has over 20 years of experience in the development of particle-kinetic and hybrid kinetic-hydrodynamic models and their direct use in data analysis. He has also worked in ground-based and space-based remote observational work and on spacecraft instrument teams. His research has involved studies of cometary atmospheres and ionospheres and their interaction with the solar wind, the neutral gas clouds escaping from Jupiter's moon Io, the interaction of the atmospheres/ionospheres of Io and Europa with Jupiter's corotating magnetosphere, as well as Earth's ionosphere. This report describes our progress during the year. The material contained in section 2 of this report will serve as the basis of a paper describing the method and its application to the cometary coma that will be continued under a research and analysis grant that supports various applications of theoretical comet models to understanding the
Lou, K; Mirkovic, D; Sun, X; Zhu, X; Poenisch, F; Grosshans, D; Shao, Y; Clark, J
2014-06-01
Purpose: To study the feasibility of intra-fraction proton beam-range verification with PET imaging. Methods: Two homogeneous cylindrical PMMA phantoms (290 mm axial length; 38 mm and 200 mm diameter, respectively) were studied using PET imaging: the small phantom with a mouse-sized PET scanner (61 mm diameter field of view (FOV)) and the larger phantom with a human-brain-sized PET scanner (300 mm FOV). Monte Carlo (MC) simulations (MCNPX and GATE) were used to model 179.2 MeV proton pencil beams irradiating the two phantoms and their imaging by the two PET systems. A total of 50 simulations were conducted to generate 50 positron activity distributions and, correspondingly, 50 measured activity ranges. The accuracy and precision of these activity ranges were calculated under different conditions (including count statistics and other factors, such as crystal cross-section). Separately from the MC simulations, an activity distribution measured from a simulated PET image was modeled as a noiseless positron activity distribution corrupted by Poisson counting noise. The results from these two approaches were compared to assess the impact of count statistics on the accuracy and precision of activity-range calculations. Results: MC simulations show that the accuracy and precision of an activity range are dominated by the number (N) of coincidence events in the reconstructed image. They improve in proportion to 1/sqrt(N), which can be understood from the statistical modeling. MC simulations also indicate that the coincidence events acquired within the first 60 seconds with 10⁹ protons (small phantom) and 10¹⁰ protons (large phantom) are sufficient to achieve both sub-millimeter accuracy and precision. Conclusion: Under the current MC simulation conditions, this initial study indicates that the accuracy and precision of beam-range verification are dominated by count statistics, and intra-fraction PET image-based beam-range verification is
Cheng, Feng; Chen, Zhao-Xu
2016-02-01
Pd/ZnO is a promising catalyst studied for methanol steam reforming (MSR) and the 1 : 1 PdZn alloy is demonstrated to be the active component. It is believed that MSR starts from methanol dehydrogenation to methoxy. Previous studies of methanol dehydrogenation on the ideal PdZn(111) surface show that methanol adsorbs weakly on the PdZn(111) surface and it is hard for methanol to transform into methoxy because of the high dehydrogenation barrier, indicating that the catalyst model is not appropriate for investigating the first step of MSR. Using the model derived from our recent kinetic Monte Carlo simulations, we examined the process CH3OH → CH3O → CH2O → CHO → CO. Compared with the ideal model, methanol adsorbs much more strongly and the barrier from CH3OH → CH3O is much lower on the kMC model. On the other hand, the C-H bond breaking of CH3O, CH2O and CHO becomes harder. We show that co-adsorbed water is important for refreshing the active sites. The present study shows that the first MSR step most likely takes place on three-fold hollow sites formed by Zn atoms, and the inhomogeneity of the PdZn alloy may exert significant influences on reactions. PMID:26771029
Technology Transfer Automated Retrieval System (TEKTRAN)
A general regression neural network and Monte Carlo simulation model for predicting survival and growth of Salmonella on raw chicken skin as a function of serotype (Typhimurium, Kentucky, Hadar), temperature (5 to 50C) and time (0 to 8 h) was developed. Poultry isolates of Salmonella with natural r...
Monte Carlo modeling and meteor showers
NASA Technical Reports Server (NTRS)
Kulikova, N. V.
1987-01-01
Prediction of short-lived increases in the cosmic dust influx, of the concentration in the lower thermosphere of atoms and ions of meteor origin, and determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest, and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well-known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.
Monte Carlo modeling and meteor showers
NASA Astrophysics Data System (ADS)
Kulikova, N. V.
1987-08-01
Prediction of short-lived increases in the cosmic dust influx, of the concentration in the lower thermosphere of atoms and ions of meteor origin, and determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest, and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well-known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.
Numerical simulations of acoustics problems using the direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Hanford, Amanda Danforth
In the current study, real gas effects in the propagation of sound waves are simulated using the direct simulation Monte Carlo method for a wide range of systems. This particle method allows for treatment of acoustic phenomena for a wide range of Knudsen numbers, defined as the ratio of molecular mean free path to wavelength. Continuum models such as the Euler and Navier-Stokes equations break down for flows greater than a Knudsen number of approximately 0.05. Continuum models also suffer from the inability to simultaneously model nonequilibrium conditions, diatomic or polyatomic molecules, nonlinearity and relaxation effects and are limited in their range of validity. Therefore, direct simulation Monte Carlo is capable of directly simulating acoustic waves with a level of detail not possible with continuum approaches. The basis of direct simulation Monte Carlo lies within kinetic theory where representative particles are followed as they move and collide with other particles. A parallel, object-oriented DSMC solver was developed for this problem. Despite excellent parallel efficiency, computation time is considerable. Monatomic gases, gases with internal energy, planetary environments, and amplitude effects spanning a large range of Knudsen number have all been modeled with the same method and compared to existing theory. With the direct simulation method, significant deviations from continuum predictions are observed for high Knudsen number flows.
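The continuum-breakdown criterion quoted above (Kn ≈ 0.05) is easy to make concrete. The sketch below estimates the hard-sphere mean free path of air and the resulting acoustic Knudsen number; the molecular diameter and ambient conditions are rough illustrative values:

```python
import math

def knudsen_number(freq_hz, pressure_pa=101_325.0, temp_k=293.0,
                   molec_diam_m=3.7e-10, sound_speed_m_s=343.0):
    """Acoustic Knudsen number: hard-sphere mean free path over wavelength.

    mean free path = k_B * T / (sqrt(2) * pi * d**2 * p); the molecular
    diameter used for air is a rough illustrative value.
    """
    k_b = 1.380649e-23                       # Boltzmann constant, J/K
    mfp = k_b * temp_k / (math.sqrt(2.0) * math.pi
                          * molec_diam_m ** 2 * pressure_pa)
    return mfp / (sound_speed_m_s / freq_hz)  # Kn = mfp / wavelength

for f in (1e3, 1e7, 1e9):
    kn = knudsen_number(f)
    regime = "continuum OK" if kn < 0.05 else "continuum breaks down"
    print(f"f = {f:.0e} Hz: Kn = {kn:.2e} -> {regime}")
```

At atmospheric pressure only very high frequencies (or, equivalently, low pressures) push Kn past the continuum limit, which is why DSMC is the natural tool for the rarefied regimes discussed above.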
McGrath, Matthew; Kuo, I-F W.; Ngouana, Brice F.; Ghogomu, Julius N.; Mundy, Christopher J.; Marenich, Aleksandr; Cramer, Christopher J.; Truhlar, Donald G.; Siepmann, Joern I.
2013-08-28
The free energy of solvation and dissociation of hydrogen chloride in water is calculated through a combined molecular simulation/quantum chemical approach at four temperatures between T = 300 and 450 K. The free energy is first decomposed into the sum of two components: the Gibbs free energy of transfer of molecular HCl from the vapor to the aqueous liquid phase and the standard-state free energy of acid dissociation of HCl in aqueous solution. The former quantity is calculated using Gibbs ensemble Monte Carlo simulations, with either Kohn-Sham density functional theory or a molecular mechanics force field determining the system's potential energy. The latter free-energy contribution is computed using a continuum solvation model utilizing either experimental reference data or micro-solvated clusters. The predicted combined solvation and dissociation free energies agree very well with available experimental data. CJM was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.
Kern, Christoph
2016-01-01
This report describes two software tools that, when used as front ends for the three-dimensional backward Monte Carlo atmospheric-radiative-transfer model (RTM) McArtim, facilitate the generation of lookup tables of volcanic-plume optical-transmittance characteristics in the ultraviolet/visible spectral region. In particular, the differential optical depth and derivatives thereof (that is, weighting functions) with regard to a change in SO2 column density or aerosol optical thickness can be simulated for a specific measurement geometry and a representative range of plume conditions. These tables are required for the retrieval of SO2 column density in volcanic plumes using the simulated radiative-transfer/differential optical-absorption spectroscopic (SRT-DOAS) approach outlined by Kern and others (2012). This report, together with the software tools published online, is intended to make this sophisticated SRT-DOAS technique available to volcanologists and gas geochemists in an operational environment, without the need for an in-depth treatment of the underlying principles or the low-level interface of the RTM McArtim.
NASA Astrophysics Data System (ADS)
Brolin, Gustav; Sjögreen Gleisner, Katarina; Ljungberg, Michael
2013-05-01
In dynamic renal scintigraphy, the main interest is the redistribution of the radiopharmaceutical as a function of time. Quality control (QC) of renal procedures often relies on phantom experiments to compare image-based results with the measurement setup. A phantom with a realistic anatomy and a time-varying activity distribution is therefore desirable. This work describes a pharmacokinetic (PK) compartment model for 99mTc-MAG3, used to define a dynamic whole-body activity distribution within a digital phantom (XCAT) for accurate Monte Carlo (MC)-based simulation of QC images. Each phantom structure is assigned a time-activity curve provided by the PK model, employing parameter values consistent with MAG3 pharmacokinetics. This approach ensures that the total amount of tracer in the phantom is preserved between time points, and it allows for modifications of the pharmacokinetics in a controlled fashion. By adjusting parameter values in the PK model, different clinically realistic scenarios can be mimicked regarding, e.g., the relative renal uptake and renal transit time. Using the MC code SIMIND, a complete set of renography images, including the effects of photon attenuation, scattering, limited spatial resolution and noise, is simulated. The obtained image data can be used to evaluate quantitative techniques and computer software in clinical renography.
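The conservation property described above, where transfers only move tracer between compartments so the total activity is preserved between time points, can be sketched with a toy linear chain (plasma → kidneys → bladder). The rate constants below are arbitrary illustrative values, not the fitted MAG3 parameters:

```python
import numpy as np

def renal_curves(k_pk=0.04, k_kb=0.02, dt=0.1, t_end=40.0):
    """Toy linear compartment chain: plasma -> kidneys -> bladder.

    First-order transfers (per-minute rate constants; arbitrary
    illustrative values, not fitted MAG3 parameters) move tracer
    between compartments, so total activity is conserved at every
    time point by construction.
    """
    n = int(t_end / dt) + 1
    a = np.zeros((n, 3))            # columns: plasma, kidneys, bladder
    a[0] = (1.0, 0.0, 0.0)          # unit bolus in plasma at t = 0
    for i in range(1, n):
        p, k, b = a[i - 1]
        f_pk = k_pk * p * dt        # plasma -> kidney transfer this step
        f_kb = k_kb * k * dt        # kidney -> bladder transfer this step
        a[i] = (p - f_pk, k + f_pk - f_kb, b + f_kb)
    return np.arange(n) * dt, a

t, act = renal_curves()
print(f"total activity: t=0 -> {act[0].sum():.6f}, "
      f"t=end -> {act[-1].sum():.6f}")
```

The kidney column rises and then falls, mimicking a renogram time-activity curve; changing the rate constants mimics altered uptake or transit time in the same controlled way the paper describes for its full PK model.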
Monte Carlo Simulation of Massive Absorbers for Cryogenic Calorimeters
Brandt, D.; Asai, M.; Brink, P.L.; Cabrera, B.; do Couto e Silva, E.; Kelsey, M.; Leman, S.W.; McArthy, K.; Resch, R.; Wright, D.; Figueroa-Feliciano, E.
2012-06-12
There is a growing interest in cryogenic calorimeters with macroscopic absorbers for applications such as dark matter direct detection and rare event search experiments. The physics of energy transport in calorimeters with absorber masses exceeding several grams is made complex by the anisotropic nature of the absorber crystals as well as the changing mean free paths as phonons decay to progressively lower energies. We present a Monte Carlo model capable of simulating anisotropic phonon transport in cryogenic crystals. We have initiated the validation process and discuss the level of agreement between our simulation and experimental results reported in the literature, focusing on heat pulse propagation in germanium. The simulation framework is implemented using Geant4, a toolkit originally developed for high-energy physics Monte Carlo simulations. Geant4 has also been used for nuclear and accelerator physics, and applications in medical and space sciences. We believe that our current work may open up new avenues for applications in material science and condensed matter physics.
Monte Carlo simulations of charge transport in heterogeneous organic semiconductors
NASA Astrophysics Data System (ADS)
Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta
2015-03-01
The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
Monte Carlo simulation of quantum Zeno effect in the brain
NASA Astrophysics Data System (ADS)
Georgiev, Danko
2015-12-01
Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.
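The decoherence-free limit of such a simulation can be sketched for a two-level system undergoing Rabi oscillations: each projective measurement finds the initial state with probability cos²(ωτ/2), and frequent measurement freezes the evolution. This toy version (illustrative parameters, no decoherence model) only reproduces the ideal Zeno effect, not the breakdown at timescales beyond the brain decoherence time reported above:

```python
import numpy as np

rng = np.random.default_rng(3)

def zeno_survival(n_meas, omega=1.0, t_total=np.pi, trials=20_000):
    """MC estimate of the probability that a Rabi-driven two-level
    system is found in its initial state by all n_meas equally spaced
    projective measurements over t_total.

    The per-interval survival probability is cos(omega*tau/2)**2; with
    t_total = pi/omega a single measurement finds the flipped state
    with certainty, while frequent measurement freezes the evolution.
    Decoherence is deliberately not modeled in this sketch.
    """
    tau = t_total / n_meas
    p_stay = np.cos(omega * tau / 2.0) ** 2
    # each trial: n_meas Bernoulli outcomes; survive iff all succeed
    outcomes = rng.random((trials, n_meas)) < p_stay
    return float(outcomes.all(axis=1).mean())

for n in (1, 4, 16, 64):
    print(f"{n:3d} measurements: survival = {zeno_survival(n):.3f}")
```

The survival probability climbs toward one as the measurement rate increases, which is exactly the freezing behavior that, per the abstract, environmental decoherence destroys at longer timescales.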
Monte Carlo simulation of scenario probability distributions
Glaser, R.
1996-10-23
Suppose a scenario of interest can be represented as a series of events. A final result R may then be viewed as the intersection of three events, A, B, and C. The probability of the result P(R) in this case is the product P(R) = P(A) P(B | A) P(C | A ∩ B). An expert may be reluctant to estimate P(R) as a whole yet agree to supply his notions of the component probabilities in the form of prior distributions. Each component prior distribution may be viewed as the stochastic characterization of the expert's uncertainty regarding the true value of the component probability. Mathematically, the component probabilities are treated as independent random variables and P(R) as their product; the induced prior distribution for P(R) is determined, which characterizes the expert's uncertainty regarding P(R). It may be both convenient and adequate to approximate the desired distribution by Monte Carlo simulation. Software has been written for this task that allows a variety of component priors that experts with good engineering judgment might feel comfortable with. The priors are mostly based on so-called likelihood classes. The software permits an expert to choose, for a given component event probability, one of six types of prior distributions, and the expert specifies the parameter value(s) for that prior. Each prior is unimodal. The expert essentially decides where the mode is, how the probability is distributed in the vicinity of the mode, and how rapidly it attenuates away. Limiting and degenerate applications allow the expert to be vague or precise.
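The induced prior for P(R) is straightforward to approximate by simulation: draw the three component probabilities independently from their priors and multiply. The Beta shapes below are illustrative stand-ins for the likelihood-class priors the software actually offers:

```python
import numpy as np

rng = np.random.default_rng(4)

# Component priors: illustrative Beta stand-ins for the likelihood-class
# priors described above.  P(A), P(B|A) and P(C|A and B) are sampled
# independently and multiplied to induce the prior on P(R).
n = 200_000
p_a = rng.beta(8, 2, n)    # expert fairly confident that A occurs
p_b = rng.beta(5, 5, n)    # symmetric uncertainty about B given A
p_c = rng.beta(2, 8, n)    # C considered unlikely
p_r = p_a * p_b * p_c      # induced samples of P(R)

print(f"induced P(R): mean = {p_r.mean():.3f}, "
      f"90% interval = [{np.percentile(p_r, 5):.3f}, "
      f"{np.percentile(p_r, 95):.3f}]")
```

Because the components are independent, the mean of the induced prior is the product of the component means (here 0.8 × 0.5 × 0.2 = 0.08), while the simulation also delivers the full shape of the distribution, which is what the expert actually needs.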
Monte Carlo Simulations for Spinodal Decomposition
NASA Astrophysics Data System (ADS)
Sander, Evelyn; Wanner, Thomas
1999-06-01
This paper addresses the phenomenon of spinodal decomposition for the Cahn-Hilliard equation. Namely, we are interested in why most solutions to the Cahn-Hilliard equation which start near a homogeneous equilibrium u_0 ≡ μ in the spinodal interval exhibit phase separation with a characteristic wavelength when exiting a ball of radius R in a Hilbert space centered at u_0. There are two mathematical explanations for spinodal decomposition, due to Grant and to Maier-Paape and Wanner. In this paper, we numerically compare these two mathematical approaches. In fact, we are able to synthesize the understanding we gain from our numerics with the approach of Maier-Paape and Wanner, leading to a better understanding of the underlying mechanism for this behavior. With this new approach, we can explain spinodal decomposition for a longer time and larger radius than either of the previous two approaches. A rigorous mathematical explanation is contained in a separate paper. Our approach is to use Monte Carlo simulations to examine the dependence of R, the radius to which spinodal decomposition occurs, as a function of the parameter ε of the governing equation. We give a description of the dominating regions on the surface of the ball by estimating certain densities of the distributions of the exit points. We observe, and can show rigorously, that the behavior of most solutions originating near the equilibrium is determined completely by the linearization for an unexpectedly long time. We explain the mechanism for this unexpectedly linear behavior, and show that for some exceptional solutions this cannot be observed. We also describe the dynamics of these exceptional solutions.
Papadimitroulas, P; Kostou, T; Kagadis, G; Loudos, G
2015-06-15
Purpose: The purpose of the present study was to quantify and evaluate the impact of cardiac and respiratory motion on clinical nuclear imaging protocols. Common SPECT and scintigraphic scans are studied using Monte Carlo (MC) simulations, comparing the resulting images with and without motion. Methods: Realistic simulations were executed using the GATE toolkit and the XCAT anthropomorphic phantom as a reference model for human anatomy. Three different radiopharmaceuticals based on 99mTc were studied, namely 99mTc-MDP, 99mTc-N-DBODC and 99mTc-DTPA-aerosol for bone, myocardium and lung scanning, respectively. The resolution of the phantom was set to 3.5 mm³. The impact of motion on spatial resolution was quantified using a sphere of 3.5 mm diameter and 10 separate time frames, in a model of the ECAM SPECT scanner. Finally, the impact of respiratory motion on resolution and on imaging of lung lesions was investigated. The MLEM algorithm was used for data reconstruction, while literature-derived biodistributions of the pharmaceuticals were used as activity maps in the simulations. Results: The FWHM was extracted for a static and a moving sphere located ∼23 cm from the entrance of the SPECT head; the difference in FWHM between the two simulations was 20%. Profiles in the thorax were compared in the case of bone scintigraphy, showing displacement and blurring of the bones when respiratory motion was inserted in the simulation. Large discrepancies were noticed in the case of myocardium imaging when cardiac motion was incorporated during the SPECT acquisition. Finally, the borders of the lungs are blurred when respiratory motion is included, resulting in a displacement of ∼2.5 cm. Conclusion: As we move to individualized imaging and therapy procedures, quantitative and qualitative imaging is of high importance in nuclear diagnosis. MC simulations combined with anthropomorphic digital phantoms can provide an accurate tool for applications like motion correction.
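The FWHM comparison above can be illustrated with a small sketch: measure the full width at half maximum of a static and a motion-blurred profile by interpolating the half-maximum crossings. The Gaussian profiles and widths below are hypothetical stand-ins, chosen so the blurred width comes out 20% larger, mirroring the reported difference.

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a single-peaked 1D profile,
    using linear interpolation of the half-maximum crossings."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]

    def cross(i_lo, i_hi):
        # linear interpolation between two samples straddling the half max
        x0, x1 = x[i_lo], x[i_hi]
        y0, y1 = profile[i_lo], profile[i_hi]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    return cross(i1, i1 + 1) - cross(i0 - 1, i0)

x = np.linspace(-30, 30, 601)                 # mm
static = np.exp(-x**2 / (2 * 4.0**2))         # hypothetical static point-source profile
moving = np.exp(-x**2 / (2 * 4.8**2))         # same source blurred by motion

print(fwhm(x, static), fwhm(x, moving))       # widths differ by 20%
```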
GATE Monte Carlo simulation in a cloud computing environment
NASA Astrophysics Data System (ADS)
Rowedder, Blake Austin
The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high-performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size, and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will become increasingly attractive.
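The inverse power model mentioned above can be fitted by ordinary least squares in log-log space. The runtimes below are illustrative numbers consistent with the reported 53-minute single-node run, not the study's measured data.

```python
import numpy as np

# Hypothetical runtimes (minutes) versus cluster size, following the
# inverse power trend reported for GATE on Amazon EC2 (53 min on 1 node).
nodes = np.array([1, 2, 4, 8, 16, 20])
runtime = np.array([53.0, 27.5, 14.2, 7.4, 3.9, 3.2])

# Fit T(n) = a * n**(-b) by linear least squares in log-log space.
slope, intercept = np.polyfit(np.log(nodes), np.log(runtime), 1)
a, b = np.exp(intercept), -slope

print(f"T(n) ≈ {a:.1f} * n^(-{b:.2f})")
print(f"predicted 20-node runtime: {a * 20**-b:.2f} min")
```

An exponent b slightly below 1 captures the sub-linear speedup that job-splitting and aggregation overheads typically impose.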
ERIC Educational Resources Information Center
Nylund, Karen L.; Asparouhov, Tihomir; Muthen, Bengt O.
2007-01-01
Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models' usefulness in practice, one unresolved issue in the application of mixture models is that there is not one commonly accepted statistical indicator for deciding on the number of classes in a study…
DETERMINING UNCERTAINTY IN PHYSICAL PARAMETER MEASUREMENTS BY MONTE CARLO SIMULATION
A statistical approach, often called Monte Carlo Simulation, has been used to examine propagation of error with measurement of several parameters important in predicting environmental transport of chemicals. These parameters are vapor pressure, water solubility, octanol-water par...
James Webb Space Telescope (JWST) Stationkeeping Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Dichmann, Donald J.; Alberding, Cassandra; Yu, Wayne
2014-01-01
The James Webb Space Telescope (JWST) will launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 11 years. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.
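A toy version of such a stationkeeping Monte Carlo, with a wholly assumed per-maneuver delta-V model: the maneuver cadence, magnitudes, and uncertainty below are illustrative placeholders, not JWST values. The point is the budgeting pattern: draw random per-maneuver costs, sum them per trial, and budget to a conservative percentile rather than the mean.

```python
import numpy as np

rng = np.random.default_rng(7)
N_TRIALS = 10_000

# Hypothetical cadence: one stationkeeping maneuver every 3 weeks for 11 years.
n_maneuvers = int(11 * 365.25 / 21)

# Assumed per-maneuver delta-V (m/s): a deterministic part plus a random
# contribution standing in for the unknown attitude/SRP variation between
# maneuvers. These numbers are illustrative, not JWST values.
base = 0.02
srp_sigma = 0.01

dv_totals = base * n_maneuvers + np.abs(
    rng.normal(0.0, srp_sigma, (N_TRIALS, n_maneuvers))).sum(axis=1)

# Budget to a conservative percentile rather than the mean.
print(f"mean total delta-V : {dv_totals.mean():.2f} m/s")
print(f"99th percentile    : {np.quantile(dv_totals, 0.99):.2f} m/s")
```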
Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Dichmann, Donald J.; Alberding, Cassandra M.; Yu, Wayne H.
2014-01-01
The James Webb Space Telescope (JWST) is scheduled to launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 10.5 years after a six-month transfer to the mission orbit. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.
NASA Astrophysics Data System (ADS)
Obot, I. B.; Kaya, Savaş; Kaya, Cemal; Tüzün, Burak
2016-06-01
DFT and Monte Carlo simulations were performed on three Schiff bases, namely 4-(4-bromophenyl)-N‧-(4-methoxybenzylidene)thiazole-2-carbohydrazide (BMTC), 4-(4-bromophenyl)-N‧-(2,4-dimethoxybenzylidene)thiazole-2-carbohydrazide (BDTC) and 4-(4-bromophenyl)-N‧-(4-hydroxybenzylidene)thiazole-2-carbohydrazide (BHTC), recently studied as corrosion inhibitors for steel in acid medium. Electronic parameters relevant to their inhibition activity, such as EHOMO, ELUMO, energy gap (ΔE), hardness (η), softness (σ), absolute electronegativity (χ), proton affinity (PA) and nucleophilicity (ω), were computed and discussed. Monte Carlo simulations were applied to search for the most stable configurations and adsorption energies for the interaction of the inhibitors with the Fe (110) surface. The theoretical data obtained are in most cases in agreement with experimental results.
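The electronic parameters listed above follow from the frontier orbital energies. A sketch using the standard conceptual-DFT definitions via Koopmans' theorem: the orbital energies below are placeholders (not the published values for BMTC/BDTC/BHTC), and ω is computed in the common χ²/2η form, which may differ from the paper's convention for nucleophilicity.

```python
# Conceptual-DFT reactivity descriptors from frontier orbital energies via
# Koopmans' theorem (I = -E_HOMO, A = -E_LUMO). The orbital energies used
# below are placeholders, not the published values for BMTC/BDTC/BHTC.
def reactivity(e_homo, e_lumo):  # energies in eV
    I, A = -e_homo, -e_lumo
    gap = e_lumo - e_homo          # energy gap ΔE
    chi = (I + A) / 2              # absolute electronegativity χ
    eta = (I - A) / 2              # global hardness η
    sigma = 1 / eta                # global softness σ
    omega = chi**2 / (2 * eta)     # ω index, chi^2 / (2 eta) form
    return {"gap": gap, "chi": chi, "eta": eta, "sigma": sigma, "omega": omega}

d = reactivity(e_homo=-5.8, e_lumo=-2.1)
print(d)
```

A smaller gap (softer molecule) is usually read as easier charge transfer to the metal surface and hence stronger inhibition.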
ERIC Educational Resources Information Center
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim
2016-01-01
The impact of misspecifying covariance matrices at the second and third levels of the three-level model is evaluated. Results indicate that ignoring existing covariance has no effect on the treatment effect estimate. In addition, the between-case variance estimates are unbiased when covariance is either modeled or ignored. If the research interest…
Utilizing Monte Carlo Simulations to Optimize Institutional Empiric Antipseudomonal Therapy
Tennant, Sarah J.; Burgess, Donna R.; Rybak, Jeffrey M.; Martin, Craig A.; Burgess, David S.
2015-01-01
Pseudomonas aeruginosa is a common pathogen implicated in nosocomial infections with increasing resistance to a limited arsenal of antibiotics. Monte Carlo simulation provides antimicrobial stewardship teams with an additional tool to guide empiric therapy. We modeled empiric therapies with antipseudomonal β-lactam antibiotic regimens to determine which were most likely to achieve a probability of target attainment (PTA) of ≥90%. Microbiological data for P. aeruginosa were reviewed for 2012. Antibiotics modeled for intermittent and prolonged infusion were aztreonam, cefepime, meropenem, and piperacillin/tazobactam. Using minimum inhibitory concentrations (MICs) from institution-specific isolates, and pharmacokinetic and pharmacodynamic parameters from previously published studies, a 10,000-subject Monte Carlo simulation was performed for each regimen to determine PTA. MICs from 272 isolates were included in this analysis. No intermittent infusion regimens achieved PTA ≥90%. Prolonged infusions of cefepime 2000 mg Q8 h, meropenem 1000 mg Q8 h, and meropenem 2000 mg Q8 h demonstrated PTA of 93%, 92%, and 100%, respectively. Prolonged infusions of piperacillin/tazobactam 4.5 g Q6 h and aztreonam 2 g Q8 h failed to achieve PTA ≥90% but demonstrated PTA of 81% and 73%, respectively. Standard doses of β-lactam antibiotics as intermittent infusion did not achieve 90% PTA against P. aeruginosa isolated at our institution; however, some prolonged infusions were able to achieve these targets. PMID:27025644
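A sketch of how such a probability-of-target-attainment simulation can be set up, assuming a one-compartment model with lognormal population parameters and a 40% fT>MIC target; all numbers below are illustrative stand-ins, not the study's fitted values or institutional MIC data.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Illustrative one-compartment Monte Carlo for probability of target
# attainment (PTA): 2000 mg every 8 h given as a prolonged 3-h infusion.
# Population PK parameters, the MIC, and the 40% fT>MIC target are
# stand-in assumptions, not the study's fitted values.
dose, tau, t_inf = 2000.0, 8.0, 3.0
mic, target = 2.0, 0.40

cl = rng.lognormal(np.log(10.0), 0.3, N)       # clearance (L/h)
v = rng.lognormal(np.log(25.0), 0.25, N)       # volume of distribution (L)
ke = cl / v                                    # elimination rate (1/h)

t = np.linspace(0.0, tau, 401)                 # one dosing interval

def conc_single(ts, k, cl_i):
    """Concentration (mg/L) from one infusion started at ts = 0."""
    rate = dose / t_inf
    c_end = rate / cl_i * (1.0 - np.exp(-k * t_inf))
    return np.where(ts < t_inf,
                    rate / cl_i * (1.0 - np.exp(-k * ts)),
                    c_end * np.exp(-k * (ts - t_inf)))

attained = 0
for i in range(N):
    # superpose past doses to approximate steady state
    c = sum(conc_single(t + j * tau, ke[i], cl[i]) for j in range(8))
    if np.mean(c > mic) >= target:             # fraction of interval above MIC
        attained += 1

print(f"PTA at MIC {mic} mg/L: {100.0 * attained / N:.1f}%")
```

Repeating the calculation over the institutional MIC distribution, weighted by isolate frequency, gives the cumulative fraction of response used to rank regimens.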
Monte Carlo simulation of the terrestrial hydrogen exosphere
Hodges, R.R. Jr.
1994-12-01
Methods for Monte Carlo simulation of planetary exospheres have evolved from early work on the lunar atmosphere, where the regolith surface provides a well-defined exobase. A major limitation of the successor simulations of the exospheres of Earth and Venus is the use of an exobase surface as an artifice to separate the collisional processes of the thermosphere from a collisionless exosphere. In this paper a new generalized approach to exosphere simulation is described, wherein the exobase is replaced by a barometric depletion of the major constituents of the thermosphere. Exospheric atoms in the thermosphere-exosphere transition region, and in the outer exosphere as well, travel in ballistic trajectories that are interrupted by collisions with the background gas, and by charge exchange interactions with ionospheric particles. The modified simulator has been applied to the terrestrial hydrogen exosphere problem, using velocity-dependent differential cross sections to provide statistically correct collisional scattering in H-O and H-H(+) interactions. Global models are presented for both solstice and equinox over the effective solar cycle range of the F10.7 index (80 to 230). Simulation results show significant differences with previous terrestrial exosphere models, as well as with the H distributions of the MSIS-86 thermosphere model.
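The replacement of a sharp exobase by a barometric depletion can be illustrated with a simple collision-probability estimate: for a background density n(z) = n₀ exp(−z/H), the chance that an upward-moving atom collides above altitude z₀ falls off continuously with altitude rather than switching off at an exobase surface. All parameter values below are illustrative, not the paper's model inputs.

```python
import numpy as np

# Collision probability for an atom moving radially upward through a
# background gas with barometric density n(z) = n0 * exp(-z/H).
# Illustrative numbers only (not the paper's model parameters).
n0 = 1e14        # background density at the reference altitude (m^-3)
H = 60e3         # scale height (m)
sigma = 2e-19    # collision cross section (m^2), assumed constant here

def collision_probability(z0):
    """P(at least one collision) above altitude z0, from the column density."""
    column = n0 * H * np.exp(-z0 / H)     # integral of n(z) dz from z0 to infinity
    return 1.0 - np.exp(-sigma * column)

for z0 in (0.0, 100e3, 300e3, 500e3):
    print(f"z0 = {z0 / 1e3:5.0f} km: P(collision) = {collision_probability(z0):.4f}")
```

In a Monte Carlo hop, this smooth probability (rather than a binary inside/outside-the-exobase test) decides whether a ballistic trajectory is interrupted.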
Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.
1995-12-31
A library of Monte Carlo subroutines has been developed for the design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described, and the programs are used to compare instruments at continuous-wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.
Monte Carlo simulation of the energy spectrum for CdZnTe Frisch grid detector
NASA Astrophysics Data System (ADS)
Xu, Zhaoli; Wang, Linjun; Min, Jiahua; Teng, Jianyong; Qin, Kaifeng; Hu, Dongni; Zhang, Jijun; Huang, Jian; Xia, Yiben
2009-07-01
In this paper, we use the Monte Carlo method to study the random generation and statistical behavior of the electron-hole pairs produced in a CdZnTe detector, according to the operating principle of the detector and the gamma-ray tracks within it. The EGSnrc software, based on the Monte Carlo method, is used to simulate the transport of the carriers; the statistical behavior is strongly reflected in the Monte Carlo results. First, we use Ansys software to create a model of the object for the Monte Carlo simulation, which is the basis for our further Monte Carlo work. In the Ansys simulation, a cylindrical block is created, and the electrical and thermal properties of the materials used in the simulation are assigned. Then, the charge collection efficiency is calculated through a statistical approach using the EGSnrc software. Furthermore, by considering the interaction mechanisms of CdZnTe with gamma rays, several modules are added to the Monte Carlo simulation. Finally, the pulse-height spectra in response to gamma rays are obtained from the simulation. A comparison between the simulated and measured results shows that the simulation is reliable. The Frisch grid detector collects the response more efficiently than devices with other structures.
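As an analytic cross-check often compared against such charge-transport simulations (it is not taken from this paper), the single-carrier Hecht relation gives the electron charge collection efficiency of a planar detector; a Frisch grid makes the induced signal predominantly electron-driven, which motivates the single-carrier form. The material values below are typical literature numbers, not the authors' parameters.

```python
import numpy as np

# Single-carrier (electron) charge collection efficiency for a planar
# CdZnTe detector via the Hecht relation, an analytic check for
# Monte Carlo charge-transport results. A Frisch grid makes the signal
# predominantly electron-induced, motivating the single-carrier form.
# Material numbers are typical literature values, not the paper's.
mu_tau_e = 3e-3     # electron mobility-lifetime product (cm^2/V)
d = 0.5             # detector thickness (cm)
V = 500.0           # bias voltage (V)

def cce(z):
    """Collection efficiency for an interaction at depth z (cm from cathode)."""
    lam = mu_tau_e * V / d        # electron drift length (cm)
    return (lam / d) * (1.0 - np.exp(-(d - z) / lam))

for z in np.linspace(0.0, 0.45, 10):
    print(f"z = {z:.2f} cm: CCE = {cce(z):.3f}")
```

Depth-dependent CCE of this kind is what broadens and skews the simulated pulse-height spectrum relative to the deposited-energy spectrum.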
Zhang, Deqiang; Konecny, Robert; Baker, Nathan A.; McCammon, J. Andrew
2008-01-01
Although many viruses have been crystallized and the protein capsid structures have been determined by X-ray crystallography, the nucleic acids often cannot be resolved. This is especially true for RNA viruses. The lack of information about the conformation of DNA/RNA greatly hinders our understanding of the assembly mechanism of various viruses. Here we combine a coarse-grain model and a Monte Carlo method to simulate the distribution of viral RNA inside the capsid of Cowpea Chlorotic Mottle Virus (CCMV). Our results show that there is very strong interaction between the N-terminal residues of the capsid proteins, which are highly positively charged, and the viral RNA. Without these residues, the binding energy disfavors the binding of RNA by the capsid. The RNA forms a shell close to the capsid with the highest densities associated with the capsid dimers. These high-density regions are connected to each other in the shape of a continuous net of triangles. The overall icosahedral shape of the net overlaps with the capsid subunit icosahedral organization. A medium density of RNA is found under the pentamers of the capsid. These findings are consistent with experimental observations. PMID:15386271
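A minimal Metropolis Monte Carlo sketch in the spirit of the coarse-grained approach above: a set of negatively charged "RNA" beads attracted to a fixed ring of positive "N-terminal" charges. The geometry, interaction form, and units are entirely illustrative, not the CCMV model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Metropolis Monte Carlo: negatively charged "RNA" beads
# attracted to a fixed ring of positive "N-terminal" anchor charges.
# Geometry, interaction, and units are toy choices, not the CCMV model.
n_beads, n_anchor = 30, 10
beads = rng.normal(0.0, 1.0, (n_beads, 3))          # mobile RNA beads
theta = np.linspace(0, 2 * np.pi, n_anchor, endpoint=False)
anchors = np.c_[3 * np.cos(theta), 3 * np.sin(theta), np.zeros(n_anchor)]

def energy(pos):
    """Screened Coulomb attraction between beads and the fixed anchors."""
    dist = np.linalg.norm(pos[:, None, :] - anchors[None, :, :], axis=-1)
    return np.sum(-np.exp(-dist) / np.maximum(dist, 0.1))

kT, step = 1.0, 0.3
e = energy(beads)
for _ in range(2000):
    i = rng.integers(n_beads)
    trial = beads.copy()
    trial[i] += rng.normal(0.0, step, 3)           # random single-bead move
    e_trial = energy(trial)
    # Metropolis acceptance: always accept downhill, sometimes uphill
    if e_trial < e or rng.random() < np.exp(-(e_trial - e) / kT):
        beads, e = trial, e_trial

print(f"final energy: {e:.2f}")
```

Accumulating bead positions over many accepted configurations yields the equilibrium density map analogous to the RNA shell described above.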
Validation of the Monte Carlo simulator GATE for indium-111 imaging.
Assié, K; Gardin, I; Véra, P; Buvat, I
2005-07-01
Monte Carlo simulations are useful for optimizing and assessing single photon emission computed tomography (SPECT) protocols, especially when aiming at measuring quantitative parameters from SPECT images. Before Monte Carlo simulated data can be trusted, the simulation model must be validated. The purpose of this work was to validate the use of GATE, a new Monte Carlo simulation platform based on GEANT4, for modelling indium-111 SPECT data, the quantification of which is of foremost importance for dosimetric studies. To that end, acquisitions of (111)In line sources in air and in water and of a cylindrical phantom were performed, together with the corresponding simulations. The simulation model included Monte Carlo modelling of the camera collimator and of a back-compartment accounting for photomultiplier tubes and associated electronics. Energy spectra, spatial resolution, sensitivity values, images and count profiles obtained for experimental and simulated data were compared. An excellent agreement was found between experimental and simulated energy spectra. For source-to-collimator distances varying from 0 to 20 cm, simulated and experimental spatial resolution differed by less than 2% in air, while the simulated sensitivity values were within 4% of the experimental values. The simulation of the cylindrical phantom closely reproduced the experimental data. These results suggest that GATE enables accurate simulation of (111)In SPECT acquisitions. PMID:15972984
NASA Astrophysics Data System (ADS)
Mumford, K. G.; Mustafa, N. A.; Gerhard, J.
2012-12-01
At many former industrial sites, nonaqueous phase liquid (NAPL) contamination presents a significant limitation to site closure and brownfield redevelopment. Achieving site closure means soil and/or groundwater remediation to a level at which the associated risk is reduced to an acceptable level. In some jurisdictions, this risk is evaluated at the site boundary even if the critical risk receptors are located in the surrounding community; the consequence may be a site left untreated because the remediation target is technically or economically impractical. The goal of this study was to explore the implications of assessing risk at the site boundary versus in the community and the factors that affect the differences between the two. Because the controlling risk pathway for many volatile organic compounds (VOCs) is the contamination of indoor air, risk assessment at the community scale requires simulation tools that can predict the transport of dissolved VOCs in groundwater followed by vapour intrusion into residential houses. Existing tools and research had focused on vapour intrusion only in the near vicinity of the source (i.e., scale of meters) and primarily at steady state. Therefore, this work developed a novel numerical simulator that coupled an established groundwater flow and contaminant transport model to a state-of-the-art vapor intrusion model, which enables the prediction of indoor air concentrations in response to an evolving groundwater plume at the community (i.e., kilometre) scale. In the first phase of this work, the extent of source zone remediation required to achieve regulatory compliance at the site boundary was compared to the extent required to achieve compliance at receptors in the community. The sensitivity of this difference to physicochemical properties of the contaminant and whether compliance was based on groundwater or indoor air risk receptors was evaluated. In the second phase of this work, the influence of heterogeneity on the
Interpolative modeling of GaAs FET S-parameter data bases for use in Monte Carlo simulations
NASA Technical Reports Server (NTRS)
Campbell, L.; Purviance, J.
1992-01-01
A statistical interpolation technique is presented for modeling GaAs FET S-parameter measurements for use in the statistical analysis and design of circuits. This is accomplished by interpolating among the measurements in a GaAs FET S-parameter data base in a statistically valid manner.
ERIC Educational Resources Information Center
Dai, Yunyun
2013-01-01
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
Monte Carlo simulations of nanoscale focused neon ion beam sputtering.
Timilsina, Rajendra; Rack, Philip D
2013-12-13
A Monte Carlo simulation is developed to model the physical sputtering of aluminum and tungsten emulating nanoscale focused helium and neon ion beam etching from the gas field ion microscope. Neon beams with different beam energies (0.5-30 keV) and a constant beam diameter (Gaussian with full-width-at-half-maximum of 1 nm) were simulated to elucidate the nanostructure evolution during the physical sputtering of nanoscale high aspect ratio features. The aspect ratio and sputter yield vary with the ion species and beam energy for a constant beam diameter and are related to the distribution of the nuclear energy loss. Neon ions have a larger sputter yield than the helium ions due to their larger mass and consequently larger nuclear energy loss relative to helium. Quantitative information such as the sputtering yields, the energy-dependent aspect ratios and resolution-limiting effects are discussed. PMID:24231648
NASA Astrophysics Data System (ADS)
Guan, Fada
The Monte Carlo method has been successfully applied to particle transport problems. Most Monte Carlo simulation tools are static: they can only perform simulations for problems with fixed physics and geometry settings. Proton therapy, however, is a dynamic treatment technique in clinical application. In this research, we developed a method to perform dynamic Monte Carlo simulation of proton therapy using the Geant4 simulation toolkit. A passive-scattering treatment nozzle equipped with a rotating range modulation wheel was modeled. One important application of Monte Carlo simulation is to predict the spatial dose distribution in the target geometry. For simplicity, a mathematical model of a human body is usually used as the target, but then only the average dose over a whole organ or tissue can be obtained, rather than an accurate spatial dose distribution. In this research, we developed a method using MATLAB to convert the medical images of a patient from CT scanning into a patient voxel geometry. Hence, if the patient voxel geometry is used as the target in the Monte Carlo simulation, the accurate spatial dose distribution in the target can be obtained. The data analysis tool ROOT was used to score the simulation results during a Geant4 simulation and to analyze and plot the data afterwards. Finally, we successfully obtained the accurate spatial dose distribution in part of a human body for a patient with prostate cancer treated with proton therapy.
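The CT-to-voxel conversion step described above (done there in MATLAB) can be sketched in Python: bin each voxel's Hounsfield value into a material index for the simulation geometry. The HU thresholds and the material list below are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

# Sketch of converting a CT image (Hounsfield units) into a voxel material
# grid for a Monte Carlo geometry, in the spirit of the CT-to-voxel step
# described above. The HU thresholds and material list are illustrative.
materials = ["air", "lung", "soft_tissue", "bone"]
hu_edges = [-1000, -850, -200, 150, 3000]   # HU bin boundaries

def hu_to_material_ids(ct):
    """Map each voxel's HU value to a material index via binning."""
    return np.digitize(ct, hu_edges[1:-1])  # indices 0..len(materials)-1

ct = np.array([[-1000, -900, -50],
               [   30,  400, 1200]])        # toy 2x3 CT slice
ids = hu_to_material_ids(ct)
print(ids)
```

In practice a density is also assigned per voxel (e.g., from a HU-to-density ramp) so that the transport code sees both composition and mass density.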
Monte Carlo simulation of a cobalt-60 beam
Han, K.; Ballon, D.; Chui, C.; Mohan, R.
1987-05-01
We have used the Stanford Electron Gamma Shower (EGS) Monte Carlo code to compute photon spectra from an AECL Theratron 780 cobalt-60 unit. Particular attention has been paid to careful modeling of the geometry and material construction of the cobalt-60 source capsule, source housing, and collimator assembly. From our simulation, we conclude that the observed increase in machine output with increasing field size is caused by photons scattered from the primary definer and the adjustable collimator. We have also used the generated photon spectra as input to a pencil beam model to calculate tissue-air ratios in water, and compared the results to a model that uses a monochromatic photon energy of 1.25 MeV.
NASA Astrophysics Data System (ADS)
Tyagi, N.; Curran, B. H.; Roberson, P. L.; Moran, J. M.; Acosta, E.; Fraass, B. A.
2008-02-01
IMRT often requires delivering small fields which may suffer from electronic disequilibrium effects. The presence of heterogeneities, particularly low-density tissues in patients, complicates such situations. In this study, we report on verification of the DPM MC code for IMRT treatment planning in heterogeneous media, using a previously developed model of the Varian 120-leaf MLC. The purpose of this study is twofold: (a) design a comprehensive list of experiments in heterogeneous media for verification of any dose calculation algorithm and (b) verify our MLC model in heterogeneous geometries that mimic an actual patient geometry for IMRT treatment. The measurements have been done using an IMRT head and neck phantom (CIRS phantom) and slab phantom geometries. Verification of the MLC model has been carried out using point doses measured with an A14 slim line (SL) ion chamber inside a tissue-equivalent and a bone-equivalent material using the CIRS phantom. Planar doses using lung- and bone-equivalent slabs have been measured and compared using EDR films (Kodak, Rochester, NY).
Absolute dose calculations for Monte Carlo simulations of radiotherapy beams.
Popescu, I A; Shaw, C P; Zavgorodni, S F; Beckham, W A
2005-07-21
Monte Carlo (MC) simulations have traditionally been used for single field relative comparisons with experimental data or commercial treatment planning systems (TPS). However, clinical treatment plans commonly involve more than one field. Since the contribution of each field must be accurately quantified, multiple field MC simulations are only possible by employing absolute dosimetry. Therefore, we have developed a rigorous calibration method that allows the incorporation of monitor units (MU) in MC simulations. This absolute dosimetry formalism can be easily implemented by any BEAMnrc/DOSXYZnrc user, and applies to any configuration of open and blocked fields, including intensity-modulated radiation therapy (IMRT) plans. Our approach involves the relationship between the dose scored in the monitor ionization chamber of a radiotherapy linear accelerator (linac), the number of initial particles incident on the target, and the field size. We found that for a 10 × 10 cm² field of a 6 MV photon beam, 1 MU corresponds, in our model, to 8.129 × 10¹³ ± 1.0% electrons incident on the target and a total dose of 20.87 cGy ± 1.0% in the monitor chambers of the virtual linac. We present an extensive experimental verification of our MC results for open and intensity-modulated fields, including a dynamic 7-field IMRT plan simulated on the CT data sets of a cylindrical phantom and of a Rando anthropomorphic phantom, which were validated by measurements using ionization chambers and thermoluminescent dosimeters (TLD). Our simulation results are in excellent agreement with experiment, with percentage differences of less than 2%, in general, demonstrating the accuracy of our Monte Carlo absolute dose calculations. PMID:16177516
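The calibration above turns a simulated dose-per-incident-particle into absolute dose. A sketch using the paper's figure of 8.129 × 10¹³ electrons per MU; the dose-per-particle value in the example is a placeholder, not a published number.

```python
# Converting a Monte Carlo dose-per-source-particle into absolute dose,
# using the paper's calibration that 1 MU corresponds to 8.129e13 electrons
# incident on the target (10 x 10 cm^2 field, 6 MV beam). The simulated
# dose-per-particle value below is a placeholder for illustration.
ELECTRONS_PER_MU = 8.129e13

def absolute_dose_cgy(dose_per_particle_cgy, monitor_units):
    """Scale a DOSXYZnrc-style dose per incident particle to absolute dose."""
    return dose_per_particle_cgy * ELECTRONS_PER_MU * monitor_units

# e.g. a voxel scoring 2.5e-14 cGy per incident electron, plan delivers 100 MU
d = absolute_dose_cgy(2.5e-14, 100)
print(f"{d:.2f} cGy")
```

Because each field's MU is known, doses from multiple fields scaled this way can simply be summed to give the absolute plan dose.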
NASA Astrophysics Data System (ADS)
Chan, C. H.; Rikvold, P. A.
2015-01-01
The Ziff-Gulari-Barshad (ZGB) model, a simplified description of the oxidation of carbon monoxide (CO) on a catalyst surface, is widely used to study properties of nonequilibrium phase transitions. In particular, it exhibits a nonequilibrium, discontinuous transition between a reactive and a CO-poisoned phase. If one allows a nonzero rate of CO desorption (k), the line of phase transitions terminates at a critical point (kc). In this work, instead of restricting the CO and atomic oxygen (O) to react to form carbon dioxide (CO2) only when they are adsorbed in close proximity, we consider a modified model that includes an adjustable probability for adsorbed CO and O atoms located far apart on the lattice to react. We employ large-scale Monte Carlo simulations for system sizes up to 240 × 240 lattice sites, using the crossing of fourth-order cumulants to study the critical properties of this system. We find that the nonequilibrium critical point changes from the two-dimensional Ising universality class to the mean-field universality class upon introducing even a weak long-range reactivity mechanism. This conclusion is supported by measurements of cumulant fixed-point values, cluster percolation probabilities, correlation-length finite-size scaling properties, and the critical exponent ratio β/ν. The observed behavior is consistent with that of the equilibrium Ising ferromagnet with additional weak long-range interactions [T. Nakada, P. A. Rikvold, T. Mori, M. Nishino, and S. Miyashita, Phys. Rev. B 84, 054433 (2011), 10.1103/PhysRevB.84.054433]. The large system sizes and the use of fourth-order cumulants also enable determination with improved accuracy of the critical point of the original ZGB model with CO desorption.
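The fourth-order cumulant analysis used above has a compact form. A minimal sketch of the Binder cumulant U4 = 1 − ⟨m⁴⟩/(3⟨m²⟩²) for a sampled order parameter (the function name and sample data are illustrative; the study itself uses CO coverage on the lattice):

```python
import numpy as np

def binder_cumulant(m):
    """Fourth-order (Binder) cumulant U4 = 1 - <m^4> / (3 <m^2>^2) of a
    sampled order parameter; curves for different lattice sizes cross
    near the critical point."""
    m = np.asarray(m, dtype=float)
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2)**2)
```

For an ordered phase with two symmetric delta-like peaks U4 approaches 2/3, while for a Gaussian-distributed disordered phase it approaches 0; the nontrivial fixed-point value at the crossing is one of the quantities that distinguishes Ising from mean-field universality.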
Development of a Space Radiation Monte Carlo Computer Simulation
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence S.
1997-01-01
The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially missions that will not have the shielding protection of the Earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft, either for specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.
NASA Astrophysics Data System (ADS)
Dunne, Lawrence J.; Manos, George; Rekabi, Mahdi
2009-01-01
Adsorption of xenon in carbon nanotubes has been investigated by Kuznetsova et al. [A. Kuznetsova, J.T. Yates Jr., J. Liu, R.E. Smalley, J. Chem. Phys. 112 (2000) 9590] and Simonyan et al. [V. Simonyan, J.K. Johnson, A. Kuznetsova, J.T. Yates Jr., J. Chem. Phys. 114 (2001) 4180], whose endohedral adsorption isotherms show a step-like structure. A matrix method is used to calculate the statistical mechanics of a lattice model of xenon endohedral adsorption that reproduces the isotherm structure, while exohedral adsorption is treated by mean-field theory.
Monte Carlo simulation of zinc protoporphyrin fluorescence in the retina
NASA Astrophysics Data System (ADS)
Chen, Xiaoyan; Lane, Stephen
2010-02-01
We have used Monte Carlo simulation of autofluorescence in the retina to determine that noninvasive detection of nutritional iron deficiency is possible. Nutritional iron deficiency (which leads to iron deficiency anemia) affects more than 2 billion people worldwide, and there is an urgent need for a simple, noninvasive diagnostic test. Zinc protoporphyrin (ZPP) is a fluorescent compound that accumulates in red blood cells and is used as a biomarker for nutritional iron deficiency. We developed a computational model of the eye, using parameters identified either by literature search or by direct experimental measurement, to test the possibility of detecting ZPP noninvasively in the retina. By incorporating fluorescence into Steven Jacques' original code for multi-layered tissue, we performed Monte Carlo simulation of fluorescence in the retina and determined that if the beam is not focused on a blood vessel in the neural retina layer, or only partially illuminates a vessel, the ZPP fluorescence will be 10-200 times higher than the background lipofuscin fluorescence from the retinal pigment epithelium (RPE) layer directly below. In addition, we found that if the light can be focused entirely onto a blood vessel in the neural retina layer, the fluorescence signal comes only from ZPP; in this case the fluorescence from the layers below does not contribute to the signal. Therefore, the prospects for building a device to detect ZPP fluorescence in the retina noninvasively look very promising.
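Multi-layered tissue Monte Carlo codes of the kind adapted here track photon packets through exponentially distributed step lengths with implicit capture. A toy single-slab sketch under those assumptions (isotropic scattering, no refractive index mismatch; the names, optical coefficients and termination threshold are illustrative, not taken from Jacques' MCML code):

```python
import math
import random

def absorbed_fraction(mu_a, mu_s, thickness, n_photons=2000, seed=42):
    """Fraction of incident photon weight absorbed in a single slab.
    Implicit capture: each interaction deposits w * mu_a/mu_t and the
    photon rescatters isotropically with the remaining weight."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    total = 0.0
    for _ in range(n_photons):
        z, cos_t, w = 0.0, 1.0, 1.0        # start at the surface, heading inward
        while w > 1e-4:
            step = -math.log(rng.random()) / mu_t   # exponential free path
            z += step * cos_t
            if z < 0.0 or z > thickness:    # photon escaped the slab
                break
            total += w * mu_a / mu_t        # deposit the absorbed share
            w *= mu_s / mu_t                # surviving weight
            cos_t = 2.0 * rng.random() - 1.0  # isotropic rescatter
    return total / n_photons
```

A fluorescence extension of this scheme, as in the study, would convert a portion of the absorbed weight at each interaction into re-emitted photons launched at the fluorophore's emission wavelength.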
NASA Astrophysics Data System (ADS)
Zhang, Yang; Heermann, Dieter; Farmer, Barry; Pandey, Ras
2011-03-01
A coarse-grained model is used to study the self-assembly of active sites in a DNA (chromatin) chain. The chromosome is described by a bond-fluctuating chain of two types of nodes, A (interacting) and B (non-interacting), distributed randomly with concentrations C and 1 - C, respectively. Active nodes interact via a Lennard-Jones (LJ) potential and execute their stochastic motion under the Metropolis algorithm. The depth of the LJ potential (f), a measure of the interaction strength, and the concentration (C) of the active sites are varied. A number of local and global physical quantities are studied, such as the mobility (Mn) profile of each node, the local structural profile, the root mean square (RMS) displacement (R), the radius of gyration (Rg), and the structure factor S(q). We find that the chain segments assemble into a microphase of blobs, which requires a higher concentration of active sites when the interaction is weaker. These findings are consistent with those of a dynamic loop model of chromatin on the global (large) scale but differ at small scales. This work is supported in part by the Alexander von Humboldt foundation and AFRL.
Monte Carlo modelling of TRIGA research reactor
NASA Astrophysics Data System (ADS)
El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.
2010-10-01
The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with essentially no physical approximation. Continuous energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3), as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code, were used. The cross-section libraries were generated using the NJOY99 system updated to its most recent patch file, "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors, were used in the validation process. Results of the calculations are analysed and discussed.
Praveen, E. Satyanarayana, S. V. M.
2014-04-24
The traditional definition of a phase transition involves an infinitely large system in the thermodynamic limit. Finite systems such as biological proteins exhibit cooperative behavior similar to phase transitions. We employ a recently discovered analysis of inflection points of the microcanonical entropy to estimate the transition temperature of the phase transition in the q-state Potts model on a finite two-dimensional square lattice for q=3 (second order) and q=8 (first order). The difference of the energy density of states (DOS), Δ ln g(E) = ln g(E + ΔE) − ln g(E), exhibits a point of inflection at a value corresponding to the inverse transition temperature. This feature is common to systems exhibiting both first- as well as second-order transitions. While the difference of the DOS registers a monotonic variation around the point of inflection for systems exhibiting a second-order transition, it has an S-shape with a minimum and a maximum around the point of inflection in the case of a first-order transition.
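The inflection-point analysis above can be reproduced numerically from any tabulated density of states. A sketch on a synthetic entropy curve with a known inflection (the toy ln g(E), grid, and function name are illustrative, not Potts-model data):

```python
import numpy as np

def inflection_beta(E, lnG):
    """Locate the inflection point of the DOS difference b(E) ~ d lnG/dE
    (approximated by forward differences) and return (E*, beta) there;
    beta estimates the inverse transition temperature."""
    b = np.diff(lnG) / np.diff(E)      # discrete analogue of Delta ln g(E)
    Em = 0.5 * (E[:-1] + E[1:])        # midpoints where b is defined
    slope = np.gradient(b, Em)         # db/dE
    i = np.argmax(np.abs(slope))       # steepest point of b = inflection
    return Em[i], b[i]

# Toy entropy with a known inflection at E = 2, where beta = 0.5:
E = np.linspace(0.0, 4.0, 401)
lnG = np.log(np.cosh(E - 2.0)) + 0.5 * E
Estar, beta = inflection_beta(E, lnG)
```

For real Wang-Landau or multicanonical output, the same routine applies directly to the estimated ln g(E); the monotonic-versus-S-shaped behavior of b(E) around E* then distinguishes second- from first-order transitions.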
Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model
D.P. Stotler
2005-06-09
The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.
Li, Junli; Li, Chunyan; Qiu, Rui; Yan, Congchong; Xie, Wenzhang; Wu, Zhen; Zeng, Zhi; Tung, Chuanjong
2015-09-01
The method of Monte Carlo simulation is a powerful tool to investigate the details of radiation biological damage at the molecular level. In this paper, a Monte Carlo code called NASIC (Nanodosimetry Monte Carlo Simulation Code) was developed. It includes a physical module, a pre-chemical module, a chemical module, a geometric module and a DNA damage module. The physical module can simulate the physical tracks of low-energy electrons in liquid water event by event. More than one set of inelastic cross sections was calculated by applying the dielectric function method of Emfietzoglou's optical-data treatments, with different optical data sets and dispersion models. In the pre-chemical module, the ionised and excited water molecules undergo dissociation processes. In the chemical module, the produced radiolytic chemical species diffuse and react. In the geometric module, an atomic model of 46 chromatin fibres in a spherical nucleus of a human lymphocyte was established. In the DNA damage module, the direct damage induced by the energy depositions of the electrons and the indirect damage induced by the radiolytic chemical species were calculated. The parameters were adjusted to bring the simulation results into agreement with the experimental results. In this paper, the influence of the inelastic cross sections and the vibrational excitation reaction on the parameters and on the DNA strand break yields was studied. Further work on NASIC is underway. PMID: 25883312
Monte Carlo simulations of solid-state photoswitches
Rambo, P.W.; Denavit, J.
1995-09-01
Large increases in conductivity induced in GaAs and other semiconductors by photoionization allow fast switching by laser light, with applications to pulse-power technology and microwave generation. Experiments have shown that under high-field conditions (10 to 50 kV/cm), conductivity may occur either in the linear mode, where it is proportional to the absorbed light; in the "lock-on" mode, where it persists after termination of the laser pulse; or in the avalanche mode, where multiple carriers are generated. We have assembled a self-consistent Monte Carlo code to study these phenomena and in particular to model hot-electron effects, which are expected to be important at high field strengths. This project has also brought our expertise in advanced particle simulation of plasmas to bear on the modeling of semiconductor devices, which has broad industrial applications.
Monte Carlo simulation of helical tomotherapy with PENELOPE
NASA Astrophysics Data System (ADS)
Sterpin, E.; Salvat, F.; Cravens, R.; Ruchala, K.; Olivera, G. H.; Vynckier, S.
2008-04-01
Helical tomotherapy (HT) delivers intensity-modulated radiation therapy (IMRT) using the simultaneous movement of the couch, the gantry and the binary multileaf collimator (MLC), a procedure that differs from conventional dynamic or step-and-shoot IMRT. A Monte Carlo (MC) simulation of HT in the helical mode therefore requires a new approach. Using validated phase-space files (PSFs) obtained through the MC simulation of the static mode with PENELOPE, an analytical model of the binary MLC, called the 'transfer function' (TF), was first devised to perform the transport of particles through the MLC much faster than time-consuming MC simulation and with no significant loss of accuracy. Second, a new tool, called TomoPen, was designed to simulate the helical mode by rotating and translating the initial coordinates and directions of the particles in the PSF according to the instantaneous position of the machine, transporting the particles through the MLC (in the instantaneous configuration defined by the sinogram), and computing the dose distribution in the CT structure using PENELOPE. Good agreement with measurements and with the treatment planning system of tomotherapy was obtained, with deviations generally well within 2%/1 mm, for the simulation of the helical mode for two commissioning procedures and a clinical plan calculated and measured in homogeneous conditions.
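The PSF reuse described above hinges on rotating each particle's position and direction cosines to the instantaneous gantry angle and translating it with the couch. A geometric sketch under simplified assumptions (rotation about the couch axis only; the names and coordinate convention are illustrative, not TomoPen's actual transform):

```python
import math

def transform_particle(x, y, z, u, v, w, gantry_rad, couch_z):
    """Rotate a phase-space particle about the couch (z) axis by the
    instantaneous gantry angle and shift it by the couch position, as in
    a TomoPen-style reuse of a static-mode PSF (illustrative geometry)."""
    c, s = math.cos(gantry_rad), math.sin(gantry_rad)
    xr, yr = c * x - s * y, s * x + c * y   # rotate position
    ur, vr = c * u - s * v, s * u + c * v   # rotate direction cosines
    return xr, yr, z + couch_z, ur, vr, w
```

In a helical simulation this transform would be applied to every PSF particle using the gantry angle and couch position read from the sinogram for the current projection, before transport through the binary MLC.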
Simulation of ultracold plasmas using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Vrinceanu, D.; Balaraman, G. S.
2010-03-01
After creation of the ultracold plasma, the system is far from equilibrium. The electrons equilibrate among themselves and reach local thermal equilibrium on a time scale of a few nanoseconds. The ions, on the other hand, expand radially due to the thermal pressure exerted by the electrons, on a much slower time scale (microseconds). Molecular dynamics simulation can be used to study the expansion and equilibration of ultracold plasmas; however, a full microsecond simulation is computationally exorbitant. We propose a novel method, based on the Monte Carlo method, for simulating the long-timescale dynamics of a spherically symmetric ultracold plasma cloud [1]. Results from our method for the expansion of the ion plasma size and the electron density distributions show good agreement with molecular dynamics simulations. Our results for the collisionless plasma are in good agreement with the Vlasov equation. Our method is very computationally efficient, taking a few minutes on a desktop to simulate tens of nanoseconds of dynamics of millions of particles. [1] D. Vrinceanu, G. S. Balaraman and L. Collins, "The King model for electrons in a finite-size ultracold plasma," J. Phys. A 41, 425501 (2008)
Kinetic Monte Carlo Simulations of Void Lattice Formation During Irradiation
Heinisch, Howard L.; Singh, Bachu N.
2003-12-01
Within the last decade molecular dynamics simulations of displacement cascades have revealed that glissile clusters of self-interstitial crowdions are formed directly in cascades and that they migrate one-dimensionally along close-packed directions with extremely low activation energies. Occasionally, under various conditions, a crowdion cluster can change its Burgers vector and glide along a different close-packed direction. The recently developed Production Bias Model (PBM) of microstructure evolution under irradiation has been structured to specifically take into account the unique properties of the vacancy and interstitial clusters produced in the cascades. Atomic-scale kinetic Monte Carlo (KMC) simulations have played a useful role in understanding the defect reaction kinetics of one-dimensionally migrating crowdion clusters as a function of the frequency of direction changes. This has made it possible to incorporate the migration properties of crowdion clusters and changes in reaction kinetics into the PBM. In the present paper we utilize similar KMC simulations to investigate the significant role crowdion clusters can play in the formation and stability of void lattices. The creation of stable void lattices, starting from a random distribution of voids, is simulated by a KMC model in which vacancies migrate three-dimensionally and SIA clusters migrate one-dimensionally, interrupted by directional changes. The necessity of both one-dimensional migration and Burgers vectors changes of SIA clusters for the production of stable void lattices is demonstrated, and the effects of the frequency of Burgers vector changes are described.
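The one-dimensional SIA cluster glide with occasional Burgers vector changes described above can be illustrated for a single crowdion cluster. A toy sketch, using simple cubic axes in place of true close-packed directions (the names and per-step change probability are illustrative, not the KMC model's actual parameters):

```python
import random

def crowdion_walk(n_steps, p_change, seed=0):
    """1D glide along the cluster's current axis, with probability
    p_change per step of switching to a new glide direction (a Burgers
    vector change). Returns the final position in a toy cubic geometry."""
    rng = random.Random(seed)
    dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    d = rng.choice(dirs)
    pos = [0, 0, 0]
    for _ in range(n_steps):
        if rng.random() < p_change:
            d = rng.choice(dirs)    # Burgers vector change
        for k in range(3):
            pos[k] += d[k]          # glide one step along current axis
    return tuple(pos)
```

With p_change = 0 the motion is strictly one-dimensional; increasing p_change interpolates toward three-dimensional random-walk kinetics, which is the knob that controls the defect reaction kinetics in the void-lattice simulations.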
Monte Carlo simulation of the spear reflectometer at LANSCE
Smith, G.S.
1995-12-31
The Monte Carlo instrument simulation code, MCLIB, contains elements to represent several components found in neutron spectrometers, including slits, choppers, detectors, sources and various samples. Using these elements to represent the components of a neutron scattering instrument, one can simulate, for example, an inelastic spectrometer, a small angle scattering machine, or a reflectometer. In order to benchmark the code, we chose to compare simulated data from the MCLIB code with an actual experiment performed on the SPEAR reflectometer at LANSCE. This was done by first fitting an actual SPEAR data set to obtain the model scattering-length-density profile, β(z), for the sample and the substrate. These parameters were then used as input values for the sample scattering function. A simplified model of SPEAR was chosen which contained all of the essential components of the instrument. A code containing the MCLIB subroutines was then written to simulate this simplified instrument. The resulting data were then fit and compared to the actual data set in terms of statistics, resolution and accuracy.
Monte Carlo simulation in statistical physics: an introduction
NASA Astrophysics Data System (ADS)
Binder, K.; Heermann, D. W.
Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics and chemistry, and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. This fourth edition has been updated and a new chapter on Monte Carlo simulation of quantum-mechanical problems has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was the winner of the Berni J. Alder CECAM Award for Computational Physics 2001.
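The flavor of the simulations the book teaches can be conveyed by the classic single-spin-flip Metropolis algorithm for the 2D Ising model. A minimal sketch (the lattice size, temperatures, and sweep counts below are illustrative; the book's own example programs differ):

```python
import math
import random

def metropolis_ising(L, beta, sweeps, seed=7):
    """Single-spin-flip Metropolis simulation of the 2D Ising model on an
    L x L periodic lattice (J = 1, ordered start). Returns the mean
    |magnetization| per spin over the sampled sweeps."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]
    mags = []
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nb          # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                s[i][j] = -s[i][j]         # Metropolis acceptance
        mags.append(abs(sum(map(sum, s))) / (L * L))
    return sum(mags) / len(mags)
```

Deep in the ordered phase (large beta) the mean |magnetization| stays near 1, while at high temperature it collapses toward 0, reproducing the qualitative phase behavior discussed in the book's introductory chapters.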
Lindoy, Lachlan P.; Kolmann, Stephen J.; D’Arcy, Jordan H.; Jordan, Meredith J. T.; Crittenden, Deborah L.
2015-11-21
Finite temperature quantum and anharmonic effects are studied in H₂-Li⁺-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H₂. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H₂ molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔU_ads, and enthalpy, ΔH_ads, for H₂ adsorption onto Li⁺-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling: coupling between the intermolecular degrees of freedom becomes less important as temperature increases, whereas anharmonicity becomes more important. The most anharmonic motions in H₂-Li⁺-benzene are the "helicopter" and "ferris wheel" H₂ rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to the standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔU_ads and ΔH_ads are −13.3 ± 0.1 and −14.5 ± 0.1 kJ mol⁻¹, respectively.
NASA Astrophysics Data System (ADS)
Liao, Y.; Su, C. C.; Marschall, R.; Wu, J. S.; Rubin, M.; Lai, I. L.; Ip, W. H.; Keller, H. U.; Knollenberg, J.; Kührt, E.; Skorov, Y. V.; Thomas, N.
2016-03-01
Direct Simulation Monte Carlo (DSMC) is a powerful numerical method for studying rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow. However, the investigation of the parameter space in simulations can be time consuming since 3D DSMC is computationally highly intensive. For the target of ESA's Rosetta mission, comet 67P/Churyumov-Gerasimenko, we have identified to what extent modification of several parameters influences the 3D flow and gas temperature fields, and have attempted to establish the reliability of inferences about the initial conditions from in situ and remote sensing measurements. A large number of DSMC runs have been completed with varying input parameters. In this work, we present the simulation results and assess the sensitivity of the solutions to certain inputs. It is found that, among the cases of water outgassing, the surface production rate distribution is the most influential variable for the flow field.
Monte Carlo simulation of radiation streaming from a radioactive material shipping cask
Liu, Y.Y.; Schwarz, R.A.; Tang, J.S.
1996-04-01
Simulated detection of gamma radiation streaming from a radioactive material shipping cask has been performed with the Monte Carlo codes MCNP4A and MORSE-SGC/S. Despite inherent difficulties in simulating deep penetration of radiation and streaming, the simulations have yielded results that agree within one order of magnitude with the radiation survey data, with reasonable statistics. These simulations have also provided insight into modeling radiation detection, notably on the location and orientation of the radiation detector with respect to photon streaming paths, and on techniques used to reduce variance in the Monte Carlo calculations. 13 refs., 4 figs., 2 tabs.
Monte Carlo Computer Simulation of a Rainbow.
ERIC Educational Resources Information Center
Olson, Donald; And Others
1990-01-01
Discusses making a computer-simulated rainbow using principles of physics, such as reflection and refraction. Provides BASIC program for the simulation. Appends a program illustrating the effects of dispersion of the colors. (YP)
NASA Astrophysics Data System (ADS)
Liu, Zhirong; Chan, Hue Sun
2008-04-01
We develop two classes of Monte Carlo moves for efficient sampling of wormlike DNA chains that can have significant degrees of supercoiling, a conformational feature that is key to many aspects of biological function including replication, transcription, and recombination. One class of moves entails reversing the coordinates of a segment of the chain along one, two, or three axes of an appropriately chosen local frame of reference. These transformations may be viewed as a generalization, to the continuum, of the Madras-Orlitsky-Shepp algorithm for cubic lattices. Another class of moves, termed T±2, allows for interconversions between chains with different lengths by adding or subtracting two beads (monomer units) to or from the chain. Length-changing moves are generally useful for conformational sampling with a given site juxtaposition, as has been shown in previous lattice studies. Here, the continuum T±2 moves are designed to enhance their acceptance rate in supercoiled conformations. We apply these moves to a wormlike model in which excluded volume is accounted for by a bond-bond repulsion term. The computed autocorrelation functions for the relaxation of bond length, bond angle, writhe, and branch number indicate that the new moves lead to significantly more efficient sampling than conventional bead displacements and crankshaft rotations. A close correspondence is found in the equilibrium ensemble between the map of writhe computed for pairs of chain segments and the map of site juxtapositions or self-contacts. To evaluate the more coarse-grained freely jointed chain (random-flight) and cubic lattice models that are commonly used in DNA investigations, twisting (torsional) potentials are introduced into these models. Conformational properties for a given superhelical density σ may then be sampled by computing the writhe and using White's formula to relate the degree of twisting to writhe and σ. Extensive comparisons of contact patterns and knot probabilities
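The writhe computations underpinning this analysis discretize the Gauss double integral over pairs of chain segments; White's formula Lk = Tw + Wr then yields the twist for a given superhelical density. A minimal O(N²) sketch for a closed polygonal chain (the function name and discretization are illustrative, not the paper's implementation):

```python
import numpy as np

def writhe(points):
    """Writhe of a closed polygonal curve (vertices as an (N, 3) array),
    via the discretized Gauss double integral
    Wr = (1/4pi) * sum over segment pairs of (t_i x t_j . r_ij) / |r_ij|^3."""
    p = np.asarray(points, dtype=float)
    n = len(p)
    seg = np.roll(p, -1, axis=0) - p        # edge vectors (include lengths)
    mid = p + 0.5 * seg                     # edge midpoints
    wr = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = mid[i] - mid[j]
            d = np.linalg.norm(r)
            if d < 1e-12:
                continue
            wr += np.dot(np.cross(seg[i], seg[j]), r) / d**3
    return wr / (2.0 * np.pi)               # pairs counted once: 2 / (4 pi)
```

For a planar curve the integrand vanishes identically, so a circle gives Wr = 0; for a supercoiled conformation the same routine yields the nonzero writhe from which Tw = Lk − Wr follows.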
Treatment planning for a small animal using Monte Carlo simulation
Chow, James C. L.; Leung, Michael K. K.
2007-12-15
The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. Validation of the treatment planning was performed by comparing the dose distributions for simple photon beam geometries calculated through the Pinnacle3 treatment planning system and measurements. A treatment plan for a mouse based on a CT image set, using a 360-deg photon arc, is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals with consideration of inhomogeneities, using small photon beam field sizes in the diameter range of 0.5-5 cm, with the conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable for carrying out treatment planning dose calculations for small animal anatomy, with voxel sizes about one order of magnitude smaller than those used for humans.
Learning About Ares I from Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Hanson, John M.; Hall, Charlie E.
2008-01-01
This paper addresses Monte Carlo simulation analyses that are being conducted to understand the behavior of the Ares I launch vehicle, and to assist with its design. After describing the simulation and modeling of Ares I, the paper addresses the process used to determine what simulations are necessary, and the parameters that are varied in order to understand how the Ares I vehicle will behave in flight. Outputs of these simulations furnish a significant group of design customers with data needed for the development of Ares I and of the Orion spacecraft that will ride atop Ares I. After listing the customers, examples of many of the outputs are described. Products discussed in this paper include those that support structural loads analysis, aerothermal analysis, flight control design, failure/abort analysis, determination of flight performance reserve, examination of orbit insertion accuracy, determination of the Upper Stage impact footprint, analysis of stage separation, analysis of launch probability, analysis of first stage recovery, thrust vector control and reaction control system design, liftoff drift analysis, communications analysis, umbilical release, acoustics, and design of jettison systems.
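The dispersion-analysis workflow described above, drawing randomized parameter sets, running the simulation, and reducing the outputs to quantities such as flight performance reserve, can be sketched with a toy surrogate model (all distributions, numbers, and names here are illustrative, not Ares I data):

```python
import random
import statistics

def run_case(rng):
    """Toy surrogate for one dispersed trajectory run: vary thrust and
    drag multipliers and return a notional propellant-used figure."""
    thrust = rng.gauss(1.0, 0.02)   # +/-2% (1-sigma) thrust dispersion
    drag = rng.gauss(1.0, 0.05)     # +/-5% (1-sigma) drag dispersion
    return 100.0 * drag / thrust    # nominal 100 units of propellant

def monte_carlo(n=2000, seed=3):
    """Run n dispersed cases; report the mean usage and a ~3-sigma
    reserve (99.7th-percentile usage above nominal)."""
    rng = random.Random(seed)
    usage = sorted(run_case(rng) for _ in range(n))
    nominal = 100.0
    reserve = usage[int(0.997 * n)] - nominal
    return statistics.mean(usage), reserve
```

A real analysis replaces the surrogate with the full six-degree-of-freedom simulation and extracts many outputs per run (loads, insertion accuracy, impact footprints), but the draw-run-reduce structure is the same.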
Quantum Monte Carlo simulations with tensor-network states
NASA Astrophysics Data System (ADS)
Song, Jeong Pil; Clay, R. T.
2011-03-01
Matrix-product states, generated by the density-matrix renormalization group method, are among the most powerful methods for simulation of quasi-one-dimensional quantum systems. Direct application of a matrix-product state representation fails for two-dimensional systems, although a number of tensor-network states have been proposed to generalize the concept to two dimensions. We introduce a useful approximate method replacing a 4-index tensor by two matrices in order to contract tensors in two dimensions. We use this formalism as a basis for variational quantum Monte Carlo, optimizing the matrix elements stochastically. We present results on a two-dimensional spinless fermion model including nearest-neighbor Coulomb interactions, and determine the critical Coulomb interaction for the charge density wave state by finite-size scaling. This work was supported by the Department of Energy grant DE-FG02-06ER46315.
Monte Carlo simulations of random non-commutative geometries
NASA Astrophysics Data System (ADS)
Barrett, John W.; Glaser, Lisa
2016-06-01
Random non-commutative geometries are introduced by integrating over the space of Dirac operators that form a spectral triple with a fixed algebra and Hilbert space. The cases with the simplest types of Clifford algebra are investigated using Monte Carlo simulations to compute the integrals. Various qualitatively different types of behaviour of these random Dirac operators are exhibited. Some features are explained in terms of the theory of random matrices but other phenomena remain mysterious. Some of the models with a quartic action of symmetry-breaking type display a phase transition. Close to the phase transition the spectrum of a typical Dirac operator shows manifold-like behaviour for the eigenvalues below a cut-off scale.
Monte Carlo Simulations of Background Spectra in Integral Imager Detectors
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.; Dietz, K. L.; Ramsey, B. D.; Weisskopf, M. C.
1998-01-01
Predictions of the expected gamma-ray backgrounds in the ISGRI (CdTe) and PiCsIT (Csl) detectors on INTEGRAL due to cosmic-ray interactions and the diffuse gamma-ray background have been made using a coupled set of Monte Carlo radiation transport codes (HETC, FLUKA, EGS4, and MORSE) and a detailed, 3-D mass model of the spacecraft and detector assemblies. The simulations include both the prompt background component from induced hadronic and electromagnetic cascades and the delayed component due to emissions from induced radioactivity. Background spectra have been obtained with and without the use of active (BGO) shielding and charged particle rejection to evaluate the effectiveness of anticoincidence counting on background rejection.
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphics processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
Improving computational efficiency of Monte Carlo simulations with variance reduction
Turner, A.
2013-07-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
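The splitting-and-roulette mechanics behind such weight windows can be sketched in a few lines. The following Python fragment is an illustrative simplification only: the survival weight and the `max_split` cap (standing in for the dynamic window adjustment described above) are assumptions of this sketch, not MCNP's actual implementation.

```python
import random

def apply_weight_window(weight, w_low, w_high, max_split=10):
    """One weight-window check on a particle entering a region.

    Returns a list of (n_copies, new_weight) pairs for the survivors.
    """
    if weight > w_high:
        # Split: replace the particle by n lighter copies; capping n is
        # what prevents the pathological 'long histories'.
        n = min(int(weight / w_high) + 1, max_split)
        return [(n, weight / n)]
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_survive.
        w_survive = 0.5 * (w_low + w_high)
        if random.random() < weight / w_survive:
            return [(1, w_survive)]
        return []                     # particle terminated
    return [(1, weight)]              # inside the window: leave unchanged
```

Note that splitting conserves total weight (n copies of weight/n), while roulette conserves it only on average, so neither biases the tally.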
Monte Carlo simulation of photon paths in clinical laser therapy
NASA Astrophysics Data System (ADS)
Ionita, Iulian; Voitcu, Gabriel
2011-07-01
The multiple scattering of light can increase the efficiency of laser therapy of inflammatory diseases by enlarging the treated area. Light absorption is essential for treatment, while scattering dominates. Multiple scattering effects must be introduced using the Monte Carlo method to model light transport in tissue and, finally, to calculate the optical parameters. Diffuse reflectance measurements were made on highly concentrated live leukocyte suspensions under conditions similar to in-vivo measurements. The results were compared with the values determined by MC calculations, and the latter were adjusted to match the specified values of diffuse reflectance. The principal idea of MC simulations applied to absorption and scattering phenomena is to follow the optical path of a photon through the turbid medium. The concentrated live cell suspension is a compromise between a homogeneous layer, as in the MC model, and light-live cell interaction, as in in-vivo experiments. In this way MC simulation allows us to compute the absorption coefficient. The values of the optical parameters, derived from simulation by best fitting of the measured reflectance, were used to determine the effective cross section. Thus we can compute the absorbed radiation dose at the cellular level.
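The core of such a photon-transport Monte Carlo loop — sample a free path from the total attenuation coefficient, deposit the absorbed fraction of the photon weight, re-scatter, repeat — can be sketched as follows. The semi-infinite geometry and isotropic scattering used here are simplifying assumptions of this sketch, not the layered tissue model of the study.

```python
import math
import random

def simulate_absorbed_fraction(mu_a, mu_s, n_photons=10000, seed=1):
    """Estimate the fraction of incident light absorbed in a semi-infinite
    homogeneous turbid medium by following photon random walks.
    mu_a, mu_s: absorption and scattering coefficients (1/cm).
    """
    rng = random.Random(seed)
    mu_t = mu_a + mu_s                    # total attenuation coefficient
    albedo = mu_s / mu_t                  # survival probability per event
    absorbed = 0.0
    for _ in range(n_photons):
        z, w, uz = 0.0, 1.0, 1.0          # depth, photon weight, direction cosine
        while w > 1e-4:
            step = -math.log(1.0 - rng.random()) / mu_t  # sample free path
            z += uz * step
            if z < 0.0:                   # photon escaped back through surface
                break
            absorbed += w * (1.0 - albedo)  # deposit the absorbed fraction
            w *= albedo                     # reduce weight by the albedo
            uz = 2.0 * rng.random() - 1.0   # isotropic re-scatter
    return absorbed / n_photons
```

For a purely absorbing medium (mu_s = 0) every photon deposits its full weight at the first interaction, which is a handy sanity check on the loop.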
Dynamic Monte Carlo simulation for highly efficient polymer blend photovoltaics.
Meng, Lingyi; Shang, Yuan; Li, Qikai; Li, Yongfang; Zhan, Xiaowei; Shuai, Zhigang; Kimber, Robin G E; Walker, Alison B
2010-01-14
We developed a model system for blend polymers with electron-donating and -accepting compounds. It is found that the optimal energy conversion efficiency can be achieved when the feature size is around 10 nm. The first reaction method is used to describe the key processes (e.g., the generation, the diffusion, and the dissociation at the interface for the excitons; the drift, the injection from the electrodes, and the collection by the electrodes for the charge carriers) in the organic solar cell by the dynamic Monte Carlo simulation. Our simulations indicate that a 5% power conversion efficiency (PCE) is reachable with an optimum combination of charge mobility and morphology. The parameters used in this model study correspond to a blend of novel polymers (bis(thienylenevinylene)-substituted polythiophene and poly(perylene diimide-alt-dithienothiophene)), which features a broad absorption and a high mobility. The I-V curves are well-reproduced by our simulations, and the PCE for the polymer blend can reach up to 2.2%, which is higher than the experimental value (>1%), one of the best available experimental results up to now for all-polymer solar cells. In addition, the dependence of the PCE on the charge mobility and the material structure is also investigated. PMID:20000370
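The first reaction method mentioned above can be illustrated with a minimal stand-alone sketch: every competing process draws an exponential waiting time from its rate constant, and the earliest one fires. The event names and rates in the example are hypothetical, not the paper's parameter set.

```python
import math
import random

def first_reaction_step(rates, rng=random):
    """One step of the first reaction method: draw an exponential waiting
    time for every enabled event and fire the earliest one.
    rates: {event_name: rate constant}. Returns (event, elapsed_time).
    """
    best_event, best_time = None, math.inf
    for event, rate in rates.items():
        if rate <= 0.0:
            continue                               # disabled event
        t = -math.log(1.0 - rng.random()) / rate   # exponential sample
        if t < best_time:
            best_event, best_time = event, t
    return best_event, best_time
```

Over many steps the event with the largest rate wins in proportion to rate/(sum of rates), which is what makes the method statistically equivalent to a direct Gillespie-style selection.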
Cai, Zhongli; Chattopadhyay, Niladri; Kwon, Yongkyu Luke; Pignol, Jean-Philippe; Lechtman, Eli; Reilly, Raymond M.
2013-11-15
Purpose: The authors’ aims were to model how various factors influence radiation dose enhancement by gold nanoparticles (AuNPs) and to propose a new modeling approach to the dose enhancement factor (DEF).Methods: The authors used Monte Carlo N-particle (MCNP 5) computer code to simulate photon and electron transport in cells. The authors modeled human breast cancer cells as a single cell, a monolayer, or a cluster of cells. Different numbers of 5, 30, or 50 nm AuNPs were placed in the extracellular space, on the cell surface, in the cytoplasm, or in the nucleus. Photon sources examined in the simulation included nine monoenergetic x-rays (10–100 keV), an x-ray beam (100 kVp), and {sup 125}I and {sup 103}Pd brachytherapy seeds. Both nuclear and cellular dose enhancement factors (NDEFs, CDEFs) were calculated. The ability of these metrics to predict the experimental DEF based on the clonogenic survival of MDA-MB-361 human breast cancer cells exposed to AuNPs and x-rays were compared.Results: NDEFs show a strong dependence on photon energies with peaks at 15, 30/40, and 90 keV. Cell model and subcellular location of AuNPs influence the peak position and value of NDEF. NDEFs decrease in the order of AuNPs in the nucleus, cytoplasm, cell membrane, and extracellular space. NDEFs also decrease in the order of AuNPs in a cell cluster, monolayer, and single cell if the photon energy is larger than 20 keV. NDEFs depend linearly on the number of AuNPs per cell. Similar trends were observed for CDEFs. NDEFs using the monolayer cell model were more predictive than either single cell or cluster cell models of the DEFs experimentally derived from the clonogenic survival of cells cultured as a monolayer. The amount of AuNPs required to double the prescribed dose in terms of mg Au/g tissue decreases as the size of AuNPs increases, especially when AuNPs are in the nucleus and the cytoplasm. For 40 keV x-rays and a cluster of cells, to double the prescribed x-ray dose (NDEF = 2
Relation between gamma-ray family and EAS core: Monte-Carlo simulation of EAS core
NASA Technical Reports Server (NTRS)
Yanagita, T.
1985-01-01
Preliminary results of a Monte-Carlo simulation of Extensive Air Shower (EAS) cores (Ne = 100,000) are reported. For the first collision at the top of the atmosphere, a high-multiplicity (high rapidity density) and large-Pt (1.5 GeV average) model is assumed. Most of the simulated cores show a complicated structure.
The factorization method for Monte Carlo simulations of systems with a complex action
NASA Astrophysics Data System (ADS)
Ambjørn, J.; Anagnostopoulos, K. N.; Nishimura, J.; Verbaarschot, J. J. M.
2004-03-01
We propose a method for Monte Carlo simulations of systems with a complex action. The method has the advantages of being in principle applicable to any such system and provides a solution to the overlap problem. In some cases, like in the IKKT matrix model, a finite size scaling extrapolation can provide results for systems whose size would make it prohibitive to simulate directly.
Monte Carlo Simulation of Callisto's Exosphere
NASA Astrophysics Data System (ADS)
Vorburger, Audrey; Wurz, Peter; Galli, André; Mousis, Olivier; Barabash, Stas; Lammer, Helmut
2014-05-01
to the surface the sublimated particles dominate the day-side exosphere; however, their density profiles (with the exception of H and H2) decrease much more rapidly with altitude than those of the sputtered particles; thus, the latter particles start to dominate at altitudes above ~1000 km. Since the JUICE flybys are as low as 200 km above Callisto's surface, NIM is expected to register both the sublimated as well as the sputtered particle populations. Our simulations show that NIM's sensitivity is high enough to allow the detection of particles sputtered from the icy as well as the mineral surfaces, and to distinguish between the different composition models.
Toghraee, Reza; Lee, Kyu-Il; Papke, David; Chiu, See-Wing; Jakobsson, Eric; Ravaioli, Umberto
2009-01-01
Ion channels, as nature's solution to regulating biological environments, are particularly interesting to device engineers seeking to understand how natural molecular systems realize device-like functions, such as stochastic sensing of organic analytes. What's more, attaching molecular adaptors in desired orientations inside genetically engineered ion channels enhances the system's functionality as a biosensor. In general, a hierarchy of simulation methodologies is needed to study different aspects of a biological system like ion channels. Biology Monte Carlo (BioMOCA), a three-dimensional coarse-grained particle ion channel simulator, offers a powerful and general approach to studying ion channel permeation. BioMOCA is based on the Boltzmann Transport Monte Carlo (BTMC) and Particle-Particle-Particle-Mesh (P3M) methodologies developed at the University of Illinois at Urbana-Champaign. In this paper, we have employed BioMOCA to study two engineered mutations of α-HL, namely (M113F)6(M113C-D8RL2)1-β-CD and (M113N)6(T117C-D8RL3)1-β-CD. The channel conductance calculated by BioMOCA is slightly higher than experimental values. Permanent charge distributions and the geometrical shape of the channels give rise to selectivity towards anions and also an asymmetry in I-V curves, promoting rectification largely for cations. PMID:20938493
Gentile, N A
2000-10-01
We present a method for accelerating time-dependent Monte Carlo radiative transfer calculations by using a discretization of the diffusion equation to calculate probabilities that are used to advance particles in regions with small mean free path. The method is demonstrated on problems with 1- and 2-dimensional orthogonal grids. It results in decreases in run time of more than an order of magnitude on these problems, while producing answers with accuracy comparable to pure IMC simulations. We call the method Implicit Monte Carlo Diffusion, which we abbreviate IMD.
Catfish: A Monte Carlo simulator for black holes at the LHC
NASA Astrophysics Data System (ADS)
Cavaglià, M.; Godang, R.; Cremaldi, L.; Summers, D.
2007-09-01
We present a new Fortran Monte Carlo generator to simulate black hole events at CERN's Large Hadron Collider. The generator interfaces to the PYTHIA Monte Carlo fragmentation code. The physics of the BH generator includes, but is not limited to, inelasticity effects, exact field emissivities, corrections to semiclassical black hole evaporation and gravitational energy loss at formation. These features are essential to realistically reconstruct the detector response and test different models of black hole formation and decay at the LHC.
Monte Carlo Simulations of Cosmic Rays Hadronic Interactions
Aguayo Navarrete, Estanislao; Orrell, John L.; Kouzes, Richard T.
2011-04-01
This document describes the construction and results of the MaCoR software tool, developed to model the hadronic interactions of cosmic rays with different geometries of materials. The ubiquity of cosmic radiation in the environment results in the activation of stable isotopes, a process referred to as cosmogenic activation. The objective is to use this application in conjunction with a model of the MAJORANA DEMONSTRATOR components, from extraction to deployment, to evaluate cosmogenic activation of such components before and after deployment. The cosmic ray showers include several types of particles with a wide range of energies (MeV to GeV). It is infeasible to compute an exact result with a deterministic algorithm for this problem; Monte Carlo simulations are a more suitable approach to model cosmic ray hadronic interactions. In order to validate the results generated by the application, a test comparing experimental muon flux measurements with those predicted by the application is presented. The experimental and simulated results have a deviation of 3%.
Monte Carlo simulations of ionization potential depression in dense plasmas
NASA Astrophysics Data System (ADS)
Stransky, M.
2016-01-01
A particle-particle grand canonical Monte Carlo model with Coulomb pair potential interaction was used to simulate the modification of ionization potentials by electrostatic microfields. The Barnes-Hut tree algorithm [J. Barnes and P. Hut, Nature 324, 446 (1986)] was used to speed up calculations of the electric potential. Atomic levels were approximated to be independent of the microfields, as was assumed in the original paper by Ecker and Kröll [Phys. Fluids 6, 62 (1963)]; however, the available levels were limited by the corresponding mean inter-particle distance. The code was tested on hydrogen and dense aluminum plasmas. The amount of depression was up to 50% higher in the Debye-Hückel regime for hydrogen plasmas; in the high-density limit, reasonable agreement was found with the Ecker-Kröll model for hydrogen plasmas and with the Stewart-Pyatt model [J. Stewart and K. Pyatt, Jr., Astrophys. J. 144, 1203 (1966)] for aluminum plasmas. Our 3D code is an improvement over the spherically symmetric simplifications of the Ecker-Kröll and Stewart-Pyatt models and is also not limited to high atomic numbers, as is the underlying Thomas-Fermi model used in the Stewart-Pyatt model.
NASA Technical Reports Server (NTRS)
Queen, Eric M.; Omara, Thomas M.
1990-01-01
A realization of a stochastic atmosphere model for use in simulations is presented. The model provides pressure, density, temperature, and wind velocity as a function of latitude, longitude, and altitude, and is implemented in a three degree of freedom simulation package. This implementation is used in the Monte Carlo simulation of an aeroassisted orbital transfer maneuver and results are compared to those of a more traditional approach.
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry
Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; Mueller, Jonathon W.; Cody, Dianna D.; DeMarco, John J.
2015-02-15
Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
An empirical formula based on Monte Carlo simulation for diffuse reflectance from turbid media
NASA Astrophysics Data System (ADS)
Gnanatheepam, Einstein; Aruna, Prakasa Rao; Ganesan, Singaravelu
2016-03-01
Diffuse reflectance spectroscopy has been widely used in diagnostic oncology and in the characterization of laser-irradiated tissue. However, an accurate and simple analytical equation for estimating diffuse reflectance from turbid media still does not exist. In this work, a diffuse reflectance lookup table for a range of tissue optical properties was generated using Monte Carlo simulation. Based on the generated Monte Carlo lookup table, an empirical formula for diffuse reflectance was developed using a surface fitting method. The variance between the Monte Carlo lookup table surface and the surface obtained from the proposed empirical formula is less than 1%. The proposed empirical formula may be used for modeling of diffuse reflectance from tissue.
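As a toy version of this lookup-table-plus-fit workflow, one can tabulate reflectance against optical properties and fit a simple empirical form by least squares in log space. The one-parameter power law in the single-scattering albedo used below is a hypothetical stand-in, not the paper's actual surface formula.

```python
import math

def fit_empirical_reflectance(table):
    """Fit the hypothetical form R = a * albedo**b to a Monte Carlo
    lookup table given as [(mu_a, mu_s, R), ...], by linear least
    squares on log R versus log albedo.
    """
    xs = [math.log(mu_s / (mu_a + mu_s)) for mu_a, mu_s, _ in table]
    ys = [math.log(r) for _, _, r in table]
    n = len(table)
    mx, my = sum(xs) / n, sum(ys) / n
    # Closed-form simple linear regression: slope b and intercept log a.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b
```

If the tabulated data are generated exactly from such a power law, the fit recovers the parameters to machine precision, which makes the routine easy to validate before applying it to real lookup tables.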
NASA Astrophysics Data System (ADS)
Wang, Ping; Yuan, Hongwu; Mei, Haiping; Zhang, Qianghua
2013-08-01
We study the transmission-time characteristics of laser pulses in a discrete random medium using the Monte Carlo method. First, the optical parameters of the medium are obtained with the OPAC software. A Monte Carlo model is then created, and the transport of a large number of photons is tracked; from these statistics the average photon arrival time and the average pulse broadening are obtained. The results are compared with those calculated from the two-frequency mutual coherence function, and the two are in very good agreement. Finally, the impulse response function of the medium, given by a polynomial fitting method, can be used to correct inter-symbol interference in optical communications through discrete random media and reduce the system error rate.
Monte Carlo code for high spatial resolution ocean color simulations.
D'Alimonte, Davide; Zibordi, Giuseppe; Kajiyama, Tamito; Cunha, José C
2010-09-10
A Monte Carlo code for ocean color simulations has been developed to model in-water radiometric fields of downward and upward irradiance (E(d) and E(u)), and upwelling radiance (L(u)) in a two-dimensional domain with a high spatial resolution. The efficiency of the code has been optimized by applying state-of-the-art computing solutions, while the accuracy of simulation results has been quantified through benchmarks against the widely used Hydrolight code for various values of seawater inherent optical properties and different illumination conditions. Considering a seawater single scattering albedo of 0.9, as well as surface waves of 5 m width and 0.5 m height, the study has shown that the number of photons required to quantify uncertainties induced by wave focusing effects on E(d), E(u), and L(u) data products is of the order of 10^6, 10^9, and 10^10, respectively. On this basis, the effects of sea-surface geometries on radiometric quantities have been investigated for different surface gravity waves. Data products from simulated radiometric profiles have finally been analyzed as a function of the deployment speed and sampling frequency of current free-fall systems in view of providing recommendations to improve measurement protocols. PMID:20830183
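Photon budgets like those quoted above follow from the 1/√N convergence of Monte Carlo estimators; a one-line helper makes the scaling explicit. The relative-noise input σ/μ (whatever single-photon variance the quantity of interest implies) is an assumption the caller must supply.

```python
import math

def photons_for_relative_error(sigma_over_mu, target_rel_err):
    """Photons needed so the standard error of a Monte Carlo estimate
    falls below target_rel_err, from rel_err = (sigma/mu) / sqrt(N),
    i.e. N = (sigma_over_mu / target_rel_err) ** 2.
    """
    return math.ceil((sigma_over_mu / target_rel_err) ** 2)
```

The quadratic cost of extra precision is the key design constraint: halving the target error quadruples the photon count, which is why the L(u) product above needs four orders of magnitude more photons than E(d).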
A Monte Carlo simulation study of branched polymers.
Yethiraj, Arun
2006-11-28
Monte Carlo simulations are presented for the static properties of highly branched polymer molecules. The molecules consist of a semiflexible backbone of hard-sphere monomers with semiflexible side chains, also composed of hard-sphere monomers, attached to either every backbone bead or every other backbone bead. The conformational properties and structure factor of this model are investigated as a function of the stiffness of the backbone and side chains. The average conformations of the side chains are similar to self-avoiding random walks. The simulations show that there is a stiffening of the backbone as the degree of crowding is increased, for example, if the branch spacing is decreased or the side chain length is increased. The persistence length of the backbone is relatively insensitive to the stiffness of the side chains over the range investigated. The simulations reproduce most of the qualitative features of the structure factor observed in experiment, although the magnitude of the stiffening of the backbone is smaller than in experiment. PMID:17144734
Komorowska, K.; Pawlik, G.; Mitus, A. C.; Miniewicz, A.
2001-08-15
In this article we compare results of experiments on light self-diffraction in nematic liquid crystal panels with corresponding results of the Monte-Carlo simulations of a two-dimensional nematic liquid crystal model in the presence of a spatially modulated electric field. In the simulations molecular interactions were described by the Lebwohl-Lasher Hamiltonian. The results obtained on the diffraction efficiency and spatial and temporal behavior of refractive index changes in nematic liquid crystal are satisfactorily reproduced by Monte-Carlo simulations. We discuss the complementarity of both methods in studying and designing systems for optical information processing using liquid crystals. © 2001 American Institute of Physics.
Self-Consistent Monte Carlo Simulations of Positive Column Discharges
NASA Astrophysics Data System (ADS)
Lawler, J. E.; Kortshagen, U.
1998-10-01
In recent years it has become widely recognized that electron distribution functions in atomic gas positive column discharges are best described as nonlocal over most of the range of R×N (column radius × gas density) where positive columns are stable. The use of an efficient Monte Carlo code with a radial potential expansion in powers of r^2 and with judiciously chosen constraints on the potential near the axis and wall now provides fully self-consistent kinetic solutions using only small computers. A set of solutions at smaller R×N and lower currents are presented which exhibit the classic negative dynamic resistance of the positive column at low currents. The negative dynamic resistance is due to a non-negligible Debye length and is sometimes described as a transition from free to ambipolar diffusion. This phenomenon is sensitive to radial variations of key parameters in the positive column and thus kinetic theory simulations are likely to provide a more realistic description than classic isothermal fluid models of the positive column. Comparisons of kinetic theory simulations to various fluid models of the positive column continue to provide new insight on this `cornerstone' problem of Gaseous Electronics.
Uncertainty analysis of penicillin V production using Monte Carlo simulation.
Biwer, Arno; Griffith, Steve; Cooney, Charles
2005-04-20
Uncertainty and variability affect economic and environmental performance in the production of biotechnology and pharmaceutical products. However, commercial process simulation software typically provides analysis that assumes deterministic rather than stochastic process parameters and thus is not capable of dealing with the complexities created by variance that arise in the decision-making process. Using the production of penicillin V as a case study, this article shows how uncertainty can be quantified and evaluated. The first step is construction of a process model, as well as analysis of its cost structure and environmental impact. The second step is identification of uncertain variables and determination of their probability distributions based on available process and literature data. Finally, Monte Carlo simulations are run to see how these uncertainties propagate through the model and affect key economic and environmental outcomes. Thus, the overall variation of these objective functions are quantified, the technical, supply chain, and market parameters that contribute most to the existing variance are identified and the differences between economic and ecological evaluation are analyzed. In our case study analysis, we show that final penicillin and biomass concentrations in the fermenter have the highest contribution to variance for both unit production cost and environmental impact. The penicillin selling price dominates return on investment variance as well as the variance for other revenue-dependent parameters. PMID:15742389
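The workflow described above — assign probability distributions to uncertain inputs, sample them, push each sample through the process model, and summarize the spread of the objective — can be sketched generically. The cost model and distributions below are invented placeholders for illustration, not the paper's penicillin V model.

```python
import random
import statistics

def monte_carlo_uncertainty(model, param_draws, n_runs=5000, seed=42):
    """Sample uncertain inputs, run the model on each sample, and
    return (mean, stdev) of the objective function.

    model: callable taking a {name: value} dict of parameter values.
    param_draws: {name: callable(rng) -> one sampled value}.
    """
    rng = random.Random(seed)
    outcomes = [model({name: draw(rng) for name, draw in param_draws.items()})
                for _ in range(n_runs)]
    return statistics.mean(outcomes), statistics.stdev(outcomes)

# Invented placeholder model: unit cost falls with final titre and rises
# with raw-material price (hypothetical numbers, not the paper's data).
param_draws = {
    "titre_g_per_L": lambda rng: rng.gauss(40.0, 5.0),
    "price_per_kg":  lambda rng: rng.uniform(8.0, 12.0),
}

def unit_cost(p):
    return 1000.0 / max(p["titre_g_per_L"], 1.0) + 2.0 * p["price_per_kg"]

mean_cost, sd_cost = monte_carlo_uncertainty(unit_cost, param_draws)
```

Ranking the inputs by their contribution to the output variance (e.g. by freezing one input at its mean and re-running) is how conclusions like "final titre dominates the cost variance" are reached.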
Monte Carlo Simulation of Exciton Dynamics in Supramolecular Semiconductor Architectures
NASA Astrophysics Data System (ADS)
Silva, Carlos; Beljonne, David; Herz, Laura; Hoeben, Freek
2005-03-01
Supramolecular chemistry is useful to construct molecular architectures with functional semiconductor properties. To explore the consequences of this approach in molecular electronics, we have carried out ultrafast measurements of exciton dynamics in supramolecular assemblies of an oligo-p-phenylenevinylene derivative functionalized to form chiral stacks in dodecane solution in a thermotropically reversible manner. We apply a model of incoherent exciton hopping within a Monte Carlo scheme to extract microscopic physical quantities. The simulation first builds the chiral stacks with a Gaussian disorder of site energies and then simulates exciton hopping on the structure and exciton-exciton annihilation to reproduce ensemble-averaged experimental data. The exciton transfer rates are calculated beyond the point-dipole approximation using the so-called line-dipole approach in combination with the Förster expression. The model of incoherent hopping successfully reproduces the data and we extract a high diffusion coefficient illustrating the polymeric properties of such supramolecular assemblies. The scope and limitations of the line-dipole approximation as well as the resonance energy transfer concept in this system are discussed.
A novel parallel-rotation algorithm for atomistic Monte Carlo simulation of dense polymer systems
NASA Astrophysics Data System (ADS)
Santos, S.; Suter, U. W.; Müller, M.; Nievergelt, J.
2001-06-01
We develop and test a new elementary Monte Carlo move for use in the off-lattice simulation of polymer systems. This novel Parallel-Rotation algorithm (ParRot) permits very efficient moves of torsion angles that are deep inside long chains in melts. The parallel-rotation move is extremely simple and is also demonstrated to be computationally efficient and appropriate for Monte Carlo simulation. The ParRot move does not affect the orientation of those parts of the chain outside the moving unit. The move consists of a concerted rotation around four adjacent skeletal bonds. No assumption is made concerning the backbone geometry other than that bond lengths and bond angles are held constant during the elementary move. Properly weighted sampling techniques are needed for ensuring detailed balance because the new move involves a correlated change in four degrees of freedom along the chain backbone. The ParRot move is supplemented with the classical Metropolis Monte Carlo, the Continuum-Configurational-Bias, and Reptation techniques in an isothermal-isobaric Monte Carlo simulation of melts of short and long chains. Comparisons are made with the capabilities of other Monte Carlo techniques to move the torsion angles in the middle of the chains. We demonstrate that ParRot constitutes a highly promising Monte Carlo move for the treatment of long polymer chains in the off-lattice simulation of realistic models of dense polymer systems.
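Detailed balance with a non-uniform proposal such as ParRot reduces to a Metropolis-Hastings acceptance test with the proposal-density ratio folded in. A minimal sketch follows; the generic `proposal_ratio` argument illustrates the weighted-sampling idea, not the specific ParRot Jacobian weighting.

```python
import math
import random

def metropolis_accept(delta_e, beta, proposal_ratio=1.0, rng=random):
    """Metropolis-Hastings acceptance test for one trial move.

    delta_e: energy change of the move; beta: inverse temperature.
    proposal_ratio = q(new -> old) / q(old -> new) corrects for a
    non-uniform proposal; 1.0 recovers plain Metropolis.
    Works in log space to avoid overflow for very favorable moves.
    """
    log_acc = math.log(proposal_ratio) - beta * delta_e
    return log_acc >= 0.0 or rng.random() < math.exp(log_acc)
```

With `proposal_ratio = 1.0` every downhill move is accepted and uphill moves succeed with probability exp(-beta * delta_e), which is the acceptance rule the classical Metropolis steps in the mixed move set above rely on.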
Thomas, R S; Yang, R S; Morgan, D G; Moorman, M P; Kermani, H R; Sloane, R A; O'Connor, R W; Adkins, B; Gargas, M L; Andersen, M E
1996-01-01
During a 2-year chronic inhalation study on methylene chloride (2000 or 0 ppm; 6 hr/day, 5 days/week), gas-uptake pharmacokinetic studies and tissue partition coefficient determinations were conducted on female B6C3F1 mice after 1 day, 1 month, 1 year, and 2 years of exposure. Using physiologically based pharmacokinetic (PBPK) modeling coupled with Monte Carlo simulation and bootstrap resampling for data analyses, a significant induction in the mixed function oxidase (MFO) rate constant (Vmaxc) was observed at the 1-day and 1-month exposure points when compared to concurrent control mice, while decreases in the glutathione S-transferase (GST) rate constant (Kfc) were observed in the 1-day and 1-month exposed mice. Within exposure groups, the apparent Vmaxc maintained significant increases in the 1-month and 2-year control groups. Although the same initial increase exists in the exposed group, the 2-year Vmaxc is significantly smaller than the 1-month group (p < 0.001). Within group differences in median Kfc values show a significant decrease in both 1-month and 2-year groups among control and exposed mice (p < 0.001). Although no changes in methylene chloride solubility as a result of prior exposure were observed in blood, muscle, liver, or lung, a marginal decrease in the fat:air partition coefficient was found in the exposed mice at p = 0.053. Age related solubility differences were found in muscle:air, liver:air, lung:air, and fat:air partition coefficients at p < 0.001, while the solubility of methylene chloride in blood was not affected by age (p = 0.461). As a result of this study, we conclude that age and prior exposure to methylene chloride can produce notable changes in disposition and metabolism and may represent important factors in the interpretation of toxicologic data and its application to risk assessment. PMID:8875160
Microbial contamination in poultry chillers estimated by Monte Carlo simulations
Technology Transfer Automated Retrieval System (TEKTRAN)
The risk of microbial contamination during poultry processing may be reduced by the operating characteristics of the chiller. The performance of air chillers and immersion chillers was compared in terms of pre-chill and post-chill contamination using Monte Carlo simulations. Three parameters were u...
APS undulator and wiggler sources: Monte-Carlo simulation
Xu, S.L.; Lai, B.; Viccaro, P.J.
1992-02-01
Standard insertion devices will be provided to each sector by the Advanced Photon Source. It is important to define the radiation characteristics of these general purpose devices. In this document, results of Monte-Carlo simulations are presented. These results, based on the SHADOW program, include the APS Undulator A (UA), Wiggler A (WA), and Wiggler B (WB).
Quantum Monte Carlo simulation with a black hole
NASA Astrophysics Data System (ADS)
Benić, Sanjin; Yamamoto, Arata
2016-05-01
We perform quantum Monte Carlo simulations in the background of a classical black hole. The lattice discretized path integral is numerically calculated in the Schwarzschild metric and in its approximated metric. We study spontaneous symmetry breaking of a real scalar field theory. We observe inhomogeneous symmetry breaking induced by an inhomogeneous gravitational field.
Monte Carlo Simulations of Light Propagation in Apples
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper reports on the investigation of light propagation in fresh apples in the visible and short-wave near-infrared region using Monte Carlo simulations. Optical properties of ‘Golden Delicious’ apples were determined over the spectral range of 500-1100 nm using a hyperspectral imaging method, ...
Generalized directed loop method for quantum Monte Carlo simulations.
Alet, Fabien; Wessel, Stefan; Troyer, Matthias
2005-03-01
Efficient quantum Monte Carlo update schemes called directed loops have recently been proposed, which improve the efficiency of simulations of quantum lattice models. We propose to generalize the detailed balance equations at the local level during the loop construction by accounting for the matrix elements of the operators associated with open world-line segments. Using linear programming techniques to solve the generalized equations, we look for optimal construction schemes for directed loops. This also allows for an extension of the directed loop scheme to general lattice models, such as high-spin or bosonic models. The resulting algorithms are bounce free in larger regions of parameter space than the original directed loop algorithm. The generalized directed loop method is applied to the magnetization process of spin chains in order to compare its efficiency to that of previous directed loop schemes. In contrast to general expectations, we find that minimizing bounces alone does not always lead to more efficient algorithms in terms of autocorrelations of physical observables, because of the nonuniqueness of the bounce-free solutions. We therefore propose different general strategies to further minimize autocorrelations, which can be used as supplementary requirements in any directed loop scheme. We show by calculating autocorrelation times for different observables that such strategies indeed lead to improved efficiency; however, we find that the optimal strategy depends not only on the model parameters but also on the observable of interest. PMID:15903632
Simulating rotationally inelastic collisions using a direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Schullian, O.; Loreau, J.; Vaeck, N.; van der Avoird, A.; Heazlewood, B. R.; Rennick, C. J.; Softley, T. P.
2015-12-01
A new approach to simulating rotational cooling using a direct simulation Monte Carlo (DSMC) method is described and applied to the rotational cooling of ammonia seeded into a helium supersonic jet. The method makes use of ab initio rotational state changing cross sections calculated as a function of collision energy. Each particle in the DSMC simulations is labelled with a vector of rotational populations that evolves with time. Transfer of energy into translation is calculated from the mean energy transfer for this population at the specified collision energy. The simulations are compared with a continuum model for the on-axis density, temperature and velocity; rotational temperature as a function of distance from the nozzle is in accord with expectations from experimental measurements. The method could be applied to other types of gas mixture dynamics under non-uniform conditions, such as buffer gas cooling of NH3 by He.
Radiation response of inorganic scintillators: Insights from Monte Carlo simulations
Prange, Micah P.; Wu, Dangxin; Xie, YuLong; Campbell, Luke W.; Gao, Fei; Kerisit, Sebastien N.
2014-07-24
The spatial and temporal scales of hot particle thermalization in inorganic scintillators are critical factors determining the extent of second- and third-order nonlinear quenching in regions with high densities of electron-hole pairs, which, in turn, leads to the light yield nonproportionality observed, to some degree, for all inorganic scintillators. Therefore, kinetic Monte Carlo simulations were performed to calculate the distances traveled by hot electrons and holes as well as the time required for the particles to reach thermal energy following γ-ray irradiation. CsI, a common scintillator from the alkali halide class of materials, was used as a model system. Two models of quasi-particle dispersion were evaluated, namely, the effective mass approximation model and a model that relied on the group velocities of electrons and holes determined from band structure calculations. Both models predicted rapid electron-hole pair recombination over short distances (a few nanometers) as well as a significant extent of charge separation between electrons and holes that did not recombine and reached thermal energy. However, the effective mass approximation model predicted much longer electron thermalization distances and times than the group velocity model. Comparison with limited experimental data suggested that the group velocity model provided more accurate predictions. Nonetheless, both models indicated that hole thermalization is faster than electron thermalization and thus is likely to be an important factor determining the extent of third-order nonlinear quenching in high-density regions. The merits of different models of quasi-particle dispersion are also discussed.
Data decomposition of Monte Carlo particle transport simulations via tally servers
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord
2013-11-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
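The tracking-processor/tally-server split described above can be sketched as a toy message pattern in a single process. This is an illustrative stand-in, not the OpenMC implementation; the class and function names are hypothetical:

```python
import random

class TallyServer:
    """Accumulates tally contributions streamed in by tracking processors."""
    def __init__(self, n_bins):
        self.tallies = [0.0] * n_bins

    def receive(self, bin_index, score):
        self.tallies[bin_index] += score

def track_particles(n_particles, server, n_bins, seed):
    """Tracking processor: simulates particles and sends each score to
    the tally server instead of storing tally data on the compute node."""
    rng = random.Random(seed)
    for _ in range(n_particles):
        # toy "transport": deposit a random score in a random bin
        server.receive(rng.randrange(n_bins), rng.random())

server = TallyServer(n_bins=8)
for rank in range(4):                 # four tracking processors
    track_particles(250, server, 8, seed=rank)
total = sum(server.tallies)
print(total)  # all 1000 particle scores accumulated server-side
```

In the real algorithm the `receive` calls are MPI messages to dedicated server nodes, which is what removes the tally memory from the tracking nodes.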
Monte Carlo simulation of photon-induced air showers
NASA Astrophysics Data System (ADS)
D'Ettorre Piazzoli, B.; di Sciascio, G.
1994-05-01
The EPAS code (Electron Photon-induced Air Showers) is a three-dimensional Monte Carlo simulation developed to study the properties of extensive air showers (EAS) generated by the interaction of high energy photons (or electrons) in the atmosphere. Results of the present simulation concern the longitudinal, lateral, temporal and angular distributions of electrons in atmospheric cascades initiated by photons of energies up to 10^3 TeV.
Linac Coherent Light Source Monte Carlo Simulation
2006-03-15
This suite consists of codes to generate an initial x-ray photon distribution and to propagate the photons through various objects. The suite is designed specifically for simulating the Linac Coherent Light Source, an x-ray free electron laser (XFEL) being built at the Stanford Linear Accelerator Center. The purpose is to provide sufficiently detailed characteristics of the laser to engineers who are designing the laser diagnostics.
Monte Carlo Simulation for LINAC Standoff Interrogation of Nuclear Material
Clarke, Shaun D; Flaska, Marek; Miller, Thomas Martin; Protopopescu, Vladimir A; Pozzi, Sara A
2007-06-01
The development of new techniques for the interrogation of shielded nuclear materials relies on the use of Monte Carlo codes to accurately simulate the entire system, including the interrogation source, the fissile target and the detection environment. The objective of this modeling effort is to develop analysis tools and methods, based on a relevant scenario, which may be applied to the design of future systems for active interrogation at a standoff. For the specific scenario considered here, the analysis will focus on providing the information needed to determine the type and optimum position of the detectors. This report describes the results of simulations for a detection system employing gamma rays to interrogate fissile and nonfissile targets. The simulations were performed using specialized versions of the codes MCNPX and MCNP-PoliMi. Both prompt neutron and gamma-ray fluxes and delayed neutron fluxes have been mapped in three dimensions. The time dependence of the prompt neutrons in the system has also been characterized. For this particular scenario, the flux maps generated with the Monte Carlo model indicate that the detectors should be placed approximately 50 cm behind the exit of the accelerator, 40 cm away from the vehicle, and 150 cm above the ground. This position minimizes the number of neutrons coming from the accelerator structure and also receives the maximum flux of prompt neutrons coming from the source. The lead shielding around the accelerator minimizes the gamma-ray background from the accelerator in this area. The number of delayed neutrons emitted from the target is approximately seven orders of magnitude less than the prompt neutrons emitted from the system. Therefore, in order to possibly detect the delayed neutrons, the detectors should be active only after all prompt neutrons have scattered out of the system. Preliminary results have shown this time to be greater than 5 μs after the accelerator pulse. This type of system is illustrative of a
Direct Simulation Monte Carlo Simulations of Low Pressure Semiconductor Plasma Processing
Gochberg, L. A.; Ozawa, T.; Deng, H.; Levin, D. A.
2008-12-31
The two widely used plasma deposition tools for semiconductor processing are Ionized Metal Physical Vapor Deposition (IMPVD) of metals using either planar or hollow cathode magnetrons (HCM), and inductively-coupled plasma (ICP) deposition of dielectrics in High Density Plasma Chemical Vapor Deposition (HDP-CVD) reactors. In these systems, the injected neutral gas flows are generally in the transonic to supersonic flow regime. The Hybrid Plasma Equipment Model (HPEM) has been developed and is strategically and beneficially applied to the design of these tools and their processes. For the most part, the model uses continuum-based techniques, and thus, as pressures decrease below 10 mTorr, the continuum approaches in the model become questionable. Modifications have been previously made to the HPEM to significantly improve its accuracy in this pressure regime. In particular, the Ion Monte Carlo Simulation (IMCS) was added, wherein a Monte Carlo simulation is used to obtain ion and neutral velocity distributions in much the same way as in direct simulation Monte Carlo (DSMC). As a further refinement, this work presents the first steps towards the adaptation of full DSMC calculations to replace part of the flow module within the HPEM. Six species (Ar, Cu, Ar*, Cu*, Ar+, and Cu+) are modeled in DSMC. To couple SMILE as a module to the HPEM, source functions for species, momentum and energy from plasma sources will be provided by the HPEM. The DSMC module will then compute a quasi-converged flow field that will provide neutral and ion species densities, momenta and temperatures. In this work, the HPEM results for a hollow cathode magnetron (HCM) IMPVD process using the Boltzmann distribution are compared with DSMC results using portions of those HPEM computations as an initial condition.
Monte Carlo simulation of breast imaging using synchrotron radiation
Fitousi, N. T.; Delis, H.; Panayiotakis, G.
2012-04-15
Purpose: Synchrotron radiation (SR), being the brightest artificial source of x-rays with a very promising geometry, has raised the scientific expectations that it could be used for breast imaging with optimized results. The "in situ" evaluation of this technique is difficult to perform, mostly due to the limited available SR facilities worldwide. In this study, a simulation model for SR breast imaging was developed, based on Monte Carlo simulation techniques, and validated using data acquired in the SYRMEP beamline of the Elettra facility in Trieste, Italy. Furthermore, primary results concerning the performance of SR were derived. Methods: The developed model includes the exact setup of the SR beamline, considering that the x-ray source is located at almost 23 m from the slit, while the photon energy was considered to originate from a very narrow Gaussian spectrum. Breast phantoms, made of Perspex and filled with air cavities, were irradiated with energies in the range of 16-28 keV. The model included a Gd2O2S detector with the same characteristics as the one available in the SYRMEP beamline. Following the development and validation of the model, experiments were performed in order to evaluate the contrast resolution of SR. A phantom made of adipose tissue and filled with inhomogeneities of several compositions and sizes was designed and utilized to simulate the irradiation under conventional mammography and SR conditions. Results: The validation results of the model showed an excellent agreement with the experimental data, with the correlation for contrast being 0.996. Significant differences only appeared at the edges of the phantom, where phase effects occur. The initial evaluation experiments revealed that SR shows very good performance in terms of the image quality indices utilized, namely subject contrast and contrast to noise ratio. The response of subject contrast to energy is monotonic; however, this does not stand for contrast to noise
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language for improving code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem size, which is limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.
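The key requirement for parallel Monte Carlo described above, each worker drawing from its own independent random-number stream so results are reproducible and unbiased, can be sketched with a toy π estimate. This is a generic illustration, not MC4 or the SPRNG/DCMT libraries:

```python
import random
import math

def mc_pi_worker(n_samples, seed):
    """One worker's contribution: hits inside the unit quarter-circle.
    Each worker owns a separately seeded generator, mimicking
    independent parallel random-number streams."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

n_workers, n_per_worker = 8, 20000
hits = sum(mc_pi_worker(n_per_worker, seed=w) for w in range(n_workers))
pi_est = 4.0 * hits / (n_workers * n_per_worker)
print(pi_est)  # should approach pi as the sample count grows
```

In a real MPI code each `mc_pi_worker` call would run on a separate rank and the partial sums would be combined with a reduction.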
Monte Carlo simulation studies of diffusion in crowded environments
NASA Astrophysics Data System (ADS)
Nandigrami, Prithviraj; Grove, Brandy; Konya, Andrew; Selinger, Robin
Anomalous diffusion has been observed in protein solutions and other multi-component systems due to macromolecular crowding. Using Monte Carlo simulations, we investigate mechanisms that govern anomalous diffusive transport and pattern formation in a crowded mixture. We consider a multi-component lattice gas model with ``tracer'' molecules diffusing across a density gradient in a solution containing sticky ``crowder'' molecules that cluster to form dynamically evolving obstacles. The dependence of tracer flux on crowder density shows an intriguing re-entrant behavior as a function of temperature with three distinct temperature regimes. At high temperature, crowders segregate near the tracer sink but, for low enough overall crowder density, remain sufficiently disordered to allow continuous tracer flux. At intermediate temperature, crowders segregate and block tracer flux entirely, giving rise to complex pattern formation. At low temperature, crowders aggregate to form small, slowly diffusing obstacles. The resulting tracer flux shows scaling behavior near the percolation threshold, analogous to the scenario when the obstacles are fixed and randomly distributed. Our simulations predict distinct quantitative dependence of tracer flux on crowder density in these temperature limits.
TOPICAL REVIEW: Monte Carlo modelling of external radiotherapy photon beams
NASA Astrophysics Data System (ADS)
Verhaegen, Frank; Seuntjens, Jan
2003-11-01
An essential requirement for successful radiation therapy is that the discrepancies between dose distributions calculated at the treatment planning stage and those delivered to the patient are minimized. An important component in the treatment planning process is the accurate calculation of dose distributions. The most accurate way to do this is by Monte Carlo calculation of particle transport, first in the geometry of the external or internal source followed by tracking the transport and energy deposition in the tissues of interest. Additionally, Monte Carlo simulations allow one to investigate the influence of source components on beams of a particular type and their contaminant particles. Since the mid 1990s, there has been an enormous increase in Monte Carlo studies dealing specifically with the subject of the present review, i.e., external photon beam Monte Carlo calculations, aided by the advent of new codes and fast computers. The foundations for this work were laid from the late 1970s until the early 1990s. In this paper we will review the progress made in this field over the last 25 years. The review will be focused mainly on Monte Carlo modelling of linear accelerator treatment heads but sections will also be devoted to kilovoltage x-ray units and 60Co teletherapy sources.
Numerical thermalization in particle-in-cell simulations with Monte-Carlo collisions
Lai, P. Y.; Lin, T. Y.; Lin-Liu, Y. R.; Chen, S. H.
2014-12-15
Numerical thermalization in collisional one-dimensional (1D) electrostatic (ES) particle-in-cell (PIC) simulations was investigated. Two collision models, the pitch-angle scattering of electrons by the stationary ion background and large-angle collisions between the electrons and the neutral background, were included in the PIC simulation using Monte-Carlo methods. The numerical results show that the thermalization times in both models were considerably reduced by the additional Monte-Carlo collisions, as demonstrated by comparisons with Turner's previous simulation results based on a head-on collision model [M. M. Turner, Phys. Plasmas 13, 033506 (2006)]. However, the breakdown of Dawson's scaling law in the collisional 1D ES PIC simulation is more complicated than that observed by Turner, and a revised scaling law for the numerical thermalization time in terms of the numerical parameters is derived on the basis of the simulation results obtained in this study.
Numerical study of error propagation in Monte Carlo depletion simulations
Wyant, T.; Petrovic, B.
2012-07-01
Improving computer technology and the desire to more accurately model the heterogeneity of the nuclear reactor environment have made the use of Monte Carlo depletion codes more attractive in recent years, and feasible (if not practical) even for 3-D depletion simulation. However, in this case statistical uncertainty is combined with error propagating through the calculation from previous steps. In an effort to understand this error propagation, a numerical study was undertaken to model and track individual fuel pins in four 17 x 17 PWR fuel assemblies. By changing the code's initial random number seed, the data produced by a series of 19 replica runs was used to investigate the true and apparent variance in k_eff, pin powers, and number densities of several isotopes. While this study does not intend to develop a predictive model for error propagation, it is hoped that its results can help to identify some common regularities in the behavior of uncertainty in several key parameters.
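The replica-run technique described above, repeating the whole calculation with different initial seeds and taking the spread across replicas as the true variance, can be sketched with a toy stand-in for one depletion run. The `depletion_replica` function below is hypothetical and merely emulates a noisy eigenvalue estimate:

```python
import random
import statistics

def depletion_replica(seed, n_histories=5000):
    """Toy stand-in for one Monte Carlo depletion run: a noisy
    estimate of an eigenvalue-like quantity from one seed."""
    rng = random.Random(seed)
    return sum(rng.gauss(1.0, 0.05) for _ in range(n_histories)) / n_histories

# 19 replica runs differing only in the initial random number seed
replicas = [depletion_replica(seed) for seed in range(19)]
mean_k = statistics.mean(replicas)
true_sd = statistics.stdev(replicas)  # "true" spread across replicas
print(mean_k, true_sd)
```

Comparing `true_sd` with the per-run statistical uncertainty reported by the code is what reveals whether error propagation inflates the apparent variance.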
Quantum Monte Carlo simulations of tunneling in quantum adiabatic optimization
NASA Astrophysics Data System (ADS)
Brady, Lucas T.; van Dam, Wim
2016-03-01
We explore to what extent path-integral quantum Monte Carlo methods can efficiently simulate quantum adiabatic optimization algorithms during a quantum tunneling process. Specifically we look at symmetric cost functions defined over n bits with a single potential barrier that a successful quantum adiabatic optimization algorithm will have to tunnel through. The height and width of this barrier depend on n, and by tuning these dependencies, we can make the optimization algorithm succeed or fail in polynomial time. In this article we compare the strength of quantum adiabatic tunneling with that of path-integral quantum Monte Carlo methods. We find numerical evidence that quantum Monte Carlo algorithms will succeed in the same regimes where quantum adiabatic optimization succeeds.
Monte Carlo simulations of sexual reproduction
NASA Astrophysics Data System (ADS)
Stauffer, D.; de Oliveira, P. M. C.; de Oliveira, S. Moss; dos Santos, R. M. Zorzenon
1996-02-01
Modifying the Redfield model of sexual reproduction and the Penna model of biological aging, we compare reproduction with and without recombination in age-structured populations. In contrast to Redfield and in agreement with Bernardes, we find sexual reproduction to be preferred to the asexual one. In particular, the presence of old but still reproducing males helps the survival of younger females beyond their reproductive age.
A new lattice Monte Carlo method for simulating dielectric inhomogeneity
NASA Astrophysics Data System (ADS)
Duan, Xiaozheng; Wang, Zhen-Gang; Nakamura, Issei
We present a new lattice Monte Carlo method for simulating systems involving dielectric contrast between different species by modifying an algorithm originally proposed by Maggs et al. The original algorithm is known to generate attractive interactions between particles whose dielectric constant differs from that of the solvent. Here we show that this attractive force is spurious, arising from an incorrectly biased statistical weight caused by the particle motion during the Monte Carlo moves. We propose a new, simple algorithm to resolve this erroneous sampling. We demonstrate the application of our algorithm by simulating an uncharged polymer in a solvent with a different dielectric constant. Further, we show that the electrostatic fields in ionic crystals obtained from our simulations with a relatively small simulation box correspond well with results from the analytical solution. Thus, our Monte Carlo method avoids the need for the Ewald summation used in conventional simulation methods for charged systems. This work was supported by the National Natural Science Foundation of China (21474112 and 21404103). We are grateful to the Computing Center of Jilin Province for essential support.
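The correctness criterion at issue above is that lattice Monte Carlo moves must preserve the intended statistical weight. A generic Metropolis sketch (not the Maggs-type electrostatic algorithm of the abstract) shows what unbiased sampling of a lattice model looks like: the acceptance rule uses only the energy difference, so the empirical occupation ratio recovers the Boltzmann weight:

```python
import random
import math

def metropolis_lattice(energies, n_steps, beta=1.0, seed=1):
    """Metropolis sampling of one particle on a periodic 1D lattice
    with fixed site energies; detailed balance guarantees the
    stationary distribution is proportional to exp(-beta * E)."""
    rng = random.Random(seed)
    site = 0
    counts = [0] * len(energies)
    for _ in range(n_steps):
        trial = (site + rng.choice((-1, 1))) % len(energies)
        dE = energies[trial] - energies[site]
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            site = trial
        counts[site] += 1
    return counts

energies = [0.0, 1.0, 0.0, 1.0]
counts = metropolis_lattice(energies, n_steps=200000)
# Low-energy sites should be occupied ~e^1 times more often
ratio = (counts[0] + counts[2]) / (counts[1] + counts[3])
print(ratio)
```

A biased move set, like the one the abstract identifies in the original dielectric algorithm, would shift this ratio away from the Boltzmann value even though every individual move looks plausible.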
A semianalytic Monte Carlo code for modelling LIDAR measurements
NASA Astrophysics Data System (ADS)
Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio
2007-10-01
LIDAR (LIght Detection and Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements is a useful approach for evaluating the effects of various environmental variables and scenarios, as well as of different measurement geometries and instrumental characteristics. In this regard, a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions due to the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. The contributions of molecular absorption and of ground and cloud reflection are evaluated as well. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected-value calculations are performed. Variance-reduction devices (such as forced collision, local forced collision, splitting and Russian roulette) are also provided by the code, enabling the user to drastically reduce the variance of the calculation.
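Of the variance-reduction devices named above, Russian roulette is the simplest to sketch: a low-weight photon is killed with some probability, and survivors have their weight boosted so the estimator stays unbiased. A minimal generic sketch (the threshold and survival weight are illustrative, not the code's actual parameters):

```python
import random

def russian_roulette(weight, threshold=0.1, survival_weight=0.5, rng=random):
    """Russian roulette: photons below the weight threshold survive
    with probability weight/survival_weight and take survival_weight,
    so the expected weight is conserved (unbiased termination)."""
    if weight >= threshold:
        return weight
    if rng.random() < weight / survival_weight:
        return survival_weight
    return 0.0

rng = random.Random(42)
initial = 0.05
n = 200000
total = sum(russian_roulette(initial, rng=rng) for _ in range(n))
mean_weight = total / n
print(mean_weight)  # should stay close to the initial weight 0.05
```

Most photons are terminated early, which saves tracking time, while the surviving photons carry enough weight to keep the mean unchanged.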
Monte Carlo modeling of exospheric bodies - Mercury
NASA Technical Reports Server (NTRS)
Smith, G. R.; Broadfoot, A. L.; Wallace, L.; Shemansky, D. E.
1978-01-01
In order to study the interaction with the surface, a Monte Carlo program is developed to determine the distribution with altitude as well as the global distribution of density at the surface in a single operation. The analysis presented shows that the appropriate source distribution should be Maxwell-Boltzmann flux if the particles in the distribution are to be treated as components of flux. Monte Carlo calculations with a Maxwell-Boltzmann flux source are compared with Mariner 10 UV spectrometer data. Results indicate that the presently operating models are not capable of fitting the observed Mercury exosphere. It is suggested that an atmosphere calculated with a barometric source distribution is suitable for more realistic future exospheric models.
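Sampling particles from a Maxwell-Boltzmann *flux* distribution, as the abstract argues is appropriate for a surface source, differs from sampling the density distribution: the surface-normal component is weighted by an extra factor of v. A minimal sketch of that sampling step (inverse-transform of the resulting Rayleigh form; sigma is the thermal speed scale, here set to 1):

```python
import random
import math

def sample_flux_normal_speed(sigma, rng):
    """Sample the surface-normal velocity component from a
    Maxwell-Boltzmann flux distribution, P(v) ~ v * exp(-v^2 / 2 sigma^2),
    via inverse-transform sampling of the Rayleigh form."""
    u = rng.random()
    return sigma * math.sqrt(-2.0 * math.log(1.0 - u))

rng = random.Random(7)
sigma = 1.0
samples = [sample_flux_normal_speed(sigma, rng) for _ in range(100000)]
mean_v = sum(samples) / len(samples)
# Rayleigh mean is sigma * sqrt(pi/2), about 1.2533 for sigma = 1
print(mean_v)
```
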
Monte Carlo modeling of human tooth optical coherence tomography imaging
NASA Astrophysics Data System (ADS)
Shi, Boya; Meng, Zhuo; Wang, Longzhi; Liu, Tiegen
2013-07-01
We present a Monte Carlo model for optical coherence tomography (OCT) imaging of human tooth. The model is implemented by combining the simulation of a Gaussian beam with simulation of photon propagation in a two-layer human tooth model with non-parallel surfaces through a Monte Carlo method. The geometry and the optical parameters of the human tooth model are chosen on the basis of the experimental OCT images. The results show that the simulated OCT images are qualitatively consistent with the experimental ones. Using the model, we demonstrate the following: firstly, two types of photons contribute to the OCT image of a human tooth, carrying the morphological information and the noise, respectively. Secondly, the critical imaging depth of the tooth model is obtained, and it is found to decrease significantly with increasing mineral loss, simulated as different enamel scattering coefficients. Finally, the best focus position is located below and close to the dental surface by analysis of the effect of focus positions on the OCT signal and critical imaging depth. We anticipate that this modeling will become a powerful and accurate tool for a preliminary numerical study of the OCT technique on diseases of dental hard tissue in human teeth.
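The core step of any Monte Carlo light-propagation model like the one above is sampling each photon's free path from the exponential attenuation law, s = -ln(u)/mu_t. A minimal sketch with a hypothetical enamel-like attenuation coefficient (not the paper's fitted optical parameters):

```python
import random
import math

def photon_free_path(mu_t, rng):
    """Sample a photon's free path (mm) in a turbid medium with total
    attenuation coefficient mu_t (mm^-1): s = -ln(u) / mu_t,
    with u drawn uniformly from (0, 1]."""
    return -math.log(1.0 - rng.random()) / mu_t

rng = random.Random(11)
mu_t = 2.0  # hypothetical enamel-like attenuation, mm^-1
paths = [photon_free_path(mu_t, rng) for _ in range(100000)]
mean_path = sum(paths) / len(paths)
print(mean_path)  # should approach the mean free path 1/mu_t = 0.5 mm
```

At each sampled interaction site the full model would then choose between scattering and absorption and update the photon's direction and weight.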
Monte Carlo simulations of parapatric speciation
NASA Astrophysics Data System (ADS)
Schwämmle, V.; Sousa, A. O.; de Oliveira, S. M.
2006-06-01
Parapatric speciation is studied using an individual-based model with sexual reproduction. We combine the theory of mutation accumulation for biological ageing with an environmental selection pressure that varies according to the individuals' geographical positions and phenotypic traits. Fluctuations and genetic diversity of large populations are crucial ingredients to model the features of evolutionary branching and are intrinsic properties of the model. Its implementation on a spatial lattice gives interesting insights into the population dynamics of speciation on a geographical landscape and the disruptive selection that leads to the divergence of phenotypes. Our results suggest that assortative mating is not an obligatory ingredient to obtain speciation in large populations at low gene flow.
NASA Astrophysics Data System (ADS)
Vincent, E.; Becquart, C. S.; Domain, C.
2007-02-01
The embrittlement of pressure vessel steels under irradiation has long been correlated with the presence of Cu solutes. Other solutes such as Ni, Mn and Si are now suspected of also contributing to the embrittlement. The interactions of these solutes with radiation-induced point defects thus need to be characterized properly in order to understand the elementary mechanisms behind the formation of the clusters observed upon irradiation. Ab initio calculations based on density functional theory have been performed to determine the interactions of point defects with solute atoms in dilute FeX alloys (X = Cu, Mn, Ni or Si) in order to build a database used to parameterise an atomic kinetic Monte Carlo model. Some results on irradiation damage in dilute Fe-CuNiMnSi alloys obtained with this model are presented.
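As a minimal illustration of the kinetic Monte Carlo machinery such a model rests on, the sketch below implements one residence-time (BKL) step: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. The attempt frequency and migration barriers are invented placeholders, not values from the ab initio database described in the abstract.

```python
import math
import random

# Residence-time (BKL) kinetic Monte Carlo step: choose an event with
# probability proportional to its rate, then advance the clock by an
# exponentially distributed waiting time.
KB = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius(prefactor_hz, barrier_ev, temperature_k):
    """Transition rate from an attempt frequency and a migration barrier."""
    return prefactor_hz * math.exp(-barrier_ev / (KB * temperature_k))

def kmc_step(rates, rng=random):
    """Return (chosen event index, time increment) for one KMC step."""
    total = sum(rates)
    target = rng.random() * total      # pick event i with prob rates[i]/total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if acc >= target:
            break
    dt = -math.log(1.0 - rng.random()) / total   # exponential waiting time
    return i, dt

# Toy example: a vacancy jumping toward or away from a Cu solute.  The
# barrier values are illustrative placeholders only.
T = 600.0  # K
rates = [arrhenius(6e12, 0.55, T),   # jump toward the solute
         arrhenius(6e12, 0.65, T)]   # jump away from the solute
event, dt = kmc_step(rates)
```

Because the lower barrier gives the larger rate, jumps toward the solute are selected more often, which is how solute-defect binding biases cluster formation in such models.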
Reliability Assessment of Ultrasonic Nondestructive Inspection Data Using Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Park, Ik-Keun; Kim, Hyun-Mook
2003-03-01
Ultrasonic NDE is one of the important technologies in the lifetime maintenance of nuclear power plants. An ultrasonic inspection system consists of the operator, the equipment and the procedure, and the reliability of the inspection system is governed by the capability of each. A performance demonstration round robin was conducted to quantify the capability of ultrasonic in-service inspection. A small number of teams employing procedures that met or exceeded ASME Sec. XI Code requirements inspected nuclear power plant piping containing various cracks, so that detection and sizing capability could be evaluated. In this paper, a statistical reliability assessment of ultrasonic nondestructive inspection data using Monte Carlo simulation is presented. The results of the probability of detection (POD) analysis using Monte Carlo simulation are compared to those of a logistic probability model. These results show that Monte Carlo simulation is very useful for reliability assessment with small NDE hit/miss data sets.
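A minimal sketch of the Monte Carlo side of such a POD analysis is bootstrap resampling of a small hit/miss data set, which yields a POD estimate with an uncertainty band without assuming a logistic form. The hit/miss data below are invented for illustration, not from the round robin.

```python
import random

# Bootstrap (Monte Carlo) estimate of the probability of detection (POD)
# for one crack-size class from a small hit/miss data set.
def bootstrap_pod(hits, n_resamples=5000, rng=None):
    """Return (mean POD, 5th percentile, 95th percentile) by resampling."""
    rng = rng or random.Random()
    n = len(hits)
    estimates = []
    for _ in range(n_resamples):
        resample = [hits[rng.randrange(n)] for _ in range(n)]
        estimates.append(sum(resample) / n)
    estimates.sort()
    return (sum(estimates) / n_resamples,
            estimates[int(0.05 * n_resamples)],
            estimates[int(0.95 * n_resamples)])

hits = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # 1 = detected, 0 = missed (invented)
mean_pod, lo, hi = bootstrap_pod(hits, rng=random.Random(42))
```

With only ten inspections the 90% band is wide, which is exactly the small-sample situation where resampling is more trustworthy than a parametric fit.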
Wang Jianhua; Zhang Hualin
2008-04-15
A recently developed alternative brachytherapy seed, Cs-1 Rev2 cesium-131, has begun to be used in clinical practice. The dosimetric characteristics of this source in various media, particularly in human tissues, have not been fully evaluated. The aim of this study was to calculate the dosimetric parameters for the Cs-1 Rev2 cesium-131 seed following the recommendations of the AAPM TG-43U1 report [Rivard et al., Med. Phys. 31, 633-674 (2004)] for new sources in brachytherapy applications. Dose rate constants, radial dose functions, and anisotropy functions of the source in water, Virtual Water, and relevant human soft tissues were calculated using MCNP5 Monte Carlo simulations following the TG-43U1 formalism. The results yielded dose rate constants of 1.048, 1.024, 1.041, and 1.044 cGy h{sup -1} U{sup -1} in water, Virtual Water, muscle, and prostate tissue, respectively. The conversion factor for this new source was 1.02 between Virtual Water and water, 1.006 between muscle and water, and 1.004 between prostate and water. Our calculations of anisotropy functions in a Virtual Water phantom agreed closely with Murphy's measurements [Murphy et al., Med. Phys. 31, 1529-1538 (2004)], and our calculations of the radial dose function in water and Virtual Water agree well with previous experimental and Monte Carlo studies. The TG-43U1 parameters for clinical applications in water, muscle, and prostate tissue are presented in this work.
Bipolar Monte Carlo simulation of electrons and holes in III-N LEDs
NASA Astrophysics Data System (ADS)
Kivisaari, Pyry; Sadi, Toufik; Oksanen, Jani; Tulkki, Jukka
2015-03-01
Recent measurements have generated a need to better understand the physics of hot carriers in III-nitride (III-N) light-emitting diodes (LEDs), and in particular their relation to the efficiency droop and current transport. In this article we present fully self-consistent bipolar Monte Carlo (MC) simulations of carrier transport for detailed modeling of charge transport in III-N LEDs. The simulations are performed for a prototype LED structure to study the effects of hot holes and to compare predictions given by the bipolar MC model, the previously introduced hybrid Monte Carlo-drift-diffusion (MCDD) model, and the conventional drift-diffusion (DD) model. The predictions given by the bipolar MC model and the MCDD model are observed to be almost equivalent for the studied LED. Our simulations therefore suggest that hot holes do not significantly contribute to the basic operation of multi-quantum-well LEDs, at least within the presently simulated range of material parameters. With the added hole transport simulation capabilities and fully self-consistent simulations, the bipolar Monte Carlo model provides a state-of-the-art tool to study the fine details of electron and hole dynamics in realistic LED structures. Further analysis of the results for a variety of LED structures will therefore be very useful in studying and optimizing the efficiency and current transport in next-generation LEDs.
Monte Carlo simulation with fixed steplength for diffusion processes in nonhomogeneous media
NASA Astrophysics Data System (ADS)
Ruiz Barlett, V.; Hoyuelos, M.; Mártin, H. O.
2013-04-01
Monte Carlo simulation is one of the most important tools in the study of diffusion processes. For constant diffusion coefficients, an appropriate Gaussian distribution of particle steplengths generates exact results when compared with integration of the diffusion equation. The same method, however, becomes incorrect when applied to nonhomogeneous diffusion coefficients. A simple alternative, jumping at fixed steplengths with appropriate transition probabilities, produces correct results. Here, a model for the diffusion of calcium ions in the neuromuscular junction of the crayfish is used as a test case to compare Monte Carlo simulations with fixed and Gaussian steplengths.
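The fixed-steplength idea can be sketched in a few lines: at each tick the walker jumps one fixed step left or right with a probability set by the local diffusion coefficient, and otherwise stays put. This sketch evaluates D at the departure point (the Itô convention); which convention is physically correct depends on the system being modeled, and the toy D(x) below is not the calcium model of the paper.

```python
import random

# Fixed-steplength random walk for a position-dependent diffusion
# coefficient D(x): jump +delta or -delta with probability
# p(x) = D(x) * dt / delta**2 each way, otherwise stay put.
def walk(D, x0, steps, delta, dt, rng):
    x = x0
    for _ in range(steps):
        p = D(x) * dt / delta ** 2
        assert p <= 0.5, "time step too large for this steplength"
        u = rng.random()
        if u < p:
            x += delta
        elif u < 2.0 * p:
            x -= delta
    return x

D = lambda x: 2.0 if x > 0.0 else 1.0    # diffusion twice as fast for x > 0
rng = random.Random(1)
finals = [walk(D, 0.0, 1000, 0.1, 2e-3, rng) for _ in range(200)]
```

The stability condition p <= 1/2 ties the allowed time step to the steplength, which is the price paid for keeping the steplength fixed.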
A Monte Carlo simulation approach for flood risk assessment
NASA Astrophysics Data System (ADS)
Agili, Hachem; Chokmani, Karem; Oubennaceur, Khalid; Poulin, Jimmy; Marceau, Pascal
2016-04-01
Floods are the most frequent and most damaging natural disaster in Canada. Assessing and managing the risk related to this disaster has become increasingly crucial for both local and national authorities. Brigham, a municipality located in southern Quebec Province, is one of the regions most heavily affected, with the Yamaska River overflowing two to three times per year. Since Hurricane Irene hit the region in 2011, causing considerable socio-economic damage, the implementation of mitigation measures has become a major priority for this municipality, and a preliminary study to evaluate the risk to which the region is exposed is essential. Conventionally, approaches based only on the characterization of the hazard (e.g. floodplain extent, flood depth) are adopted to study flood risk. In order to improve the knowledge of this risk, a Monte Carlo simulation approach combining information on the hazard with vulnerability-related aspects of buildings has been developed. This approach integrates three main components: hydrological modeling through flow-probability functions, hydraulic modeling using flow-submersion height functions, and the study of building damage based on damage functions adapted to the Quebec habitat. The application of this approach allows estimating the annual average cost of flood damage to buildings. The results will be useful for local authorities to support their decisions on risk management and prevention against this disaster.
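The three-component chain described above can be sketched as a Monte Carlo loop for a single building: sample an annual peak flow, convert it to a submersion depth, and apply a depth-damage function. The flow distribution, rating curve and damage curve below are invented stand-ins, not the calibrated models for Brigham.

```python
import math
import random

# Monte Carlo estimate of the expected annual flood damage to one building.
def annual_peak_flow(rng):
    """Sample an annual peak flow (m3/s) from a toy Gumbel distribution."""
    u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)
    return 80.0 - 25.0 * math.log(-math.log(u))

def depth_from_flow(q):
    """Toy rating curve: submersion depth (m) above the first floor."""
    return max(0.0, 0.01 * (q - 100.0))

def damage(depth):
    """Toy depth-damage curve, capped at the building's value."""
    return min(150000.0, 60000.0 * depth)

rng = random.Random(7)
n = 20000
avg_annual_damage = sum(damage(depth_from_flow(annual_peak_flow(rng)))
                        for _ in range(n)) / n
```

Averaging the sampled damages over many simulated years is exactly the "annual average cost" estimate the approach produces, here for one building rather than a building stock.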
Monte Carlo simulations of polyelectrolytes inside viral capsids
NASA Astrophysics Data System (ADS)
Angelescu, Daniel George; Bruinsma, Robijn; Linse, Per
2006-04-01
Structural features of polyelectrolytes such as single-stranded RNA or double-stranded DNA confined inside viral capsids, and the thermodynamics of the encapsidation of the polyelectrolyte into the viral capsid, have been examined for various polyelectrolyte lengths by using a coarse-grained model solved by Monte Carlo simulations. The capsid was modeled as a spherical shell with embedded charges and the genome as a linear jointed chain of oppositely charged beads, with sizes corresponding to those of a scaled-down T=3 virus. Counterions were explicitly included, but no salt was added. The encapsidated chain was found to be predominantly located at the inner capsid surface, in a disordered manner for flexible chains and in a spool-like structure for stiff chains. The distribution of the small ions was strongly dependent on the polyelectrolyte-capsid charge ratio. The encapsidation enthalpy was negative and its magnitude decreased with increasing polyelectrolyte length, whereas the encapsidation entropy displayed a maximum when the capsid and polyelectrolyte had equal absolute charge. The encapsidation process remained thermodynamically favorable for genome charges up to ca. 3.5 times the capsid charge. The chain stiffness had only a relatively weak effect on the thermodynamics of the encapsidation.
Quantum Monte Carlo simulations for disordered Bose systems
Trivedi, N.
1992-03-01
Interacting bosons in a random potential can be used to model {sup 3}He adsorbed in porous media, universal aspects of the superconductor-insulator transition in disordered films, and vortices in disordered type II superconductors. We study a model of bosons on a 2D square lattice with a random potential of strength V and on-site repulsion U. We first describe the path integral Monte Carlo algorithm used to simulate this system. The 2D quantum problem (at T=0) gets mapped onto a classical problem of strings or directed polymers moving in 3D, with each string representing the world line of a boson. We discuss efficient ways of sampling the polymer configurations as well as the permutations between the bosons. We calculate the superfluid density and the excitation spectrum. Using these results we distinguish between a superfluid, a localized or ``Bose glass'' insulator with gapless excitations, and a Mott insulator with a finite gap to excitations (found only at commensurate densities). We discover novel effects arising from the interplay between V and U and present preliminary results for the phase diagram at incommensurate and commensurate densities.
NASA Astrophysics Data System (ADS)
Robl, Jörg; Hergarten, Stefan
2015-04-01
Debris flows are a globally abundant threat to settlements and infrastructure in mountainous regions. Crucial inputs for hazard zone planning and mitigation strategies come from numerical models that describe granular flow on general topography by solving a depth-averaged form of the Navier-Stokes equations in combination with an appropriate flow resistance law. For debris flows, the Voellmy rheology is a widely used constitutive law describing the flow resistance. It combines a velocity-independent Coulomb friction term with a term proportional to the square of the velocity, as commonly used for turbulent flow. Parameters of the Voellmy fluid are determined by back analysis from observed events so that modelled events mimic their historical counterparts. The parameters characterizing individual debris flows show a large variability (related to fluid composition and surface roughness). Moreover, several sets of parameters may lead to a similar depositional pattern yet cause large differences in flow velocity and momentum along the flow path. Fluid volumes of hazardous debris flows are estimated by analyzing historic events, precipitation time series, hydrographs or empirical relationships that correlate fluid volumes with drainage areas of torrential catchments. Besides uncertainties in the determination of the fluid volume, the position and geometry of the initial masses of forthcoming debris flows are in general not well constrained, but they heavily influence the flow dynamics and the depositional pattern even in the run-out zones. In this study we present a new, freely available numerical description of rapid mass movements based on the GERRIS framework, and early results of a Monte Carlo simulation exploring effects of the aforementioned parameters on run-out distance, inundated area and momentum. The novel numerical model describes rapid mass movements on complex topography using the shallow water equations in Cartesian coordinates.
Monte Carlo simulation of the R/1 automated damage test
Runkel, M
1998-09-18
In this paper, a Monte Carlo computer analysis of the R/1 automated damage test procedure currently in use at LLNL is presented. This study was undertaken to quantify the intrinsic sampling errors of the R/1 ADT method for various types of optical materials, particularly KDP and fused silica, and to provide a recommended minimum number of test sites. A Gaussian (normal) distribution with 10 J/cm^{2} average fluence (μ) was used as the damage distribution model. The standard deviation (σ) of the distribution was varied to control its shape. Distributions were simulated which correspond to the damage distributions of KDP (μ/σ = 5-10) and fused silica (μ/σ ≈ 15). A measure of the variability in test results was obtained by random sampling of these distributions and construction of the cumulative failure probability "S" curves. The random samplings were performed in runs of 100 "tests" with the number of samples (i.e. sites) per test ranging from 2 to 500. For distributions with μ/σ = 5-10, the study found an intrinsic error of 3 to 5% in the maximum deviation from the distribution average when using 100-site tests. The computations also showed substantial variation in the form of the cumulative failure distribution (CFD) for any given test. The simulation results were compared to actual data from eight 100-site R/1 automated tests on a sample of rapidly grown KDP. It was found that while each 100-site damage probability curve could be fit reasonably well by a Gaussian distribution, the 800-site cumulative damage probability curve was better modeled by a lognormal distribution. The differences observed in the individual CFD curves could be accounted for by sampling errors calculated from Gaussian models.
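The core of such a sampling study is easy to reproduce: draw N damage thresholds per simulated test from a Gaussian with a chosen μ/σ, repeat for many tests, and look at how far the per-test average strays from the true mean. This is a simplified sketch of the idea, not the LLNL analysis code.

```python
import random
import statistics

# Sampling experiment: damage thresholds drawn from a Gaussian with
# average fluence mu = 10 J/cm^2 and a chosen mu/sigma ratio.
def simulated_test(mu, sigma, n_sites, rng):
    """Sample mean of n_sites damage-threshold draws (one simulated test)."""
    return statistics.fmean(rng.gauss(mu, sigma) for _ in range(n_sites))

mu = 10.0
sigma = mu / 5.0                       # mu/sigma = 5, the KDP-like case
rng = random.Random(0)
means = [simulated_test(mu, sigma, 100, rng) for _ in range(100)]
worst_dev = max(abs(m - mu) for m in means)   # worst error over 100 tests
```

Dividing worst_dev by μ gives the intrinsic relative sampling error of a 100-site test; sweeping n_sites reproduces the trade-off behind a recommended minimum number of sites.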
Monte Carlo simulation of the Neutrino-4 experiment
Serebrov, A. P.; Fomin, A. K.; Onegin, M. S.; Ivochkin, V. G.; Matrosov, L. N.
2015-12-15
Monte Carlo simulation of the two-section reactor antineutrino detector of the Neutrino-4 experiment is carried out. The scintillation-type detector is based on the inverse beta-decay reaction. The antineutrino is recorded by two successive signals from the positron and the neutron. The simulation of the detector sections and the active shielding is performed. As a result of the simulation, the distributions of photomultiplier signals from the positron and the neutron are obtained. The efficiency of the detector depending on the signal recording thresholds is calculated.
Application of MINERVA Monte Carlo simulations to targeted radionuclide therapy.
Descalle, Marie-Anne; Hartmann Siantar, Christine L; Dauffy, Lucile; Nigg, David W; Wemple, Charles A; Yuan, Aina; DeNardo, Gerald L
2003-02-01
Recent clinical results have demonstrated the promise of targeted radionuclide therapy for advanced cancer. As the success of this emerging form of radiation therapy grows, accurate treatment planning and radiation dose simulations are likely to become increasingly important. To address this need, we have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA system. The goal of the MINERVA dose calculation system is to provide 3-D Monte Carlo simulation-based dosimetry for radiation therapy, focusing on experimental and emerging applications. For molecular targeted radionuclide therapy applications, MINERVA calculates patient-specific radiation dose estimates using computed tomography to describe the patient anatomy, combined with a user-defined 3-D radiation source. This paper describes the validation of the 3-D Monte Carlo transport methods to be used in MINERVA for molecular targeted radionuclide dosimetry. It reports comparisons of MINERVA dose simulations with published absorbed fraction data for distributed, monoenergetic photon and electron sources, and for radioisotope photon emission. MINERVA simulations are generally within 2% of EGS4 results and 10% of MCNP results, but differ by up to 40% from the recommendations given in MIRD Pamphlets 3 and 8 for identical medium composition and density. For several representative source and target organs in the abdomen and thorax, specific absorbed fractions calculated with the MINERVA system are generally within 5% of those published in the revised MIRD Pamphlet 5 for 100 keV photons. However, results differ by up to 23% for the adrenal glands, the smallest of our target organs. Finally, we show examples of Monte Carlo simulations in a patient-like geometry for a source of uniform activity located in the kidney. PMID:12667310
Parallel domain decomposition methods in fluid models with Monte Carlo transport
Alme, H.J.; Rodrigues, G.H.; Zimmerman, G.B.
1996-12-01
To examine domain decomposition in a coupled Monte Carlo-finite element calculation, it is important to use a domain decomposition that is suitable for the individual models. We have developed a code that simulates a Monte Carlo calculation on a massively parallel processor. This code is used to examine the load-balancing behavior of three domain decompositions for a Monte Carlo calculation. Results are presented.
Programs for calibration-based Monte Carlo simulation of recharge areas.
Starn, J Jeffrey; Bagtzoglou, Amvrossios C
2012-01-01
One use of groundwater flow models is to simulate contributing recharge areas to wells or springs. Particle tracking can be used to simulate these recharge areas, but in many cases the modeler is not sure how accurate these recharge areas are because parameters such as hydraulic conductivity and recharge have errors associated with them. The scripts described in this article (GEN_LHS and MCDRIVER_LHS) use the Python scripting language to run a Monte Carlo simulation with Latin hypercube sampling where model parameters such as hydraulic conductivity and recharge are randomly varied for a large number of model simulations, and the probability of a particle being in the contributing area of a well is calculated based on the results of multiple simulations. Monte Carlo simulation provides one useful measure of the variability in modeled particles. The Monte Carlo method described here is unique in that it uses parameter sets derived from the optimal parameters, their standard deviations, and their correlation matrix, all of which are calculated during nonlinear regression model calibration. In addition, this method uses a set of acceptance criteria to eliminate unrealistic parameter sets. PMID:21967487
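The distinctive step described above, drawing parameter sets from the calibration outputs (optimal values, standard deviations, correlation matrix), can be sketched for two parameters with a 2x2 Cholesky factor. For simplicity this sketch uses plain random sampling rather than the Latin hypercube scheme of the scripts, and all numbers are illustrative, not from the article.

```python
import math
import random

# Correlated parameter sets (say, hydraulic conductivity K and recharge R)
# drawn from a bivariate normal defined by calibration outputs.
def correlated_pair(mean_k, sd_k, mean_r, sd_r, rho, rng):
    """Sample (K, R) from a bivariate normal via a 2x2 Cholesky factor."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rng.gauss(0.0, 1.0)
    k = mean_k + sd_k * z1
    r = mean_r + sd_r * (rho * z1 + math.sqrt(1.0 - rho ** 2) * z2)
    return k, r

rng = random.Random(3)
sets = [correlated_pair(30.0, 5.0, 0.6, 0.1, 0.7, rng) for _ in range(5000)]
# Acceptance criterion in the spirit of the article: keep physical values.
accepted = [(k, r) for k, r in sets if k > 0.0 and r > 0.0]
```

Each accepted pair would drive one forward model run, and the fraction of runs in which a particle lands in a well's contributing area estimates the capture probability.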
Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith
2011-07-01
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to the unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
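The functional expansion tally idea can be illustrated outside any transport code: estimate Legendre coefficients of a source shape from sampled event positions, then reconstruct a smooth profile, as one would before transferring it to a finite element mesh. The sampled shape below is an invented stand-in, not an OpenMC tally.

```python
import random

# Sketch of a functional expansion tally (FET) in one dimension.
def legendre(n, x):
    """P_n(x) via the three-term recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def fet_coefficients(samples, order):
    """a_n = (2n+1)/2 * mean of P_n(x) over scored positions in [-1, 1]."""
    n_s = len(samples)
    return [(2 * n + 1) / 2.0 * sum(legendre(n, x) for x in samples) / n_s
            for n in range(order + 1)]

def reconstruct(coeffs, x):
    return sum(a * legendre(n, x) for n, a in enumerate(coeffs))

# Sample positions from a shape f(x) proportional to (1 - x^2) by rejection.
rng = random.Random(2)
samples = []
while len(samples) < 20000:
    x = rng.uniform(-1.0, 1.0)
    if rng.random() < 1.0 - x * x:
        samples.append(x)

coeffs = fet_coefficients(samples, order=4)
profile_mid = reconstruct(coeffs, 0.0)
```

A handful of coefficients captures the whole profile, which is why an FET transfers a tallied distribution to an unstructured mesh far more cheaply than a fine histogram would.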
Rapid Monte Carlo simulation of detector DQE(f)
Star-Lack, Josh; Sun, Mingshan; Meyer, Andre; Morf, Daniel; Constantin, Dragos; Fahrig, Rebecca; Abel, Eric
2014-01-01
Purpose: Performance optimization of indirect x-ray detectors requires proper characterization of both ionizing (gamma) and optical photon transport in a heterogeneous medium. As the tool of choice for modeling detector physics, Monte Carlo methods have failed to gain traction as a design utility, due mostly to excessive simulation times and a lack of convenient simulation packages. The most important figure-of-merit in assessing detector performance is the detective quantum efficiency (DQE), for which most of the computational burden has traditionally been associated with the determination of the noise power spectrum (NPS) from an ensemble of flood images, each conventionally having 10{sup 7} − 10{sup 9} detected gamma photons. In this work, the authors show that the idealized conditions inherent in a numerical simulation allow for a dramatic reduction in the number of gamma and optical photons required to accurately predict the NPS. Methods: The authors derived an expression for the mean squared error (MSE) of a simulated NPS when computed using the International Electrotechnical Commission-recommended technique based on taking the 2D Fourier transform of flood images. It is shown that the MSE is inversely proportional to the number of flood images, and is independent of the input fluence provided that the input fluence is above a minimal value that avoids biasing the estimate. The authors then propose to further lower the input fluence so that each event creates a point-spread function rather than a flood field. The authors use this finding as the foundation for a novel algorithm in which the characteristic MTF(f), NPS(f), and DQE(f) curves are simultaneously generated from the results of a single run. The authors also investigate lowering the number of optical photons used in a scintillator simulation to further increase efficiency. Simulation results are compared with measurements performed on a Varian AS1000 portal imager, and with a previously published simulation
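The flood-image NPS recipe, and the claim that the error of the estimate falls with the number of flood images, can be demonstrated with synthetic data. Here the "detector" is pure uncorrelated Poisson noise (so the true NPS is flat); the pixel sizes and fluence are arbitrary choices, not the paper's setup.

```python
import numpy as np

# NPS from an ensemble of simulated flood images via the 2D Fourier
# transform recipe: average |FFT|^2 of mean-subtracted images.
def nps(images):
    """Average |FFT|^2 of mean-subtracted flood images (unit pixel pitch)."""
    n = len(images)
    ny, nx = images[0].shape
    acc = np.zeros((ny, nx))
    for img in images:
        acc += np.abs(np.fft.fft2(img - img.mean())) ** 2
    return acc / (n * nx * ny)

rng = np.random.default_rng(0)

def flood(mean_counts=1000.0, shape=(32, 32)):
    return rng.poisson(mean_counts, shape).astype(float)

nps_few = nps([flood() for _ in range(4)])
nps_many = nps([flood() for _ in range(64)])
scatter_few = nps_few.std() / nps_few.mean()
scatter_many = nps_many.std() / nps_many.mean()   # smaller: more images
```

For Poisson noise the NPS should sit near the per-pixel variance, and the bin-to-bin scatter of the estimate shrinks as more flood images are averaged, consistent with the MSE scaling the paper derives.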
Monte Carlo simulation of two-component aerosol processes
NASA Astrophysics Data System (ADS)
Huertas, Jose Ignacio
Aerosol processes have been extensively used for the production of nanophase materials. However, when temperatures and number densities are high, particle agglomeration is a serious drawback for these techniques. This problem can be addressed by encapsulating the particles with a second material before they agglomerate. These encapsulated particles will agglomerate, but the primary particles within them will not. When the encapsulation is later removed, the resulting powder will contain only weakly agglomerated particles. To demonstrate the applicability of the particle encapsulation method for the production of high purity unagglomerated nanosize materials, tungsten (W) and tungsten titanium alloy (W-Ti) particles were synthesized in a sodium/halide flame. The particles were characterized by XRD, SEM, TEM and EDAX. The particles appeared unagglomerated, cubic and hexagonal in shape, and had a size of 30-50 nm. No contamination was detected even after extended exposure to atmospheric conditions. The nanosized W and W-Ti particles were consolidated into pellets of 6 mm in diameter and 6-8 mm long. Hardness measurements indicate values four times that of conventional tungsten. 100% densification was achieved by hot isostatic pressing (HIPing) of the samples. To study the particle encapsulation method, a code to simulate particle formation in two-component aerosols was developed. The simulation was carried out using a Monte Carlo technique. This approach allowed for the treatment of both probabilistic and deterministic events. Thus, the coagulation term of the general dynamic equation (GDE) was simulated by Monte Carlo, and the condensation term was solved analytically and incorporated into the model. The model includes condensation, coagulation, sources, and sinks for two-component aerosol processes. The Kelvin effect has been included in the model as well. The code is general and does not suffer from problems associated with mass conservation, high rates of condensation and approximations on particle composition. It has
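A minimal stochastic treatment of the coagulation term of the GDE can be sketched with a Gillespie-type loop and a constant collision kernel. Each particle carries a two-component mass (a, b) so that coagulation conserves both components exactly; the kernel, counts and times are toy stand-ins for the two-component model described above.

```python
import math
import random

# Gillespie-type Monte Carlo coagulation with a constant kernel.
def coagulate(particles, kernel, t_end, rng):
    """particles: list of (mass_a, mass_b); returns the list at t_end."""
    t = 0.0
    while len(particles) > 1:
        n = len(particles)
        rate = kernel * n * (n - 1) / 2.0        # total pair-collision rate
        t += -math.log(1.0 - rng.random()) / rate
        if t > t_end:
            break
        i, j = rng.sample(range(n), 2)           # random colliding pair
        merged = (particles[i][0] + particles[j][0],
                  particles[i][1] + particles[j][1])
        particles = [p for k, p in enumerate(particles) if k not in (i, j)]
        particles.append(merged)
    return particles

rng = random.Random(5)
start = [(1.0, 0.0)] * 50 + [(0.0, 1.0)] * 50    # W-like and Ti-like monomers
end = coagulate(list(start), kernel=0.01, t_end=5.0, rng=rng)
```

Because merging simply adds the two mass components, total mass of each species is conserved to machine precision, which is the mass-conservation property the abstract emphasizes.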
Quantum Monte Carlo Simulations of Adulteration Effect on Bond Alternating Spin-1/2 Chain
NASA Astrophysics Data System (ADS)
Zhang, Peng; Xu, Zhaoxin; Ying, Heping; Dai, Jianhui; Crompton, Peter
The S=1/2 Heisenberg chain with bond alternation and randomness of antiferromagnetic (AFM) and ferromagnetic (FM) interactions is investigated by quantum Monte Carlo simulations using the loop/cluster algorithm. Our results reveal interesting finite-temperature magnetic properties of this model. The relevance of our study to previous investigations is discussed.
Monte Carlo simulations of atmospheric spreading functions for space-borne optical sensors
NASA Technical Reports Server (NTRS)
Kiang, R. K.
1982-01-01
A Monte Carlo radiative transfer model is used to simulate atmospheric spreading effects. The spreading functions for several vertical aerosol profiles are obtained. The dependence on atmospheric conditions and aerosol properties is investigated, and the importance of the effect for MSS and TM measurements is assessed.
PEGASUS. 3D Direct Simulation Monte Carlo Code Which Solves for Geometries
Bartel, T.J.
1998-12-01
Pegasus is a 3D direct simulation Monte Carlo code that solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus, as well as a real vacuum pump model. The code includes multiple species transport.
Computer Monte Carlo simulation in quantitative resource estimation
Root, D.H.; Menzie, W.D.; Scott, W.A.
1992-01-01
The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of the types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska, and the second is for lode tin deposits on the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
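Step (3) of the outlined method can be sketched as a short Monte Carlo loop: draw a number of deposits from an elicited discrete distribution, then draw a lognormal tonnage and grade for each. The probabilities and lognormal parameters below are invented, and the grade-tonnage dependencies the text highlights are ignored here for simplicity.

```python
import random

# Monte Carlo combination of deposit counts with grade-tonnage draws.
def contained_metal(rng):
    u = rng.random()                       # step (2): number of deposits
    n = 0 if u < 0.5 else (1 if u < 0.8 else 2)
    total = 0.0
    for _ in range(n):
        tonnage = rng.lognormvariate(13.0, 1.5)   # tonnes of ore
        grade = rng.lognormvariate(-6.0, 0.7)     # metal mass fraction
        total += tonnage * grade                  # tonnes of contained metal
    return total

rng = random.Random(11)
draws = sorted(contained_metal(rng) for _ in range(10000))
p50, p90 = draws[5000], draws[9000]       # percentiles of contained metal
```

The resulting distribution is heavily skewed, with a large mass at zero (no deposits present), which is why such assessments are reported as percentiles rather than a single expected value.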
Monte Carlo-based simulation of dynamic jaws tomotherapy
Sterpin, E.; Chen, Y.; Chen, Q.; Lu, W.; Mackie, T. R.; Vynckier, S.
2011-09-15
Purpose: Original TomoTherapy systems may involve a trade-off between conformity and treatment speed, the user being limited to three slice widths (1.0, 2.5, and 5.0 cm). This could be overcome by allowing the jaws to define arbitrary fields, including very small slice widths (<1 cm), which are challenging for a beam model. The aim of this work was to incorporate the dynamic jaws feature into a Monte Carlo (MC) model called TomoPen, based on the MC code PENELOPE, previously validated for the original TomoTherapy system. Methods: To keep the general structure of TomoPen and its efficiency, the simulation strategy introduces several techniques: (1) weight modifiers to account for any jaw settings using only the 5 cm phase-space file; (2) a simplified MC based model called FastStatic to compute the modifiers faster than pure MC; (3) actual simulation of dynamic jaws. Weight modifiers computed with both FastStatic and pure MC were compared. Dynamic jaws simulations were compared with the convolution/superposition (C/S) of TomoTherapy in the ''cheese'' phantom for a plan with two targets longitudinally separated by a gap of 3 cm. Optimization was performed in two modes: asymmetric jaws-constant couch speed (''running start stop,'' RSS) and symmetric jaws-variable couch speed (''symmetric running start stop,'' SRSS). Measurements with EDR2 films were also performed for RSS for the formal validation of TomoPen with dynamic jaws. Results: Weight modifiers computed with FastStatic were equivalent to pure MC within statistical uncertainties (0.5% for three standard deviations). Excellent agreement was achieved between TomoPen and C/S for both asymmetric jaw opening/constant couch speed and symmetric jaw opening/variable couch speed, with deviations well within 2%/2 mm. For the RSS procedure, agreement between C/S and measurements was within 2%/2 mm for 95% of the points and 3%/3 mm for 98% of the points, where dose is greater than 30% of the prescription dose (gamma analysis
From Electrodynamics to Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Rička, Jaro; Frenz, Martin
Biological tissues are highly heterogeneous condensed media, whose optical properties are quite difficult to model. In fact, it is much easier to say what tissues are not: they are not dilute suspensions of scattering sphere-like particles. This statement sounds much like a truism, which it is. However, the mathematics employed to describe light propagation in tissues grew out of the propagation theory developed for incoherent stellar light in interstellar atmospheres [1], which certainly are such particle suspensions, even though extremely dilute. Surprisingly, these techniques are highly successful in propagating coherent laser light in tissues. The suggestive power of this success strongly influences the way we think about tissue optics: when thinking about scattering in tissues, one inevitably has Mie particles in mind. New approaches to tissue optics are now emerging. Instead of picturing the tissue as a cloud of independent particles, one seeks to characterize its random dense structure in terms of density correlation functions or spatial power spectra [2]. These developments appear to have drawn their inspiration from two sources. One is wave propagation in the turbulent atmosphere, as discussed, e.g., in [3]; the second is the set of concepts from statistical physics originally developed for small-angle X-ray and neutron scattering [2, 4-6] in soft condensed matter. Since soft condensed matter seems to be much closer to biological tissue than the turbulent atmosphere, we shall pursue the second path. In small-angle scattering one usually employs two assumptions: (i) the interaction of the radiation with the matter is weak, so that the scattering can be treated in the first Born approximation; multiple scattering is usually negligible; (ii) because of the inherently small scattering angle (typically 0.1-5°), polarization can be neglected. Fortunately, we can retain the first part of assumption (i), because the
Adaptive mesh and algorithm refinement using direct simulation Monte Carlo
Garcia, A.L.; Bell, J.B.; Crutchfield, W.Y.; Alder, B.J.
1999-09-01
Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.
Monte Carlo simulation of particle acceleration at astrophysical shocks
NASA Technical Reports Server (NTRS)
Campbell, Roy K.
1989-01-01
A Monte Carlo code was developed for the simulation of particle acceleration at astrophysical shocks. The code is implemented in Turbo Pascal on a PC. It is modularized and structured in such a way that modification and maintenance are relatively painless. Monte Carlo simulations of particle acceleration at shocks follow the trajectories of individual particles as they scatter repeatedly across the shock front, gaining energy with each crossing. The particles are assumed to scatter from magnetohydrodynamic (MHD) turbulence on both sides of the shock. A scattering law is used which is related to the assumed form of the turbulence and to the particle and shock parameters. High-energy cosmic-ray spectra derived from Monte Carlo simulations exhibit the same power-law behavior as spectra derived from analytic calculations based on a diffusion equation, and this high-energy behavior is not sensitive to the scattering law used. In contrast with Monte Carlo calculations, diffusive calculations rely on the initial injection of supra-thermal particles into the shock environment; Monte Carlo simulations are the only known way to describe the extraction of particles directly from the thermal pool, which is the principal strength of the Monte Carlo approach. Acceleration efficiency is an important question in shock acceleration, and we examined whether shock waves are efficient enough to account for the observed flux of high-energy galactic cosmic rays. The efficiency of the acceleration process depends on the details of the thermal-particle pick-up and hence on the low-energy scattering. One of the goals is the self-consistent derivation of the accelerated-particle spectra and the MHD turbulence spectra: presumably the upstream turbulence, which scatters the particles so they can be accelerated, is excited by the streaming accelerated particles, and the needed downstream turbulence is convected from the upstream region. The present code is to be modified to include a better
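The core mechanism described above, repeated shock crossings with a fractional energy gain per cycle and a fixed escape probability, reproduces the power-law spectrum analytically and numerically. A minimal test-particle sketch in Python (not the Turbo Pascal code of the paper; the gain and escape probability below are arbitrary illustrative values):

```python
import math
import random

def fermi_accelerate(n_particles, escape_prob=0.1, gain=0.1, e0=1.0):
    """Test-particle sketch of shock acceleration: each shock-crossing cycle
    multiplies the particle energy by (1 + gain); after each cycle the
    particle escapes downstream with probability escape_prob.  The integral
    spectrum is then a power law N(>E) ~ E^-s with
    s = -ln(1 - escape_prob) / ln(1 + gain)."""
    energies = []
    for _ in range(n_particles):
        e = e0
        while random.random() > escape_prob:
            e *= 1.0 + gain
        energies.append(e)
    return energies

random.seed(1)
es = fermi_accelerate(200_000)
# Estimate the integral slope over one decade of the spectrum
n_lo = sum(e > 10.0 for e in es)
n_hi = sum(e > 100.0 for e in es)
slope = math.log(n_lo / n_hi) / math.log(10.0)
expected = -math.log(1 - 0.1) / math.log(1.1)
```

The measured `slope` agrees with the analytic `expected` value up to discreteness and sampling noise, which is the power-law consistency the abstract notes between Monte Carlo and diffusion-equation results.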
Parallel Performance Optimization of the Direct Simulation Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gao, Da; Zhang, Chonglin; Schwartzentruber, Thomas
2009-11-01
Although the direct simulation Monte Carlo (DSMC) particle method is more computationally intensive compared to continuum methods, it is accurate for conditions ranging from continuum to free-molecular, accurate in highly non-equilibrium flow regions, and holds potential for incorporating advanced molecular-based models for gas-phase and gas-surface interactions. As available computer resources continue their rapid growth, the DSMC method is continually being applied to increasingly complex flow problems. Although processor clock speed continues to increase, a trend of increasing multi-core-per-node parallel architectures is emerging. To effectively utilize such current and future parallel computing systems, a combined shared/distributed memory parallel implementation (using both Open Multi-Processing (OpenMP) and Message Passing Interface (MPI)) of the DSMC method is under development. The parallel implementation of a new state-of-the-art 3D DSMC code employing an embedded 3-level Cartesian mesh will be outlined. The presentation will focus on performance optimization strategies for DSMC, which includes, but is not limited to, modified algorithm designs, practical code-tuning techniques, and parallel performance optimization. Specifically, key issues important to the DSMC shared memory (OpenMP) parallel performance are identified as (1) granularity, (2) load balancing, (3) locality, and (4) synchronization. Challenges and solutions associated with these issues as they pertain to the DSMC method will be discussed.
Monte Carlo Simulations of the Inside Intron Recombination
NASA Astrophysics Data System (ADS)
Cebrat, Stanisław; PȨKALSKI, Andrzej; Scharf, Fabian
Biological genomes are divided into coding and non-coding regions. Introns are non-coding parts within genes, while the remaining non-coding parts are intergenic sequences. To study the evolutionary significance of inside-intron recombination we have used two models based on the Monte Carlo method. In our computer simulations we have implemented the internal structure of genes by declaring the probability of recombination between exons. One situation in which inside-intron recombination is advantageous is the recovery of functional genes by combining proper exons dispersed in the genetic pool of the population after a long period without selection for the function of the gene; populations then have to pass through a bottleneck. Such events are rather rare, and we expected that there should be other phenomena that profit from inside-intron recombination. In fact, we have found that inside-intron recombination is advantageous only when, after recombination, the parental haplotypes are available besides the recombinant forms and selection already acts on gametes.
Catastrophic rupture of lunar rocks - A Monte Carlo simulation
NASA Technical Reports Server (NTRS)
Hoerz, F.; Schneider, E.; Gault, D. E.; Hartung, J. B.; Brownlee, D. E.
1975-01-01
A computer model based on Monte Carlo techniques was developed to simulate the destruction of lunar rocks by 'catastrophic rupture' due to meteoroid impact. Energies necessary to accomplish catastrophic rupture were derived from laboratory experiments. A crater-production rate derived from lunar rocks was utilized to calculate absolute time scales. Calculated median survival times for crystalline lunar rocks are 1.9, 4.6, 10.3, and 22 m.y. for rock masses of 10, 100, 1000, and 10,000 g, respectively. Corresponding times of 6, 14.5, 32, and 68 million years are required before the probability of destruction reaches 0.99. These results are consistent with absolute exposure ages measured on returned rocks. Some results also substantiate previous conclusions that the catastrophic-rupture process is significantly more effective in obliterating lunar rocks than mass wasting by single-particle abrasion. The view is also corroborated that most rocks presently on the lunar surface either are exhumed from the regolith or are fragments of much larger boulders rather than primary ejecta excavated from pristine bedrock.
Monte Carlo Simulations of the Response of the MARIE Instrument
NASA Technical Reports Server (NTRS)
Andersen, V.; Lee, K.; Pinsky, L.; Atwell, W.; Cleghorn, T.; Cucinotta, F.; Saganti, P.; Turner, R.; Zeitlin, C.
2003-01-01
The MARIE instrument aboard Mars Odyssey functions as a telescope for the detection of charged, energetic nuclei. The directionality that leads to the telescope description is achieved by requiring coincident signals in two designated detectors in MARIE's silicon detector stack for the instrument to trigger. Because of this, MARIE is actually a bidirectional telescope. Triggering particles can enter the detector stack by passing through the lightly shielded front of the instrument, but can also enter the back of the instrument by passing through the bulk of Odyssey. Consequently, relating the signals recorded by MARIE to astrophysically important quantities, such as particle fluxes or spectra exterior to the spacecraft, clearly requires detailed modeling of the physical interactions that occur as the particles pass through the spacecraft and the instrument itself. To facilitate the calibration of the MARIE data, we have begun a program to simulate the response of MARIE using the FLUKA [1] [2] Monte Carlo radiation transport code.
NASA Astrophysics Data System (ADS)
Belinato, W.; Santos, W. S.; Paschoal, C. M. M.; Souza, D. N.
2015-06-01
The combination of positron emission tomography (PET) and computed tomography (CT) has been extensively used in oncology for diagnosis and staging of tumors, radiotherapy planning and follow-up of patients with cancer, as well as in cardiology and neurology. This study determines by the Monte Carlo method the internal organ dose deposition for computational phantoms created by multidetector CT (MDCT) beams of two PET/CT devices operating with different parameters. The different MDCT beam parameters were largely related to the total filtration that provides a beam energetic change inside the gantry. This parameter was determined experimentally with the Accu-Gold Radcal measurement system. The experimental values of the total filtration were included in the simulations of two MCNPX code scenarios. The absorbed organ doses obtained in MASH and FASH phantoms indicate that bowtie filter geometry and the energy of the X-ray beam have significant influence on the results, although this influence can be compensated by adjusting other variables such as the tube current-time product (mAs) and pitch during PET/CT procedures.
SIMIND Monte Carlo simulation of a single photon emission CT
Bahreyni Toossi, M. T.; Islamian, J. Pirayesh; Momennezhad, M.; Ljungberg, M.; Naseri, S. H.
2010-01-01
In this study, we simulated a Siemens E.CAM SPECT system using the SIMIND Monte Carlo program to obtain its experimental characterization in terms of energy resolution, sensitivity, spatial resolution, and imaging of phantoms using 99mTc. The experimental and simulation data for SPECT imaging were acquired from a point source and a Jaszczak phantom. Verification of the simulation was done by comparing two sets of images and related data obtained from the actual and simulated systems. Image quality was assessed by comparing image contrast and resolution. Simulated and measured energy spectra (with and without a collimator) and spatial resolution from point sources in air were compared. The resulting energy spectra show similar peaks for the 99mTc gamma energy at 140 keV. The FWHM was calculated to be 14.01 keV for the simulation and 13.80 keV for the experimental data, corresponding to energy resolutions of 10.01% and 9.86%, respectively, compared to the specified 9.9% for both systems. Sensitivities of the real and virtual gamma cameras were calculated to be 85.11 and 85.39 cps/MBq, respectively. The energy spectra of the simulated and real gamma cameras matched. Images obtained from the Jaszczak phantom, experimentally and by simulation, showed similar contrast and resolution. SIMIND Monte Carlo could successfully simulate the Siemens E.CAM gamma camera. The results validate the use of the simulated system for further investigation, including modification, planning, and developing a SPECT system to improve the quality of images. PMID:20177569
Monte Carlo simulation studies of backscatter factors in mammography
Chan, H.P.; Doi, K.
1981-04-01
Experimentally determined backscatter factors in mammography can contain significant systematic errors due to the energy response, dimensions, and location of the dosimeter used. In this study, the Monte Carlo method was applied to simulate photon scattering in tissue-equivalent media and to determine backscatter factors without the interference of a detector. The physical processes of measuring backscatter factors with a lithium fluoride thermoluminescent dosimeter (TLD) and an ideal tissue-equivalent detector were also simulated. Computed results were compared with the true backscatter factors and with measured values reported by other investigators. It was found that the TLD method underestimated backscatter factors in mammography by as much as 10% at high energies.
Monte Carlo computer simulation of sedimentation of charged hard spherocylinders.
Viveros-Méndez, P X; Gil-Villegas, Alejandro; Aranda-Espinoza, S
2014-07-28
In this article we present a NVT Monte Carlo computer simulation study of sedimentation of an electroneutral mixture of oppositely charged hard spherocylinders (CHSC) with aspect ratio L/σ = 5, where L and σ are the length and diameter of the cylinder and hemispherical caps, respectively, for each particle. This system is an extension of the restricted primitive model for spherical particles, where L/σ = 0, and it is assumed that the ions are immersed in a structureless solvent, i.e., a continuum with dielectric constant D. The system consisted of N = 2000 particles and the Wolf method was implemented to handle the Coulombic interactions of the inhomogeneous system. Results are presented for different values of the strength ratio between the gravitational and electrostatic interactions, Γ = (mgσ)/(e^2/Dσ), where m is the mass per particle, e is the electron's charge and g is the gravitational acceleration value. A semi-infinite simulation cell was used with dimensions Lx ≈ Ly and Lz = 5Lx, where Lx, Ly, and Lz are the box dimensions in Cartesian coordinates, and the gravitational force acts along the z-direction. Sedimentation effects were studied by looking at every layer formed by the CHSC along the gravitational field. By increasing Γ, particles tend to get more packed at each layer and to arrange in local domains with an orientational ordering along two perpendicular axes, a feature not observed in the uncharged system with the same hard-body geometry. This type of arrangement, known as the tetratic phase, has been observed in two-dimensional systems of hard rectangles and rounded hard squares. In this way, the coupling of gravitational and electric interactions in the CHSC system induces the arrangement of particles in layers, with the formation of quasi-two-dimensional tetratic phases near the surface. PMID:25084954
Global Monte Carlo Simulation with High Order Polynomial Expansions
William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin
2007-12-13
The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as “local” piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi’s method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source
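The central idea of the FET, that each expansion coefficient is just a mean of a basis function over the random-walk samples, can be shown in a few lines. Below is a hedged Python sketch using a global Legendre basis on [-1, 1] (the density, sample counts, and expansion order are illustrative, not from the project):

```python
import random

def legendre(n, x):
    """Legendre polynomial P_n(x) via the Bonnet recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def fet_coefficients(samples, order):
    """FET sketch: estimate expansion coefficients of a sampled density f(x)
    on [-1, 1].  By orthogonality, c_n = (2n+1)/2 * E[P_n(x)], so each
    coefficient is a simple tally (sample mean of P_n); the n = 0 term
    recovers the flat, histogram-like mode mentioned in the abstract."""
    m = len(samples)
    return [(2 * n + 1) / 2 * sum(legendre(n, x) for x in samples) / m
            for n in range(order + 1)]

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(50_000)]  # flat "fission source"
cs = fet_coefficients(xs, order=4)
# For a uniform density f(x) = 1/2: c_0 = 0.5 and all higher coefficients ~ 0
```

Sampling fission sites from the reconstructed expansion, rather than from the discrete bank, is what lets modes span the geometry and improve source convergence.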
Pattern Recognition for a Flight Dynamics Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; Hurtado, John E.
2011-01-01
The design, analysis, and verification and validation of a spacecraft relies heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
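The classification step described above can be sketched compactly. The following Python fragment is an illustrative stand-in for the tool, not its actual code: it implements only the k-nearest-neighbor vote on dispersed parameter vectors, with a toy "failure" rule invented for the example.

```python
import math
import random

def knn_predict(train, labels, x, k=5):
    """k-nearest-neighbor sketch of the classifier used to flag failure
    regions in Monte Carlo dispersion data: label a new parameter vector
    by majority vote of its k closest training cases (Euclidean distance)."""
    nearest = sorted(
        (math.dist(x, t), lab) for t, lab in zip(train, labels)
    )[:k]
    votes = sum(lab for _, lab in nearest)
    return 1 if votes * 2 > k else 0

# Toy Monte Carlo set: runs "fail" (label 1) only when both design
# parameters are high -- a specific combination to be avoided.
random.seed(2)
runs = [(random.random(), random.random()) for _ in range(500)]
labels = [1 if a > 0.7 and b > 0.7 else 0 for a, b in runs]
pred_fail = knn_predict(runs, labels, (0.9, 0.9))  # inside the failure corner
pred_ok = knn_predict(runs, labels, (0.2, 0.2))    # benign region
```

Ranking which parameters drive the vote (the sequential-feature-selection step in the paper) would wrap this classifier in a loop over candidate feature subsets.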
Zhang, Sean X; Gao, Junfang; Buchholz, Thomas A; Wang, Zhonglu; Salehpour, Mohammad R; Drezek, Rebekah A; Yu, Tse-Kuan
2009-08-01
Gold nanoparticles can enhance the biological effective dose of radiation delivered to tumors, but few data exist to quantify this effect. The purpose of this project was to build a Monte Carlo simulation model to study the degree of dose enhancement achievable with gold nanoparticles. A Monte Carlo simulation model was first built using Geant4 code. An Ir-192 brachytherapy source in a water phantom was simulated and the calculation model was first validated against previously published data. We then introduced up to 10^13 gold nanospheres per cm^3 into the water phantom and examined their dose enhancement effect. We compared this enhancement against a gold-water mixture model that has been previously used to attempt to quantify nanoparticle dose enhancement. In our benchmark test, dose-rate constant, radial dose function, and two-dimensional anisotropy function calculated with our model were within 2% of those reported previously. Using our simulation model we found that the radiation dose was enhanced up to 60% with 10^13 gold nanospheres per cm^3 (9.6% by weight) in a water phantom selectively around the nanospheres. The comparison study indicated that our model more accurately calculated the dose enhancement effect and that previous methodologies overestimated the dose enhancement up to 16%. Monte Carlo calculations demonstrate that biologically-relevant radiation dose enhancement can be achieved with the use of gold nanospheres. Selective tumor labeling with gold nanospheres may be a strategy for clinically enhancing radiation effects. PMID:19381816
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
NASA Astrophysics Data System (ADS)
Pratx, Guillem; Xing, Lei
2011-12-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes.
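The Map/Reduce decomposition described above, independent map tasks generating photon histories and a reduce task merging absorption tallies, can be emulated without Hadoop. The sketch below is an assumption-laden stand-in: a crude 1D absorption-weighted random walk replaces the MC321 physics, and plain function calls replace the framework.

```python
import random
from functools import reduce

def map_photons(seed, n_photons, mu_a=0.1, mu_s=0.9):
    """Map-task sketch: simulate n_photons independent histories and emit
    {depth_bin: absorbed_weight} tallies.  Each task carries its own seeded
    RNG, since map tasks run independently and may be re-executed on failure."""
    rng = random.Random(seed)
    out = {}
    depth_bins = 10
    for _ in range(n_photons):
        depth, weight = 0.0, 1.0
        while weight > 1e-4:  # terminate nearly-absorbed photons
            depth += rng.expovariate(mu_a + mu_s)       # free path
            absorbed = weight * mu_a / (mu_a + mu_s)    # absorption weighting
            b = min(int(depth), depth_bins - 1)
            out[b] = out.get(b, 0.0) + absorbed
            weight -= absorbed
    return out

def reduce_scores(a, b):
    """Reduce-task sketch: merge absorption tallies from two map outputs."""
    merged = dict(a)
    for k, v in b.items():
        merged[k] = merged.get(k, 0.0) + v
    return merged

# Emulate 4 parallel map tasks followed by a single reduce
tally = reduce(reduce_scores, (map_photons(s, 1000) for s in range(4)))
```

Because the reduce is a commutative, associative merge, losing and re-running any map task changes nothing in the final tally, which is the fault-tolerance property the abstract reports.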
Automatic determination of primary electron beam parameters in Monte Carlo simulation
Pena, Javier; Gonzalez-Castano, Diego M.; Gomez, Faustino; Sanchez-Doblado, Francisco; Hartmann, Guenther H.
2007-03-15
In order to obtain realistic and reliable Monte Carlo simulations of medical linac photon beams, an accurate determination of the parameters that define the primary electron beam that hits the target is a fundamental step. In this work we propose a new methodology to commission photon beams in Monte Carlo simulations that ensures the reproducibility of a wide range of clinically useful fields. For this purpose, accelerated Monte Carlo simulations of 2×2, 10×10, and 20×20 cm^2 fields at SSD=100 cm are carried out for several combinations of the primary electron beam mean energy and radial FWHM. Then, by performing a simultaneous comparison with the corresponding measurements for these same fields, the best combination is selected. This methodology has been employed to determine the characteristics of the primary electron beams that best reproduce a Siemens PRIMUS and a Varian 2100 CD machine in the Monte Carlo simulations. Excellent agreements were obtained between simulations and measurements for a wide range of field sizes. Because precalculated profiles are stored in databases, the whole commissioning process can be fully automated, avoiding manual fine-tunings. These databases can also be used to characterize any accelerators of the same model from different sites.
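The automated selection step amounts to a search over a precalculated database for the (mean energy, radial FWHM) pair that simultaneously matches all measured fields. A hedged sketch, with an invented two-entry database and made-up profile values standing in for the real precalculated profiles:

```python
def commission_beam(measured, precalculated):
    """Sketch of the automated commissioning step: given measured profiles
    for several field sizes and a database of precalculated MC profiles
    indexed by (mean_energy_MeV, radial_fwhm_mm), pick the parameter pair
    minimizing the summed RMS deviation over all fields simultaneously."""
    def rms(a, b):
        return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
    return min(
        precalculated,
        key=lambda p: sum(rms(precalculated[p][f], measured[f]) for f in measured),
    )

# Hypothetical database: two candidate beams, profiles for two field sizes
db = {
    (6.0, 1.0): {"10x10": [1.0, 0.8, 0.5], "20x20": [1.0, 0.9, 0.7]},
    (6.2, 2.0): {"10x10": [1.0, 0.7, 0.4], "20x20": [1.0, 0.8, 0.6]},
}
meas = {"10x10": [1.0, 0.79, 0.51], "20x20": [1.0, 0.91, 0.69]}
best = commission_beam(meas, db)
```

Because every candidate is scored against all field sizes at once, the selected beam reproduces the full clinical range rather than being tuned to a single reference field.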
Rocket plume radiation base heating by reverse Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Everson, John; Nelson, H. F.
1993-10-01
A reverse Monte Carlo radiative transfer code is developed to predict rocket plume base heating. It is more computationally efficient than the forward Monte Carlo method, because only the radiation that strikes the receiving point is considered. The method easily handles both gas and particle emission and particle scattering. Band models are used for the molecular emission spectra, and the Henyey-Greenstein phase function is used for the scattering. Reverse Monte Carlo predictions are presented for (1) a gas-only model of the Space Shuttle main engine plume; (2) a pure-scattering plume with the radiation emitted by a hot disk at the nozzle exit; (3) a nonuniform-temperature, scattering, emitting and absorbing plume; and (4) a typical solid rocket motor plume. The reverse Monte Carlo method is shown to give good agreement with previous predictions. Typical solid rocket plume results show that (1) CO2 radiation is emitted from near the edge of the plume; (2) H2O gas and Al2O3 particles emit radiation mainly from the center of the plume; and (3) Al2O3 particles emit considerably more radiation than the gases over the 400-17,000 cm^-1 spectral interval.
Three-Dimensional Electron Microscopy Simulation with the CASINO Monte Carlo Software
Demers, Hendrix; Poirier-Demers, Nicolas; Couture, Alexandre Réal; Joly, Dany; Guilmain, Marc; de Jonge, Niels; Drouin, Dominique
2011-01-01
Monte Carlo software is widely used to understand the capabilities of electron microscopes. To study more realistic applications with complex samples, 3D Monte Carlo software is needed. In this paper, the development of the 3D version of CASINO is presented. The software features a graphical user interface, an efficient (in relation to simulation time and memory use) 3D simulation model, accurate physics models for electron microscopy applications, and it is available freely to the scientific community at this website: www.gel.usherbrooke.ca/casino/index.html. It can be used to model backscattered, secondary, and transmitted electron signals as well as absorbed energy. Software features like scan points and shot noise allow the simulation and study of realistic experimental conditions. This software has an improved energy range for scanning electron microscopy and scanning transmission electron microscopy applications. PMID:21769885
Radhakrishnan, B.; Sarma, G.; Zacharia, T.
1998-11-01
A novel simulation technique for predicting the microstructure and texture evolution during thermomechanical processing is presented. The technique involves coupling a finite element microstructural deformation model based on crystal plasticity with a Monte Carlo simulation of recovery and recrystallization. The finite element model captures the stored energy and the crystallographic orientation distributions in the deformed microstructure. The Monte Carlo simulation captures the microstructural evolution associated with recovery and recrystallization. A unique feature of the Monte Carlo simulation is that it treats recrystallization as a heterogeneous subgrain growth process, thus providing the natural link between nucleation and growth phenomena, and quantifying the role of recovery in these phenomena. Different nucleation mechanisms based on heterogeneous subgrain growth as well as strain-induced boundary migration are automatically included in the recrystallization simulation. The simulations are shown to account for the effect of the extent of prior deformation on the microstructure and kinetics of recrystallization during subsequent annealing. The simulations also capture the influence of the presence of cube orientations in the initial microstructure, and the operation of non-octahedral slip during deformation of fcc polycrystals, on the recrystallization texture.
The MCLIB library: Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.
1995-09-01
Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, first select a neutron from the source distribution, and project it through the instrument using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something, and then (if it hits the detector) tally it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
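The integrate-by-averaging procedure described in the first sentences can be written in a few lines. This generic sketch (not MCLIB, which is a Fortran instrument-design library) draws each variable uniformly within its bounds, evaluates the integrand, and averages:

```python
import random

def mc_integrate(f, bounds, n_samples=100_000, seed=1):
    """Plain Monte Carlo integration: draw each variable uniformly
    within its bounds, evaluate the integrand, and average. The volume
    factor converts the sample mean into an integral estimate."""
    rng = random.Random(seed)
    volume = 1.0
    for lo, hi in bounds:
        volume *= hi - lo
    total = 0.0
    for _ in range(n_samples):
        point = [rng.uniform(lo, hi) for lo, hi in bounds]
        total += f(point)
    return volume * total / n_samples
```

For example, `mc_integrate(lambda p: p[0] * p[1], [(0, 1), (0, 1)])` converges to 1/4 with the usual 1/sqrt(N) statistical error, independent of the number of variables.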
Quantum Monte Carlo Simulation of Tunneling Devices Using Bohm Trajectories
NASA Astrophysics Data System (ADS)
Oriols, X.; García-García, J. J.; Martín, F.; Suñé, J.; González, T.; Mateos, J.; Pardo, D.
1997-11-01
A generalization of the classical Monte Carlo (MC) device simulation technique is proposed to simultaneously deal with quantum-mechanical phase-coherence effects and scattering interactions in tunneling devices. The proposed method restricts the quantum treatment of transport to the regions of the device where the potential profile significantly changes in distances of the order of the de Broglie wavelength of the carriers (the quantum window). Bohm trajectories associated with time-dependent Gaussian wavepackets are used to simulate the electron transport in the quantum window. Outside this window, the classical ensemble simulation technique is used. Classical and quantum trajectories are smoothly matched at the boundaries of the quantum window according to a criterion of total energy conservation. A simple one-dimensional simulator for resonant tunneling diodes is presented to demonstrate the feasibility of our proposal.
Accelerating particle-in-cell simulations using multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Ricketson, Lee
2015-11-01
Particle-in-cell (PIC) simulations have been an important tool in understanding plasmas since the dawn of the digital computer. Much more recently, the multilevel Monte Carlo (MLMC) method has accelerated particle-based simulations of a variety of systems described by stochastic differential equations (SDEs), from financial portfolios to porous media flow. The fundamental idea of MLMC is to perform correlated particle simulations using a hierarchy of different time steps, and to use these correlations for variance reduction on the fine-step result. This framework is directly applicable to the Langevin formulation of Coulomb collisions, as demonstrated in previous work, but in order to apply to PIC simulations of realistic scenarios, MLMC must be generalized to incorporate self-consistent evolution of the electromagnetic fields. We present such a generalization, with rigorous results concerning its accuracy and efficiency. We present examples of the method in the collisionless, electrostatic context, and discuss applications and extensions for the future.
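The core MLMC idea, many cheap coarse simulations corrected by a small number of correlated fine/coarse pairs, can be illustrated on a single SDE (here geometric Brownian motion with assumed parameters; this is a sketch of the general method, not of the field-coupled PIC generalization the abstract describes). The coarse path reuses the fine path's Brownian increments, which is what correlates the two and shrinks the variance of the correction term.

```python
import math
import random

def euler_path(rng, n_steps, x0=1.0, mu=0.05, sigma=0.2, T=1.0, noise=None):
    """Euler-Maruyama path of dX = mu*X dt + sigma*X dW; `noise` lets a
    coarse path reuse (sums of) the fine path's increments."""
    dt = T / n_steps
    x = x0
    for k in range(n_steps):
        dw = noise[k] if noise is not None else rng.gauss(0.0, math.sqrt(dt))
        x += mu * x * dt + sigma * x * dw
    return x

def mlmc_two_level(n_coarse=2000, n_corr=200, seed=2):
    """Two-level MLMC estimate of E[X_T]: many coarse paths plus a few
    correlated fine/coarse pairs estimating the discretization
    correction. All parameters are illustrative."""
    rng = random.Random(seed)
    # Level 0: cheap coarse estimator (4 time steps)
    e0 = sum(euler_path(rng, 4) for _ in range(n_coarse)) / n_coarse
    # Level 1: correlated correction E[fine - coarse] (8 vs 4 steps)
    corr = 0.0
    for _ in range(n_corr):
        dw_fine = [rng.gauss(0.0, math.sqrt(1.0 / 8)) for _ in range(8)]
        dw_coarse = [dw_fine[2 * k] + dw_fine[2 * k + 1] for k in range(4)]
        corr += euler_path(rng, 8, noise=dw_fine) - euler_path(rng, 4, noise=dw_coarse)
    return e0 + corr / n_corr
```

Because fine and coarse paths see the same noise, their difference has a small variance, so few expensive fine samples are needed; this is the variance-reduction mechanism the abstract refers to.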
Monte Carlo Simulation of Emission Tomography and other Medical Imaging Techniques
NASA Astrophysics Data System (ADS)
Harrison, Robert L.
2010-01-01
As an introduction to Monte Carlo simulation of emission tomography, this paper reviews the history and principles of Monte Carlo simulation, then applies these principles to emission tomography using the public domain simulation package SimSET (a Simulation System for Emission Tomography) as an example. Finally, the paper discusses how the methods are modified for X-ray computed tomography and radiotherapy simulations.
Phase diagrams of scalemic mixtures: A Monte Carlo simulation study
NASA Astrophysics Data System (ADS)
Vlot, Margot J.; van Miltenburg, J. Cornelis; Oonk, Harry A. J.; van der Eerden, Jan P.
1997-12-01
In this paper, a simplified model was used to describe the interactions between the enantiomers in a scalemic mixture. Monte Carlo simulations were performed to determine several thermodynamic properties as a function of temperature and mole fraction of solid, liquid, and gas phase. Phase diagrams were constructed using a macroscopic thermodynamic program, PROPHASE. The model consists of spherical D and L molecules interacting via modified Lennard-Jones potentials (σDD = σLL, εDD = εLL, εDL = eεDD, and σDL = sσDD). The two heterochiral interaction parameters, e and s, were found to be sufficient to produce all types of phase diagrams that have been found for these systems experimentally. Conglomerates were found when the heterochiral interaction strength was smaller than the homochiral value, e < 1. A different heterochiral interaction distance, s ≠ 1, led to racemic compounds, with an ordered distribution of D and L molecules. The CsCl-structured compound was found to be stable for short DL interactions, s < 1 (e = 1), with an enantiotropic transition to a solid solution for s = 0.96. Longer heterochiral distances, s > 1, result in the formation of layered fcc compounds. The liquid regions in the phase diagram become larger for s ≠ 1, caused by a strong decrease of the melting point for both s < 1 and s > 1, in combination with only a small effect on the boiling point for s < 1, and even an increase of the boiling point for s > 1. Segregation into two different solid solutions, one with low mole fraction and the other close to x = 0.25, was obtained for these mixtures as well.
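A minimal sketch of the pair interactions as stated: homochiral (DD, LL) pairs use the reference Lennard-Jones parameters, while heterochiral (DL) pairs scale the well depth by e and the interaction distance by s. Function names are ours, not from the paper.

```python
def lj(r, sigma=1.0, eps=1.0):
    """Standard 12-6 Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def pair_energy(r, same_chirality, e=1.0, s=1.0):
    """Scalemic-model pair energy: eps_DL = e * eps_DD, sigma_DL = s * sigma_DD."""
    if same_chirality:
        return lj(r)
    return lj(r, sigma=s, eps=e)
```

With e < 1 a DL contact is energetically penalized relative to DD/LL contacts, which favors the demixed (conglomerate) solids reported above; s ≠ 1 instead changes the preferred DL spacing and hence the compound structures.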
The proton therapy nozzles at Samsung Medical Center: A Monte Carlo simulation study using TOPAS
NASA Astrophysics Data System (ADS)
Chung, Kwangzoo; Kim, Jinsung; Kim, Dae-Hyun; Ahn, Sunghwan; Han, Youngyih
2015-07-01
To expedite the commissioning process of the proton therapy system at Samsung Medical Center (SMC), we have developed a Monte Carlo simulation model of the proton therapy nozzles by using TOol for PArticle Simulation (TOPAS). At the SMC proton therapy center, we have two gantry rooms with different types of nozzles: a multi-purpose nozzle and a dedicated scanning nozzle. Each nozzle has been modeled in detail following the geometry information provided by the manufacturer, Sumitomo Heavy Industries, Ltd. For this purpose, the novel features of TOPAS, such as the time feature or the ridge filter class, have been used, and the appropriate physics models for proton nozzle simulation have been defined. Dosimetric properties, like the percent depth dose curve, spread-out Bragg peak (SOBP), and beam spot size, have been simulated and verified against measured beam data. Beyond the Monte Carlo nozzle modeling, we have developed an interface between TOPAS and the treatment planning system (TPS), RayStation. An exported radiotherapy (RT) plan from the TPS is interpreted by using the interface and is then translated into TOPAS input text. The developed Monte Carlo nozzle model can be used to estimate the non-beam performance, such as the neutron background, of the nozzles. Furthermore, the nozzle model can be used to study the mechanical optimization of the design of the nozzle.
Monte Carlo simulation of light propagation in the adult brain
NASA Astrophysics Data System (ADS)
Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter
2004-06-01
When near infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual variability of anatomy restricts the reliability of SRS. We simulated the light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D magnetic resonance imaging (MRI) exhibiting a voxel resolution of 0.8 x 0.8 x 0.8 mm3, for three different pairs of T1/T2 values each. The MRI data were used to define the material light absorption and dispersion coefficient for each voxel. The resulting spatial matrix was applied in the Monte Carlo simulation to determine the light propagation in the cerebral cortex and overlying structures. The accuracy of the Monte Carlo simulation was further increased by using a constant optical path length for the photons which was less than the median optical path length of the different materials. Based on our simulations we found a differential pathlength factor (DPF) of 6.15, which is close to the value of 5.9 found in the literature for a distance of 4.5 cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues with the probabilities of the relative blood volume within the tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex, which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extracerebral contamination are included.
Quantifying the Effect of Undersampling in Monte Carlo Simulations Using SCALE
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This study explores the effect of undersampling in Monte Carlo calculations on tally estimates and tally variance estimates for burnup credit applications. Steady-state Monte Carlo simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity and the impact of undersampling on eigenvalue and flux estimates was examined. Using an inadequate number of particle histories in each generation was found to produce an approximately 100 pcm bias in the eigenvalue estimates, and biases that exceeded 10% in fuel pin flux estimates.
Never trust straightforward intuition when choosing the number of Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Leube, Philipp; Nowak, Wolfgang; de Barros, Felipe; Rajagopal, Ram
2013-04-01
Uncertainty quantification for predicting flow and transport in heterogeneous aquifers often entails Monte Carlo simulations executed on top of random field generation. Typically, the number of Monte Carlo simulations ranges between 500 and 1000, sometimes even higher. In many cases, this choice is based on the available computational time, or on convergence analysis of the Monte Carlo simulations. The spatial resolution is most frequently fixed to experience values from the literature, independent of the number of Monte Carlo realizations. Sometimes, a compromise is found between spatial resolution, Monte Carlo resolution and available computational time. We question this practice, because it does not look at the trade-off between the individual resolutions, individual errors, total errors and computational time. Our goal is to show that what modelers really want is neither poor statistics of good physics, nor good statistics of poor physics. Instead, one should look for an overall optimum choice in both decisions. To this end, we assess an optimum for the number of Monte Carlo simulations together with the spatial resolution of computational models. Our analysis is based on the idea to jointly consider the discretization errors and computational costs of all individual model dimensions (physical space, time, parameter space). This yields a cost-to-error surface which serves to aid modelers in finding an optimal allocation of the computational resources. The optimal allocation yields the highest accuracy associated with a given prediction goal for a given computational budget. We illustrate our approach with two examples from subsurface hydrogeology. The examples are taken from wetland management and from a remediation design problem. When comparing the two different optimum allocation patterns with each other and with typical values found in the literature, we make counterintuitive observations. For example, a realistic number of Monte Carlo realizations should be
NASA Astrophysics Data System (ADS)
Laloy, Eric; Rogiers, Bart; Vrugt, Jasper A.; Mallants, Dirk; Jacques, Diederik
2013-05-01
This study reports on two strategies for accelerating posterior inference of a highly parameterized and CPU-demanding groundwater flow model. Our method builds on previous stochastic collocation approaches, e.g., Marzouk and Xiu (2009) and Marzouk and Najm (2009), and uses generalized polynomial chaos (gPC) theory and dimensionality reduction to emulate the output of a large-scale groundwater flow model. The resulting surrogate model is CPU efficient and serves to explore the posterior distribution at a much lower computational cost using two-stage MCMC simulation. The case study reported in this paper demonstrates a two to five times speed-up in sampling efficiency.
NASA Astrophysics Data System (ADS)
Gu, J.; Bednarz, B.; Caracappa, P. F.; Xu, X. G.
2009-05-01
The latest multiple-detector technologies have further increased the popularity of x-ray CT as a diagnostic imaging modality. There is a continuing need to assess the potential radiation risk associated with such rapidly evolving multi-detector CT (MDCT) modalities and scanning protocols. This need can be met by the use of CT source models that are integrated with patient computational phantoms for organ dose calculations. To this end, this work developed and validated a model of an MDCT scanner using the Monte Carlo method, and pregnant patient phantoms were integrated into the MDCT scanner model for assessment of the dose to the fetus as well as doses to the organs or tissues of the pregnant patient phantom. A Monte Carlo code, MCNPX, was used to simulate the x-ray source, including the energy spectrum, filter and scan trajectory. Detailed CT scanner components were specified using an iterative trial-and-error procedure for a GE LightSpeed CT scanner. The scanner model was validated by comparing simulated results against measured CTDI values and dose profiles reported in the literature. The source movement along the helical trajectory was simulated using pitches of 0.9375 and 1.375. The validated scanner model was then integrated with phantoms of a pregnant patient in three different gestational periods to calculate organ doses. It was found that the dose to the fetus of the 3 month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scan, respectively. For the chest scan of the 6 month patient phantom and the 9 month patient phantom, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. The paper also discusses how these fetal dose values can be used to evaluate imaging procedures and to assess risk using recommendations of the report from AAPM Task Group 36. This work demonstrates the feasibility of modeling and validating an MDCT scanner by the Monte Carlo method, as well as
Accelerating Markov chain Monte Carlo simulation through sequential updating and parallel computing
NASA Astrophysics Data System (ADS)
Ren, Ruichao
Monte Carlo simulation is a statistical sampling method used in studies of physical systems with properties that cannot be easily obtained analytically. The phase behavior of the Restricted Primitive Model of electrolyte solutions on the simple cubic lattice is studied using grand canonical Monte Carlo simulations and finite-size scaling techniques. The transition between disordered and ordered, NaCl-like structures is continuous and second-order at high temperatures and discontinuous, first-order at low temperatures. The line of continuous transitions meets the line of first-order transitions at a tricritical point. A new algorithm---Random Skipping Sequential (RSS) Monte Carlo---is proposed, justified and shown analytically to have better mobility over the phase space than the conventional Metropolis algorithm satisfying strict detailed balance. The new algorithm employs sequential updating, and yields greatly enhanced sampling statistics compared with the Metropolis algorithm with random updating. A parallel version of Markov chain theory is introduced and applied to accelerate Monte Carlo simulation via cluster computing. It is shown that sequential updating is the key to reducing the inter-processor communication or synchronization which slows down parallel simulation with an increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time by the new method for systems of large and moderate sizes.
Monte Carlo Simulation of H⁻ Ion Transport
Diomede, P.; Longo, S.; Capitelli, M.
2009-03-12
In this work we study in detail the kinetics of H⁻ ion swarms in velocity space: this provides a useful contrast to the usual literature in the field, where device features in configuration space are often included in detail but kinetic distributions are only marginally considered. To this aim a Monte Carlo model is applied, which includes several collision processes of H⁻ ions with neutral particles as well as Coulomb collisions with positive ions. We characterize the full velocity distribution, i.e. including its anisotropy, for different values of E/N, the atomic fraction and the H⁺ mole fraction, which makes our results of interest for both source modeling and beam formation. A simple analytical theory for highly dissociated hydrogen is formulated and checked by Monte Carlo calculations.
Bieda, Bogusław
2013-01-01
The paper is concerned with the application and benefits of MC simulation proposed for estimating the life of a modern municipal solid waste (MSW) landfill. The software Crystal Ball® (CB), a simulation program that helps analyze the uncertainties associated with Microsoft® Excel models by MC simulation, was proposed to calculate the transit time of contaminants in porous media. The transport of contaminants in soil is represented by the one-dimensional (1D) form of the advection-dispersion equation (ADE). The computer program CONTRANS, written in the MATLAB language, is the foundation for simulating and estimating the thickness of the landfill compacted clay liner. In order to simplify the task of determining the uncertainty of parameters by MC simulation, the parameters corresponding to the expression Z2 taken from this program were used for the study. The tested parameters are: hydraulic gradient (HG), hydraulic conductivity (HC), porosity (POROS), liner thickness (TH) and diffusion coefficient (EDC). The principal output report provided by CB and presented in the study consists of the frequency chart, percentiles summary and statistics summary. Additional CB options provide a sensitivity analysis with tornado diagrams. The data that were used include available published figures as well as data concerning the Mittal Steel Poland (MSP) S.A. in Kraków, Poland. This paper discusses the results and shows that the presented approach is applicable for any MSW landfill compacted clay liner thickness design. PMID:23194922
Lanczos and Recursion Techniques for Multiscale Kinetic Monte Carlo Simulations
Rudd, R E; Mason, D R; Sutton, A P
2006-03-13
We review an approach to the simulation of the class of microstructural and morphological evolution problems involving both relatively short-ranged chemical and interfacial interactions and long-ranged elastic interactions. The calculation of the anharmonic elastic energy is facilitated with Lanczos recursion. The elastic energy changes affect the rate of vacancy hopping, and hence the rate of microstructural evolution due to vacancy-mediated diffusion. The elastically informed hopping rates are used to construct the event catalog for the kinetic Monte Carlo simulation. The simulation is accelerated using a second-order residence time algorithm. The effect of elasticity on the microstructural development has been assessed. This article is related to a talk given in honor of David Pettifor at the DGP60 Workshop in Oxford.
Monte Carlo Simulation Tool Installation and Operation Guide
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.
Monte Carlo simulation of correction factors for IAEA TLD holders.
Hultqvist, Martha; Fernández-Varea, José M; Izewska, Joanna
2010-03-21
The IAEA standard thermoluminescent dosimeter (TLD) holder has been developed for the IAEA/WHO TLD postal dose program for audits of high-energy photon beams, and it is also employed by the ESTRO-QUALity assurance network (EQUAL) and several national TLD audit networks. Factors correcting for the influence of the holder on the TL signal under reference conditions have been calculated in the present work from Monte Carlo simulations with the PENELOPE code for (60)Co gamma-rays and 4, 6, 10, 15, 18 and 25 MV photon beams. The simulation results are around 0.2% smaller than measured factors reported in the literature, but well within the combined standard uncertainties. The present study supports the use of the experimentally obtained holder correction factors in the determination of the absorbed dose to water from the TL readings; the factors calculated by means of Monte Carlo simulations may be adopted for the cases where there are no measured data. PMID:20197601
Off-Lattice Monte Carlo Simulation of Supramolecular Polymer Architectures
NASA Astrophysics Data System (ADS)
Amuasi, H. E.; Storm, C.
2010-12-01
We introduce an efficient, scalable Monte Carlo algorithm to simulate cross-linked architectures of freely jointed and discrete wormlike chains. Bond movement is based on the discrete tractrix construction, which effects conformational changes that exactly preserve fixed-length constraints of all bonds. The algorithm reproduces known end-to-end distance distributions for simple, analytically tractable systems of cross-linked stiff and freely jointed polymers flawlessly, and is used to determine the effective persistence length of short bundles of semiflexible wormlike chains, cross-linked to each other. It reveals a possible regulatory mechanism in bundled networks: the effective persistence of bundles is controlled by the linker density.
Monte Carlo simulation of retinal light absorption by infants.
Guo, Ya; Tan, Jinglu
2015-02-01
Retinal damage can occur in normal ambient lighting conditions. Infants are particularly vulnerable to retinal damage, and thousands of preterm infants sustain vision damage each year. The size of the ocular fundus affects retinal light absorption, but there is a lack of understanding of this effect for infants. In this work, retinal light absorption is simulated for different ocular fundus sizes, wavelengths, and pigment concentrations by using the Monte Carlo method. The results indicate that the neural retina light absorption per volume for infants can be two or more times that for adults. PMID:26366599
Monte Carlo simulation of vibrational relaxation in nitrogen
NASA Technical Reports Server (NTRS)
Olynick, David P.; Hassan, H. A.; Moss, James N.
1990-01-01
Monte Carlo simulation of nonequilibrium vibrational relaxation of (rotationless) N2 using transition probabilities from an extended SSH theory is presented. For the range of temperatures considered, 4000-8000 K, the vibrational levels were found to be reasonably close to an equilibrium distribution at an average vibrational temperature based on the vibrational energy of the gas. As a result, they do not show any statistically significant evidence of the bottleneck observed in earlier studies of N2. Based on this finding, it appears that, for the temperature range considered, dissociation commences after all vibrational levels equilibrate at the translational temperature.
Monte Carlo simulation experiments on box-type radon dosimeter
NASA Astrophysics Data System (ADS)
Jamil, Khalid; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid
2014-11-01
Epidemiological studies show that inhalation of radon gas (222Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserving environments and underground dwellers. It is, therefore, of paramount importance to measure the 222Rn concentrations (Bq/m3) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector such as CR-39 are widely used. The fraction of the radon alphas emitted in the volume of the box-type dosimeter that results in latent track formation on the CR-39 is the latent track registration efficiency. The latent track registration efficiency is ultimately required to evaluate the radon concentration, which in turn determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for a box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The Monte Carlo simulation experiments revealed that there are two types of efficiencies, i.e. the intrinsic efficiency (ηint) and the alpha hit efficiency (ηhit). The ηint depends only on the dimensions of the dosimeter, whereas ηhit depends both on the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of the intrinsic and hit efficiencies. It was concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particles, then a hit efficiency of 100% is achieved. Nevertheless, the intrinsic efficiency still plays its role. The Monte Carlo simulation results were found helpful for understanding the intricate track registration mechanisms in the box-type dosimeter. This paper explains how the radon concentration from the
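The ray-hitting idea can be sketched as follows (our own minimal reconstruction, not the authors' RAHI code): sample a decay point uniformly in the box, draw an isotropic direction, and count the track as a hit when it reaches the detector face within the alpha range. With the detector on the z = 0 face of the box, the geometry check reduces to a single ray-plane intersection.

```python
import math
import random

def hit_efficiency(a, b, c, alpha_range, n=200_000, seed=4):
    """Monte Carlo estimate of the fraction of alphas, emitted uniformly
    in an a*b*c box, whose track reaches the detector on the z = 0 face
    within the alpha range (illustrative RAHI-style sketch)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(0, a); y = rng.uniform(0, b); z = rng.uniform(0, c)
        cos_t = rng.uniform(-1.0, 1.0)           # isotropic direction
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        dx, dy, dz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
        if dz >= 0.0:
            continue                             # moving away from the detector
        t = -z / dz                              # distance to the z = 0 plane
        if t > alpha_range:
            continue                             # alpha stops before reaching it
        xi, yi = x + t * dx, y + t * dy
        if 0.0 <= xi <= a and 0.0 <= yi <= b:    # landing point on the face
            hits += 1
    return hits / n
```

Once alpha_range exceeds the box diagonal, the range test never rejects a track, so the estimate becomes purely geometric; this mirrors the paper's observation that ηhit reaches 100% while the intrinsic, dimension-dependent efficiency remains.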
Direct Simulation Monte Carlo: Recent Advances and Applications
NASA Astrophysics Data System (ADS)
Oran, E. S.; Oh, C. K.; Cybyk, B. Z.
The principles of and procedures for implementing direct simulation Monte Carlo (DSMC) are described. Guidelines to inherent and external errors common in DSMC applications are provided. Three applications of DSMC to transitional and nonequilibrium flows are considered: rarefied atmospheric flows, growth of thin films, and microsystems. Selected new, potentially important advances in DSMC capabilities are described: Lagrangian DSMC, optimization on parallel computers, and hybrid algorithms for computations in mixed flow regimes. Finally, the limitations of current computer technology for using DSMC to compute low-speed, high-Knudsen-number flows are outlined as future challenges.
NASA Astrophysics Data System (ADS)
Dragovitsch, Peter; Linn, Stephan L.; Burbank, Mimi
1994-01-01
The Table of Contents for the book is as follows: * Preface * Heavy Fragment Production for Hadronic Cascade Codes * Monte Carlo Simulations of Space Radiation Environments * Merging Parton Showers with Higher Order QCD Monte Carlos * An Order-αs Two-Photon Background Study for the Intermediate Mass Higgs Boson * GEANT Simulation of Hall C Detector at CEBAF * Monte Carlo Simulations in Radioecology: Chernobyl Experience * UNIMOD2: Monte Carlo Code for Simulation of High Energy Physics Experiments; Some Special Features * Geometrical Efficiency Analysis for the Gamma-Neutron and Gamma-Proton Reactions * GISMO: An Object-Oriented Approach to Particle Transport and Detector Modeling * Role of MPP Granularity in Optimizing Monte Carlo Programming * Status and Future Trends of the GEANT System * The Binary Sectioning Geometry for Monte Carlo Detector Simulation * A Combined HETC-FLUKA Intranuclear Cascade Event Generator * The HARP Nucleon Polarimeter * Simulation and Data Analysis Software for CLAS * TRAP -- An Optical Ray Tracing Program * Solutions of Inverse and Optimization Problems in High Energy and Nuclear Physics Using Inverse Monte Carlo * FLUKA: Hadronic Benchmarks and Applications * Electron-Photon Transport: Always so Good as We Think? Experience with FLUKA * Simulation of Nuclear Effects in High Energy Hadron-Nucleus Collisions * Monte Carlo Simulations of Medium Energy Detectors at COSY Jülich * Complex-Valued Monte Carlo Method and Path Integrals in the Quantum Theory of Localization in Disordered Systems of Scatterers * Radiation Levels at the SSCL Experimental Halls as Obtained Using the CLOR89 Code System * Overview of Matrix Element Methods in Event Generation * Fast Electromagnetic Showers * GEANT Simulation of the RMC Detector at TRIUMF and Neutrino Beams for KAON * Event Display for the CLAS Detector * Monte Carlo Simulation of High Energy Electrons in Toroidal Geometry * GEANT 3.14 vs. EGS4: A Comparison Using the DØ Uranium/Liquid Argon
Exploring fluctuations and phase equilibria in fluid mixtures via Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Denton, Alan R.; Schmidt, Michael P.
2013-03-01
Monte Carlo simulation provides a powerful tool for understanding and exploring thermodynamic phase equilibria in many-particle interacting systems. Among the most physically intuitive simulation methods is Gibbs ensemble Monte Carlo (GEMC), which allows direct computation of phase coexistence curves of model fluids by assigning each phase to its own simulation cell. When one or both of the phases can be modelled virtually via an analytic free energy function (Mehta and Kofke 1993 Mol. Phys. 79 39), the GEMC method takes on new pedagogical significance as an efficient means of analysing fluctuations and illuminating the statistical foundation of phase behaviour in finite systems. Here we extend this virtual GEMC method to binary fluid mixtures and demonstrate its implementation and instructional value with two applications: (1) a lattice model of simple mixtures and polymer blends and (2) a free-volume model of a complex mixture of colloids and polymers. We present algorithms for performing Monte Carlo trial moves in the virtual Gibbs ensemble, validate the method by computing fluid demixing phase diagrams, and analyse the dependence of fluctuations on system size. Our open-source simulation programs, coded in the platform-independent Java language, are suitable for use in classroom, tutorial, or computational laboratory settings.
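The particle-transfer move at the heart of GEMC can be illustrated with a deliberately minimal sketch. The ideal-gas acceptance rule below is a standard textbook reduction (no energy change, fixed volumes) and is not taken from the authors' Java programs:

```python
import random

def gemc_ideal_gas(n_total=200, v1=1.0, v2=1.0, n_moves=200000, seed=1):
    """Gibbs-ensemble particle-transfer moves between two boxes of an
    ideal gas.  With no interactions the Metropolis acceptance reduces
    to min(1, N_src * V_dst / ((N_dst + 1) * V_src)); at equilibrium
    the particles partition in proportion to the box volumes."""
    rng = random.Random(seed)
    n1 = n_total // 2
    samples = []
    for step in range(n_moves):
        if rng.random() < 0.5:
            if n1 > 0:  # attempt transfer box 1 -> box 2
                acc = n1 * v2 / ((n_total - n1 + 1) * v1)
                if rng.random() < min(1.0, acc):
                    n1 -= 1
        else:
            if n1 < n_total:  # attempt transfer box 2 -> box 1
                acc = (n_total - n1) * v1 / ((n1 + 1) * v2)
                if rng.random() < min(1.0, acc):
                    n1 += 1
        if step >= n_moves // 2:  # sample after equilibration
            samples.append(n1)
    return sum(samples) / len(samples)
```

With equal volumes the boxes share the particles evenly; doubling V1 shifts the mean occupancy of box 1 to N·V1/(V1+V2). The recorded samples expose exactly the kind of finite-size fluctuations the article analyses.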
Lattice Monte Carlo simulation of Galilei variant anomalous diffusion
Guo, Gang; Bittig, Arne; Uhrmacher, Adelinde
2015-05-01
The observation of an increasing number of anomalous diffusion phenomena motivates studies that reveal the actual mechanisms underlying such stochastic processes. When analytical solutions are difficult to obtain or the trajectories of individual particles must be tracked, lattice Monte Carlo (LMC) simulation has been shown to be particularly useful. To develop such an LMC simulation algorithm for Galilei variant anomalous diffusion, we derive explicit solutions for the conditional and unconditional first passage time (FPT) distributions with double absorbing barriers. Based on the theory of random walks on lattices and the FPT distributions, we propose an LMC simulation algorithm and prove that it reproduces both the mean and the mean square displacement exactly in the long-time limit. However, the error introduced in the second moment of the displacement diverges according to a power law as the simulation time progresses. We give an explicit criterion for choosing a lattice step small enough to keep the error within a specified tolerance. We further validate the LMC simulation algorithm and confirm the theoretical error analysis through numerical simulations, which agree very well with our theoretical predictions.
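In the ordinary-diffusion limit, the first-passage-time machinery the authors build on can be checked against a classical exact result (a hedged sketch, not the authors' algorithm): a symmetric unit-step lattice walk between absorbing barriers at 0 and L started at x0 has mean FPT x0(L − x0) steps.

```python
import random

def mean_fpt(L=10, x0=5, walkers=20000, seed=2):
    """Mean first-passage time of a symmetric unit-step lattice random
    walk started at x0 with absorbing barriers at 0 and L.  The exact
    expectation is x0 * (L - x0) steps."""
    rng = random.Random(seed)
    total = 0
    for _ in range(walkers):
        x, t = x0, 0
        while 0 < x < L:
            x += 1 if rng.random() < 0.5 else -1
            t += 1
        total += t
    return total / walkers
```

For L = 10 and x0 = 5 the simulated mean converges to the exact value of 25 steps as the number of walkers grows.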
Oscar, Thomas P
2009-10-01
A general regression neural network (GRNN) and Monte Carlo simulation model for predicting survival and growth of Salmonella on raw chicken skin as a function of serotype (Typhimurium, Kentucky, and Hadar), temperature (5 to 50 degrees C), and time (0 to 8 h) was developed. Poultry isolates of Salmonella with natural resistance to antibiotics were used to investigate and model survival and growth from a low initial dose (<1 log) on raw chicken skin. Computer spreadsheet and spreadsheet add-in programs were used to develop and simulate a GRNN model. Model performance was evaluated by determining the percentage of residuals in an acceptable prediction zone from -1 log (fail-safe) to 0.5 log (fail-dangerous). The GRNN model had an acceptable prediction rate of 92% for dependent data (n = 464) and 89% for independent data (n = 116), which exceeded the performance criterion for model validation of 70% acceptable predictions. Relative contributions of independent variables were 16.8% for serotype, 48.3% for temperature, and 34.9% for time. Differences among serotypes were observed, with Kentucky exhibiting less growth than Typhimurium and Hadar, which had similar growth levels. Temperature abuse scenarios were simulated to demonstrate how the model can be integrated with risk assessment, and the most common output distribution obtained was Pearson5. This study demonstrated that it is important to include serotype as an independent variable in predictive models for Salmonella. Had a cocktail of serotypes Typhimurium, Kentucky, and Hadar been used for model development, the GRNN model would have provided overly fail-safe predictions of Salmonella growth on raw chicken skin contaminated with serotype Kentucky. Thus, by developing the GRNN model with individual strains and then modeling growth as a function of serotype prevalence, more accurate predictions were obtained. PMID:19833030
A Fast Monte Carlo Simulation for the International Linear Collider Detector
Furse, D.; /Georgia Tech
2005-12-15
The following paper details the motivation for, implementation of, and performance of a Java-based fast Monte Carlo simulation for a detector designed to be used in the International Linear Collider. This simulation, presently included in the SLAC ILC group's org.lcsim package, reads in standard model or SUSY events in STDHEP file format, stochastically simulates the blurring of physics measurements caused by intrinsic detector error, and writes out an LCIO format file containing a set of final particles statistically similar to those that would have been found by a full Monte Carlo simulation. In addition to the reconstructed particles themselves, descriptions of the calorimeter hit clusters and tracks that these particles would have produced are also included in the LCIO output. These output files can then be put through various analysis codes to characterize the effectiveness of a hypothetical detector at extracting relevant physical information about an event. Such a tool is extremely useful in preliminary detector research and development, because full simulations are extremely cumbersome and taxing on processor resources; by sacrificing attention to detail that is in many cases inappropriate, a fast, efficient Monte Carlo gains valuable time and can facilitate, or even make possible, detector physics studies that would be very impractical with the full simulation.
Estimation of beryllium ground state energy by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Kabir, K. M. Ariful; Halder, Amal
2015-05-01
Quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schrödinger equation for atoms, molecules, solids, and a variety of model systems. Using the variational Monte Carlo method, we have calculated the ground-state energy of the beryllium atom. Our calculation is based on a modified four-parameter trial wave function, which leads to a good result compared with the few-parameter trial wave functions presented before. Based on random numbers, we can generate a large sample of electron locations to estimate the ground-state energy of beryllium. Our calculation gives a good estimate of the ground-state energy of the beryllium atom compared with the corresponding exact data.
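The variational Monte Carlo procedure is easiest to see on a one-electron example. The sketch below treats the hydrogen atom with the trial wave function ψ = e^(−αr), a standard illustration with an exactly known answer, not the four-parameter beryllium calculation of the paper:

```python
import math, random

def vmc_hydrogen(alpha, n_samples=40000, step=0.6, seed=7):
    """Variational MC estimate of <E> for the trial wave function
    psi = exp(-alpha * r) (hydrogen atom, atomic units).  The local
    energy is E_L(r) = -alpha**2 / 2 + (alpha - 1) / r; Metropolis
    sampling of |psi|^2 = exp(-2 alpha r) averages it."""
    rng = random.Random(seed)
    x, y, z, r = 1.0, 0.0, 0.0, 1.0
    e_sum = 0.0
    n_burn = 2000  # discarded equilibration steps
    for i in range(n_samples + n_burn):
        xn = x + step * (rng.random() - 0.5)
        yn = y + step * (rng.random() - 0.5)
        zn = z + step * (rng.random() - 0.5)
        rn = math.sqrt(xn * xn + yn * yn + zn * zn)
        # acceptance ratio |psi_new / psi_old|^2 = exp(-2 alpha (rn - r))
        if rng.random() < math.exp(-2.0 * alpha * (rn - r)):
            x, y, z, r = xn, yn, zn, rn
        if i >= n_burn:
            e_sum += -0.5 * alpha ** 2 + (alpha - 1.0) / r
    return e_sum / n_samples
```

At α = 1 the local energy is constant at −0.5 hartree, the exact ground-state energy, so the estimator has zero variance; for α ≠ 1 the sampled mean rises above −0.5, which is the variational principle at work.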
Multiple ``time step'' Monte Carlo simulations: Application to charged systems with Ewald summation
NASA Astrophysics Data System (ADS)
Bernacki, Katarzyna; Hetényi, Balázs; Berne, B. J.
2004-07-01
Recently, we have proposed an efficient scheme for Monte Carlo simulations, the multiple "time step" Monte Carlo (MTS-MC) [J. Chem. Phys. 117, 8203 (2002)], based on the separation of the potential interactions into two additive parts. In this paper, the structural and thermodynamic properties of the simple point charge water model combined with the Ewald sum are compared for the MTS-MC real-/reciprocal-space split of the Ewald summation and the common Metropolis Monte Carlo method. We report a number of observables as a function of CPU time calculated using MC and MTS-MC. The correlation functions indicate that speedups on the order of 4.5-7.5 can be obtained for systems of 108-500 waters with a splitting parameter of n=10.
Monte Carlo simulations of tungsten redeposition at the divertor target
NASA Astrophysics Data System (ADS)
Chankin, A. V.; Coster, D. P.; Dux, R.
2014-02-01
Recent modeling of controlled edge-localized modes (ELMs) in ITER with tungsten (W) divertor target plates by the SOLPS code package predicted high electron temperatures (>100 eV) and densities (>1 × 10^21 m^-3) at the outer target. Under certain scenarios W sputtered during ELMs can penetrate into the core in quantities large enough to cause deterioration of the discharge performance, as was shown by coupled SOLPS5.0/STRAHL/ASTRA runs. The net sputtering yield, however, was expected to be dramatically reduced by the ‘prompt redeposition’ during the first Larmor gyration of W^1+ (Fussman et al 1995 Proc. 15th Int. Conf. on Plasma Physics and Controlled Nuclear Fusion Research (Vienna: IAEA) vol 2, p 143). Under high n_e/T_e conditions at the target during ITER ELMs, prompt redeposition would reduce W sputtering by a factor p^-2 ~ 10^4 (with p ≡ τ_ion ω_gyro ~ 0.01). However, this relation does not include the effects of multiple ionizations of sputtered W atoms and the electric field in the magnetic pre-sheath (MPS, or ‘Chodura sheath’) and Debye sheath (DS). Monte Carlo simulations of W redeposition with the inclusion of these effects are described in the paper. It is shown that for p ≪ 1, the inclusion of multiple W ionizations and the electric field in the MPS and DS changes the physics of W redeposition from geometrical effects of circular gyro-orbits hitting the target surface, to mainly energy considerations; the key effect is the electric potential barrier for ions trying to escape into the main plasma. The overwhelming majority of ions are drawn back to the target by a strong attracting electric field. It is also shown that the possibility of a W self-sputtering avalanche by ions circulating in the MPS can be ruled out due to the smallness of the sputtered W neutral energies, which means that they do not penetrate very far into the MPS before ionizing; thus the W ions do not gain a large kinetic energy as they are accelerated back to the surface by the
NASA Astrophysics Data System (ADS)
Kurinsky, Noah; Sajina, Anna
2014-06-01
We present a novel simulation and fitting program which employs MCMC to constrain the spectral energy distribution makeup and luminosity function evolution required to produce a given multi-wavelength survey. This tool employs a multidimensional color-color diagnostic to determine goodness of fit, and simulates observational sources of error such as flux limits and instrumental noise. Our goals in designing this tool were (a) to use it to study infrared surveys and test SED template models, and (b) to create it in such a way as to make it usable in any electromagnetic regime for any class of sources to which any luminosity functional form can be prescribed. I will discuss our specific use of the program to characterize a survey from the Herschel SPIRE HerMES catalog, including implications for our luminosity function and SED models. I will also briefly discuss the ways we envision using it for simulation and application to other surveys, and I will demonstrate the degree to which its reusability can serve to enrich a wide range of analyses.
Efficient Monte Carlo simulations in kilovoltage x-ray beams
NASA Astrophysics Data System (ADS)
Mainegra-Hing, Ernesto
Kilovoltage x-ray systems are modeled with BEAMnrc using directional bremsstrahlung splitting, which is five to six orders of magnitude more efficient than a simulation without splitting and 60 times more efficient than uniform bremsstrahlung splitting. Optimum splitting numbers are between 2 and 3 orders of magnitude larger than for megavoltage beams. A self-consistent approach for the calculation of free-air chamber correction factors with the EGSnrc Monte Carlo system is introduced. In addition to the traditional factors employed to correct for attenuation (Aatt), photon scatter (Ascat), and electron energy loss (Aeloss), correction factors for aperture leakage (Aap) and backscatter (Ab) are defined. Excellent agreement is obtained between calculated and measured Ascat and Aeloss values. Computed Aatt values for medium-energy and mammography beams reproduce the measurements well. For low-energy lightly-filtered beams, Aatt values show significant differences with experiment. Scaling the tungsten L-shell EII cross-sections by a factor of 2 eliminates these differences. The inconsistency of the evacuated-tube technique for measuring Aatt is negligible for medium-energy and mammography beams, and 0.2% for low-energy lightly-filtered beams. The aperture correction Aap becomes significant in the medium-energy range with increasing energy. The newly introduced backscatter correction Ab becomes as high as 0.4% in the low-energy range. In the medium-energy range, calculations reproduce experimental half-value layer values to better than 2.3%. For mammography beams, differences of 0.5% and 2.5% with experiment are obtained with and without a scaling of the tungsten L-shell EII cross-sections, respectively. For low-energy lightly-filtered beams, a scaling factor of 2.1 gives the best agreement (˜3%) with experiment, worsening significantly to 8% for a scaling factor of 1.8, which gives the best match for Aatt. The fast algorithm for calculating the scatter
Monte Carlo simulation of the strength of unidirectional composites
Fukuda, H.; Yasuada, J.
1994-12-31
This paper deals with a Monte Carlo simulation of the strength of unidirectional composites. Up to the present, most simulations have been carried out on two-dimensional sheet composites, whereas the fibers in actual composites are dispersed in three dimensions. In the present paper, a brief summary of the authors' experiments on micromechanics is first described. Weibull parameters of Torayca T300 carbon monofilaments of 25 mm length were m = 5.1 and α = 3.17 GPa, and the ineffective length was 0.30 mm. The tensile strength of a carbon/epoxy unidirectional thin rod of 200 mm length was 2.88 GPa. The second subject of this paper is to estimate the strength of unidirectional composites from the experimental data on monofilament strength distribution and ineffective length. Hexagonal-array microcomposites of 7, 19, 37, 61, and 91 fibers with sample lengths of 1.5 (number of links = 5), 3.0, 6.0, and 15.0 mm are simulated. A kind of local load sharing is assumed in the simulation. The tensile strength obtained in the simulation was 2.97 GPa, which is fairly close to the experimental value of 2.88 GPa. Thus, the present simulation predicts the experimental data quite well.
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis
NASA Technical Reports Server (NTRS)
Hanson, J. M.; Beard, B. B.
2010-01-01
This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
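A common zero-failure sizing rule illustrates the "how many runs are necessary" question. This is the textbook binomial bound, offered as an illustration rather than the TP's exact derivation: if all n runs succeed, the failure probability is below p_max at confidence C provided (1 − p_max)^n ≤ 1 − C.

```python
import math

def runs_required(p_max, confidence):
    """Smallest n such that zero failures in n independent Monte Carlo
    runs demonstrates failure probability < p_max at the given
    confidence level, from (1 - p_max)**n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_max))
```

For example, verifying a 99%-success requirement at 95% confidence takes 299 clean runs, close to the familiar 3/p_max rule of thumb.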
Biscay, F; Ghoufi, A; Lachet, V; Malfreyt, P
2009-08-01
We report the calculation of the surface tension of cycloalkanes and aromatics by direct two-phase MC simulations using an anisotropic united atom (AUA) model. In the case of aromatics, the polar version of the AUA-4 (AUA 9-sites) model is used. A comparison with the nonpolar models is carried out on the surface tension of benzene. The surface tension is calculated by different routes: the mechanical route, using the Irving and Kirkwood (IK) and Kirkwood-Buff (KB) expressions, and the thermodynamic route, using the test-area (TA) method. The different operational expressions of these definitions are presented along with those of their long-range corrections. The AUA potential reproduces very well the dependence of the surface tension on temperature for cyclopentane, cyclohexane, benzene, and toluene. PMID:19606323
Monte Carlo field-theoretic simulations of a homopolymer blend
NASA Astrophysics Data System (ADS)
Spencer, Russell; Matsen, Mark
Fluctuation corrections to the macrophase segregation transition (MST) in a symmetric homopolymer blend are examined using Monte Carlo field-theoretic simulations (MC-FTS). This technique involves treating interactions between unlike monomers using standard Monte Carlo techniques, while enforcing incompressibility as is done in mean-field theory. When using MC-FTS, we need to account for a UV divergence. This is done by renormalizing the Flory-Huggins interaction parameter to incorporate the divergent part of the Hamiltonian. We compare different ways of calculating this effective interaction parameter. Near the MST, the length scale of compositional fluctuations becomes large; however, the high computational requirements of MC-FTS restrict us to small system sizes. We account for these finite size effects using the method of Binder cumulants, allowing us to locate the MST with high precision. We examine fluctuation corrections to the mean field MST, χN = 2, as they vary with the invariant degree of polymerization, N̄ = ρ²a⁶N. These results are compared with particle-based simulations as well as analytical calculations using the renormalized one loop theory. This research was funded by the Center for Sustainable Polymers.
Understanding Quantum Tunneling through Quantum Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Boixo, Sergio; Isakov, Sergei; Mazzola, Guglielmo; Smelyanskiy, Vadim; Jiang, Zhang; Neven, Hartmut; Troyer, Matthias
The tunneling between the two ground states of an Ising ferromagnet is a typical example of many-body tunneling processes between two local minima, as they occur during quantum annealing. Performing quantum Monte Carlo (QMC) simulations we find that the QMC tunneling rate displays the same scaling (in the exponent) with system size as the rate of incoherent tunneling. The scaling in both cases is O(Δ²), where Δ is the tunneling splitting. An important consequence is that QMC simulations can be used to predict the performance of a quantum annealer for tunneling through a barrier. Furthermore, by using open instead of periodic boundary conditions in imaginary time, equivalent to a projector QMC algorithm, we obtain a quadratic speedup for QMC, and achieve linear scaling in Δ. We provide a physical understanding of these results and their range of applicability based on an instanton picture.
Noise-Parameter Uncertainties: A Monte Carlo Simulation
Randa, J.
2002-01-01
This paper reports the formulation and results of a Monte Carlo study of uncertainties in noise-parameter measurements. The simulator permits the computation of the dependence of the uncertainty in the noise parameters on uncertainties in the underlying quantities. Results are obtained for the effect due to uncertainties in the reflection coefficients of the input terminations, the noise temperature of the hot noise source, connector variability, the ambient temperature, and the measurement of the output noise. Representative results are presented for both uncorrelated and correlated uncertainties in the underlying quantities. The simulation program is also used to evaluate two possible enhancements of noise-parameter measurements: the use of a cold noise source as one of the input terminations and the inclusion of a measurement of the “reverse configuration,” in which the noise from the amplifier input is measured directly.
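The core of such a simulator, propagating uncertainties in the underlying quantities through to a derived result by repeated sampling, can be sketched generically. This is an illustration of the Monte Carlo propagation idea, not the paper's noise-parameter code:

```python
import math, random

def mc_uncertainty(f, means, sigmas, n=100000, seed=3):
    """Propagate independent Gaussian input uncertainties through a
    measurement equation f by Monte Carlo sampling; returns the sample
    mean and standard deviation of the output."""
    rng = random.Random(seed)
    ys = []
    for _ in range(n):
        xs = [rng.gauss(m, s) for m, s in zip(means, sigmas)]
        ys.append(f(*xs))
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return mean, math.sqrt(var)
```

For y = a·b with a = 1.000(10) and b = 2.000(20), the sampled standard deviation reproduces the linear-propagation value √((b·σa)² + (a·σb)²) ≈ 0.028, and the same machinery handles correlated or strongly nonlinear cases where the linear formula breaks down.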
Monte Carlo simulations of kagome lattices with magnetic dipolar interactions
NASA Astrophysics Data System (ADS)
Plumer, Martin; Holden, Mark; Way, Andrew; Saika-Voivod, Ivan; Southern, Byron
Monte Carlo simulations of classical spins on the two-dimensional kagome lattice with only dipolar interactions are presented. In addition to revealing the sixfold-degenerate ground state, the nature of the finite-temperature phase transition to long-range magnetic order is discussed. Low-temperature states consisting of mixtures of degenerate ground-state configurations separated by domain walls can be explained as a result of competing exchange-like and shape-anisotropy-like terms in the dipolar coupling. Fluctuations between pairs of degenerate spin configurations are found to persist well into the ordered state as the temperature is lowered until locking in to a low-energy state. Results suggest that the system undergoes a continuous phase transition at T ≈ 0.43 in agreement with previous MC simulations but the nature of the ordering process differs. Preliminary results which extend this analysis to the 3D fcc ABC-stacked kagome systems will be presented.
Monte Carlo simulation of a new gamma ray telescope
Simone, J.; Oneill, T.
1985-02-01
A new Monte Carlo code has been written to simulate the response of the new University of California double scatter gamma ray telescope. This package of modular software routines, written in VAX FORTRAN 77 simulates the detection of 0.1 to 35 MeV gamma rays. The new telescope is flown from high altitude balloons to measure medium energy gamma radiation from astronomical sources. This paper presents (1) the basic physics methods in the code, and (2) the predicted response functions of the telescope. Gamma ray processes include Compton scattering, pair production and photoelectric absorption in plastic scintillator, NaI(Tl) and aluminum. Electron transport processes include ionization energy loss, multiple scattering, production of bremsstrahlung photons and positron annihilation.
Direct Simulation Monte Carlo (DSMC) on the Connection Machine
Wong, B.C.; Long, L.N.
1992-01-01
The massively parallel Connection Machine is utilized to map an improved version of the direct simulation Monte Carlo (DSMC) method for solving flows with the Boltzmann equation. Kinetic theory is required for analyzing hypersonic aerospace applications, and the features and capabilities of the DSMC particle-simulation technique are discussed. The DSMC method is shown to be inherently massively parallel and data parallel; the algorithm is based on moving molecules, cross-referencing their locations, locating collisions within cells, and sampling macroscopic quantities in each cell. The serial DSMC code is compared to the present parallel DSMC code, and timing results show that the speedup of the parallel version is approximately linear. The correct physics can be resolved from the results of the complete DSMC method implemented on the Connection Machine using the data-parallel approach. 41 refs.
Direct simulation Monte Carlo method with a focal mechanism algorithm
NASA Astrophysics Data System (ADS)
Rachman, Asep Nur; Chung, Tae Woong; Yoshimoto, Kazuo; Yun, Sukyoung
2015-01-01
To simulate the observed radiation pattern of an earthquake, the direct simulation Monte Carlo (DSMC) method is modified by implanting a focal mechanism algorithm. We compare the results of the modified DSMC method (DSMC-2) with those of the original DSMC method (DSMC-1). DSMC-2 gives results as reliable as, or more reliable than, those of DSMC-1 for events with 12 or more recording stations, by weighting twice for hypocentral distances of less than 80 km. Not only the number of stations but also other factors, such as rough topography, event magnitude, and the analysis method, influence the reliability of DSMC-2. The most reliable DSMC-2 result is obtained with the best azimuthal coverage by the largest number of stations. The DSMC-2 method requires shorter time steps and a larger number of particles than DSMC-1 to capture a sufficient number of arriving particles in the small-sized receiver.
Methods for variance reduction in Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reducing the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is demonstrated using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
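One of the listed techniques, quasi-random (here, stratified) sampling, can be demonstrated on a one-dimensional integral. This toy comparison illustrates the variance-reduction principle and is not the authors' photon-transport code:

```python
import random

def plain_mc(f, n=1000, seed=11):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

def stratified_mc(f, n=1000, seed=11):
    """Stratified sampling: one uniform sample per subinterval
    [i/n, (i+1)/n).  Same cost as plain MC, but far lower variance
    for smooth integrands."""
    rng = random.Random(seed)
    return sum(f((i + rng.random()) / n) for i in range(n)) / n
```

Both estimators cost n function evaluations, but for smooth integrands the stratified error falls roughly as n^(−3/2) instead of the plain Monte Carlo n^(−1/2), so for ∫₀¹ x² dx = 1/3 the stratified estimate is orders of magnitude closer at equal cost.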
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2013-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
Monte Carlo simulation of a new gamma ray telescope
NASA Technical Reports Server (NTRS)
Simone, J.; Oneill, T.; Tumer, O. T.; Zych, A. D.
1985-01-01
A new Monte Carlo code has been written to simulate the response of the new University of California double scatter gamma ray telescope. This package of modular software routines, written in VAX FORTRAN 77 simulates the detection of 0.1 to 35 MeV gamma rays. The new telescope is flown from high altitude balloons to measure medium energy gamma radiation from astronomical sources. This paper presents (1) the basic physics methods in the code, and (2) the predicted response functions of the telescope. Gamma ray processes include Compton scattering, pair production and photoelectric absorption in plastic scintillator, NaI(Tl) and aluminum. Electron transport processes include ionization energy loss, multiple scattering, production of bremsstrahlung photons and positron annihilation.
Monte Carlo Simulation of the Rapid Crystallization of Bismuth-Doped Silicon
NASA Technical Reports Server (NTRS)
Jackson, Kenneth A.; Gilmer, George H.; Temkin, Dmitri E.
1995-01-01
In this Letter we report Ising model simulations of the growth of alloys which predict quite different behavior near and far from equilibrium. Our simulations reproduce the phenomenon which has been termed 'solute trapping,' where concentrations of solute, which are far in excess of the equilibrium concentrations, are observed in the crystal after rapid crystallization. This phenomenon plays an important role in many processes which involve first order phase changes which take place under conditions far from equilibrium. The underlying physical basis for it has not been understood, but these Monte Carlo simulations provide a powerful means for investigating it.
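The Ising-model Monte Carlo machinery behind such alloy simulations reduces, in its simplest form, to Metropolis spin flips. The sketch below is the plain two-dimensional ferromagnet, a minimal illustration rather than the solute-trapping model of the Letter:

```python
import math, random

def ising_metropolis(L=16, T=1.5, sweeps=200, seed=5):
    """Metropolis simulation of the 2D Ising ferromagnet (J = 1,
    periodic boundaries), started from the all-up state; returns the
    final magnetization per spin."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nb  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
    return sum(sum(row) for row in s) / (L * L)
```

Below the critical temperature (T_c ≈ 2.27 in these units) the lattice stays strongly magnetized; raising T well above T_c instead drives the magnetization toward zero.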
Quantitative PET Imaging Using A Comprehensive Monte Carlo System Model
Southekal, S.; Purschke, M.L.; Schlyer, D.J.; Vaska, P.
2011-10-01
We present the complete image generation methodology developed for the RatCAP PET scanner, which can be extended to other PET systems for which a Monte Carlo-based system model is feasible. The miniature RatCAP presents a unique set of advantages as well as challenges for image processing, and a combination of conventional methods and novel ideas developed specifically for this tomograph have been implemented. The crux of our approach is a low-noise Monte Carlo-generated probability matrix with integrated corrections for all physical effects that impact PET image quality. The generation and optimization of this matrix are discussed in detail, along with the estimation of correction factors and their incorporation into the reconstruction framework. Phantom studies and Monte Carlo simulations are used to evaluate the reconstruction as well as individual corrections for random coincidences, photon scatter, attenuation, and detector efficiency variations in terms of bias and noise. Finally, a realistic rat brain phantom study reconstructed using this methodology is shown to recover >90% of the contrast for hot as well as cold regions. The goal has been to realize the potential of quantitative neuroreceptor imaging with the RatCAP.
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
NASA Astrophysics Data System (ADS)
Hilburn, Guy Louis
Results from several studies are presented which detail explorations of the physical and spectral properties of low-luminosity active galactic nuclei. An initial Sagittarius A* general relativistic magnetohydrodynamic simulation and Monte Carlo radiation transport model suggests accretion rate changes as the dominant flaring mechanism. A similar study on M87 introduces new methods to the Monte Carlo model for increased consistency in highly energetic sources. Again, accretion rate variation seems most appropriate to explain spectral transients. To more closely resolve the mechanisms of particle energization in active galactic nuclei accretion disks, a series of localized shearing box simulations explores the effect of numerical resolution on the development of current sheets. A particular focus on numerically describing converged current sheet formation will provide new methods for consideration of turbulence in accretion disks.
Raga: Monte Carlo simulations of gravitational dynamics of non-spherical stellar systems
NASA Astrophysics Data System (ADS)
Vasiliev, Eugene
2014-11-01
Raga (Relaxation in Any Geometry) is a Monte Carlo simulation method for gravitational dynamics of non-spherical stellar systems. It is based on the SMILE software (ascl:1308.001) for orbit analysis. It can simulate stellar systems with a much smaller number of particles N than the number of stars in the actual system, represent an arbitrary non-spherical potential with a basis-set or spline spherical-harmonic expansion with the coefficients of expansion computed from particle trajectories, and compute particle trajectories independently and in parallel using a high-accuracy adaptive-timestep integrator. Raga can also model two-body relaxation by local (position-dependent) velocity diffusion coefficients (as in Spitzer's Monte Carlo formulation) and adjust the magnitude of relaxation to the actual number of stars in the target system, and model the effect of a central massive black hole.
Monte Carlo simulation of light fluence calculation during pleural PDT
NASA Astrophysics Data System (ADS)
Meo, Julia L.; Zhu, Timothy
2013-03-01
A thorough understanding of light distribution in the target tissue is necessary for accurate light dosimetry in PDT. Solving the light dose problem depends, in part, on the geometry of the tissue to be treated. When considering PDT in the thoracic cavity for the treatment of malignant, localized tumors such as those observed in malignant pleural mesothelioma (MPM), changes in light dose caused by the cavity geometry should be accounted for in order to improve treatment efficacy. Cavity-like geometries exhibit what is known as the "integrating sphere effect", where multiple light scattering off the cavity walls induces an overall increase in light dose in the cavity. We present a Monte Carlo simulation of light fluence based on spherical and elliptical cavity geometries of various dimensions. Both the tissue optical properties and the non-scattering medium (air or water) are varied. We have also introduced a small absorption inside the cavity to simulate the effect of blood absorption. We expand the MC simulation to track photons both within the cavity and in the surrounding cavity walls. Simulations are run for a variety of cavity optical properties determined using spectroscopic methods. We conclude from the MC simulation that the light fluence inside the cavity is inversely proportional to the surface area.
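The integrating-sphere effect described above has a simple geometric-series picture: if a photon survives each wall interaction with probability equal to the wall reflectance ρ, the mean number of wall hits is 1/(1 − ρ), which is the origin of the fluence enhancement and its inverse scaling with surface area. The sketch below is not the authors' code; it reduces the cavity to a single survival probability and checks the geometric-series sum by Monte Carlo.

```python
import random

def mean_bounces(reflectance, n_photons=20000, seed=1):
    """MC estimate of the mean number of wall hits per photon in a cavity
    whose walls return the photon with probability `reflectance`."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_photons):
        hits = 0
        while True:
            hits += 1                       # photon reaches the wall
            if rng.random() > reflectance:  # absorbed at the wall
                break
        total += hits
    return total / n_photons

# Integrating-sphere relation: the average fluence rate in a closed cavity
# of surface area A scales as phi ~ 4 P / (A * (1 - rho)) for source power P,
# so the mean bounce count should approach 1 / (1 - rho).
rho = 0.8
print(mean_bounces(rho))  # close to 1 / (1 - 0.8) = 5
```

The 1/(1 − ρ) factor multiplies the single-pass fluence, which itself scales as 1/A, consistent with the abstract's conclusion.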
Quantum Monte Carlo study of bilayer ionic Hubbard model
NASA Astrophysics Data System (ADS)
Jiang, M.; Schulthess, T. C.
2016-04-01
The interaction-driven insulator-to-metal transition has been reported in the ionic Hubbard model (IHM) for moderate interaction U, while its metallic phase occupies only a narrow region in the phase diagram. To explore the enlargement of the metallic regime, we extend the ionic Hubbard model to two coupled layers and study the interplay of the interlayer hybridization V and two types of intralayer staggered potentials Δ: one with the same (in-phase) potential between layers and the other with a π-phase shift (antiphase). Our determinant quantum Monte Carlo (DQMC) simulations at the lowest accessible temperatures demonstrate that the interaction-driven metallic phase between the Mott and band insulators expands in the Δ-V phase diagram of the bilayer IHM only for in-phase ionic potentials, while the antiphase potential always induces an insulator with charge density order. This implies a possible further extension of the ionic Hubbard model from the bilayer case here to a realistic three-dimensional model.
Petroccia, H; Bolch, W; Li, Z; Mendenhall, N
2015-06-15
Purpose: Mean organ doses to structures located inside and outside the field boundaries during radiotherapy treatment must be considered when studying secondary effects. For patients with 40 years of follow-up, treatment planning did not include 3-D imaging and did not estimate doses to structures outside the direct field. It is therefore of interest to correlate actual clinical events with the doses received. Methods: Accurate models of radiotherapy machines, combined with whole-body computational phantoms using Monte Carlo methods, allow dose reconstructions intended for studies of late radiation effects. The Theratron-780 radiotherapy unit and anatomically realistic hybrid computational phantoms are modeled in the Monte Carlo radiation transport code MCNPX. The major components of the machine, including the source capsule, lead in the unit head, collimators (fixed/adjustable), and trimmer bars, are simulated. The MCNPX transport code is used to compare calculated values in a water phantom with published data from BJR Suppl. 25 for in-field doses and experimental data from AAPM Task Group No. 36 for out-of-field doses. Next, the validated cobalt-60 teletherapy model is combined with the UF/NCI Family of Reference Hybrid Computational Phantoms as a methodology for estimating organ doses. Results: The Theratron-780 model has been shown to agree with percentage depth dose data within approximately 1%, and for out-of-field doses the machine agrees within 8.8%. Organ doses are reported for the reference hybrid phantoms. Conclusion: Combining the UF/NCI Family of Reference Hybrid Computational Phantoms with a validated model of the Theratron-780 allows organ dose estimates for both in-field and out-of-field organs. By changing field size and position and adding patient-specific blocking, more complicated treatment set-ups can be recreated for patients treated historically, particularly those who lack both 2D/3D image sets.
Entropic effects in large-scale Monte Carlo simulations.
Predescu, Cristian
2007-07-01
The efficiency of Monte Carlo samplers is dictated not only by energetic effects, such as large barriers, but also by entropic effects that are due to the sheer volume that is sampled. The latter effects appear in the form of an entropic mismatch or divergence between the direct and reverse trial moves. We provide lower and upper bounds for the average acceptance probability in terms of the Rényi divergence of order 1/2. We show that the asymptotic finitude of the entropic divergence is the necessary and sufficient condition for nonvanishing acceptance probabilities in the limit of large dimension. Furthermore, we demonstrate that the upper bound is reasonably tight by showing that the exponent is asymptotically exact for systems made up of a large number of independent and identically distributed subsystems. For the last statement, we provide an alternative proof that relies on the reformulation of the acceptance probability as a large deviation problem. The reformulation also leads to a class of low-variance estimators for strongly asymmetric distributions. We show that the entropic divergence causes a decay in the average displacements with the number of dimensions n that are simultaneously updated. For systems that have a well-defined thermodynamic limit, the decay is demonstrated to be n^(-1/2) for random-walk Monte Carlo and n^(-1/6) for smart Monte Carlo (SMC). Numerical simulations of the Lennard-Jones 38 (LJ38) cluster show that SMC is virtually as efficient as the Markov chain implementation of the Gibbs sampler, which is normally utilized for Lennard-Jones clusters. An application of the entropic inequalities to the parallel tempering method demonstrates that the number of replicas increases as the square root of the heat capacity of the system. PMID:17677591
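The decay of acceptance probability with the number of simultaneously updated dimensions can be seen qualitatively in a toy experiment: a Metropolis random walk on n independent standard Gaussians with a fixed per-coordinate step. This is an illustrative sketch of the entropic-mismatch effect, not the paper's calculation; the target, step size, and iteration count are arbitrary choices.

```python
import math, random

def acceptance_rate(n, step, iters=4000, seed=2):
    """Metropolis random walk on n i.i.d. standard Gaussians; all n
    coordinates are perturbed at once with a fixed per-coordinate step."""
    rng = random.Random(seed)
    x = [0.0] * n
    e = 0.5 * sum(xi * xi for xi in x)   # energy of current state
    accepted = 0
    for _ in range(iters):
        y = [xi + step * rng.uniform(-1.0, 1.0) for xi in x]
        e_new = 0.5 * sum(yi * yi for yi in y)
        if rng.random() < math.exp(min(0.0, e - e_new)):
            x, e = y, e_new
            accepted += 1
    return accepted / iters

# With the per-coordinate step held fixed, acceptance collapses as more
# coordinates are updated simultaneously: the entropic mismatch between
# direct and reverse moves grows with n.
print(acceptance_rate(1, 1.0), acceptance_rate(100, 1.0))
```

Keeping the acceptance rate fixed as n grows forces the step (and hence the average displacement) to shrink, which is the regime where the n^(-1/2) scaling quoted above applies.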
Surface tension of water-alcohol mixtures from Monte Carlo simulations.
Biscay, F; Ghoufi, A; Malfreyt, P
2011-01-28
Monte Carlo simulations are reported to predict the dependence of the surface tension of water-alcohol mixtures on the alcohol concentration. Alcohols are modeled using the anisotropic united atom model recently extended to alcohol molecules. The molecular simulations show a good agreement between the experimental and calculated surface tensions for the water-methanol and water-propanol mixtures. This good agreement with experiments is also established through the comparison of the excess surface tensions. A molecular description of the mixture in terms of density profiles and hydrogen bond profiles is used to interpret the decrease of the surface tension with the alcohol concentration and alcohol chain length. PMID:21280787
Review of Monte Carlo modeling of light transport in tissues.
Zhu, Caigang; Liu, Quan
2013-05-01
A general survey is provided on the capability of Monte Carlo (MC) modeling in tissue optics while paying special attention to the recent progress in the development of methods for speeding up MC simulations. The principles of MC modeling for the simulation of light transport in tissues, which includes the general procedure of tracking an individual photon packet, common light-tissue interactions that can be simulated, frequently used tissue models, common contact/noncontact illumination and detection setups, and the treatment of time-resolved and frequency-domain optical measurements, are briefly described to help interested readers achieve a quick start. Following that, a variety of methods for speeding up MC simulations, which includes scaling methods, perturbation methods, hybrid methods, variance reduction techniques, parallel computation, and special methods for fluorescence simulations, as well as their respective advantages and disadvantages are discussed. Then the applications of MC methods in tissue optics, laser Doppler flowmetry, photodynamic therapy, optical coherence tomography, and diffuse optical tomography are briefly surveyed. Finally, the potential directions for the future development of the MC method in tissue optics are discussed. PMID:23698318
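The "general procedure of tracking an individual photon packet" surveyed above can be sketched in a few lines in the style of implicit-capture weighting with Russian-roulette termination (as in MCML-type codes). This is a minimal illustration, not code from the review: it assumes an infinite homogeneous medium, so photon positions need not be tracked and only the weight budget matters.

```python
import math, random

def photon_packet_absorbed_fraction(mu_a, mu_s, n_photons=5000, seed=3):
    """Minimal photon-packet loop: sample an exponential step, deposit the
    absorbed part of the packet weight, terminate by Russian roulette."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    rng = random.Random(seed)
    absorbed = 0.0
    for _ in range(n_photons):
        w = 1.0
        while w > 0.0:
            # step length ~ Exp(mu_t); the position itself is irrelevant
            # in an infinite medium, so the sample is drawn but unused
            _ = -math.log(1.0 - rng.random()) / mu_t
            absorbed += w * (1.0 - albedo)   # deposit absorbed fraction
            w *= albedo                      # survival weighting
            if w < 1e-4:                     # Russian roulette (unbiased)
                if rng.random() < 0.1:
                    w /= 0.1
                else:
                    w = 0.0
    return absorbed / n_photons

# In an infinite medium every launched packet is eventually absorbed,
# so the absorbed fraction should come out close to 1.
print(photon_packet_absorbed_fraction(mu_a=1.0, mu_s=10.0))
```

Real tissue-optics codes add geometry, anisotropic (e.g. Henyey-Greenstein) scattering, and boundary refraction on top of this same weight-tracking skeleton.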
NASA Astrophysics Data System (ADS)
De Napoli, M.; Romano, F.; D'Urso, D.; Licciardello, T.; Agodi, C.; Candiano, G.; Cappuzzello, F.; Cirrone, G. A. P.; Cuttone, G.; Musumarra, A.; Pandola, L.; Scuderi, V.
2014-12-01
When a carbon beam interacts with human tissues, many secondary fragments are produced in the tumor region and the surrounding healthy tissues. In hadrontherapy, precise dose calculations therefore require Monte Carlo tools equipped with complex nuclear reaction models. To obtain realistic predictions, however, simulation codes must be validated against experimental results; the wider the dataset, the more finely the models can be tuned. Since no fragmentation data for tissue-equivalent materials at Fermi energies are available in the literature, we measured secondary fragments produced by the interaction of a 55.6 MeV/u 12C beam with thick muscle and cortical bone targets. Three reaction models used by the Geant4 Monte Carlo code, the Binary Light Ion Cascade, the Quantum Molecular Dynamics and the Liège Intranuclear Cascade, have been benchmarked against the collected data. In this work we present the experimental results and discuss the predictive power of the above-mentioned models.
NASA Astrophysics Data System (ADS)
Liu, Quan; Ramanujam, Nirmala
2007-04-01
A scaling Monte Carlo method has been developed to calculate diffuse reflectance from multilayered media with a wide range of optical properties in the ultraviolet-visible wavelength range. This multilayered scaling method employs the photon trajectory information generated from a single baseline Monte Carlo simulation of a homogeneous medium to scale the exit distance and exit weight of photons for a new set of optical properties in the multilayered medium. The scaling method is particularly suited to simulating diffuse reflectance spectra or creating a Monte Carlo database to extract optical properties of layered media, both of which are demonstrated in this paper. Particularly, it was found that the root-mean-square error (RMSE) between scaled diffuse reflectance, for which the anisotropy factor and refractive index in the baseline simulation were, respectively, 0.9 and 1.338, and independently simulated diffuse reflectance was less than or equal to 5% for source-detector separations from 200 to 1500 μm when the anisotropy factor of the top layer in a two-layered epithelial tissue model was varied from 0.8 to 0.99; in contrast, the RMSE was always less than 5% for all separations (from 0 to 1500 μm) when the anisotropy factor of the bottom layer was varied from 0.7 to 0.99. When the refractive index of either layer in the two-layered tissue model was varied from 1.3 to 1.4, the RMSE was less than 10%. The scaling method can reduce computation time by more than 2 orders of magnitude compared with independent Monte Carlo simulations.
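The core idea of reusing a single baseline simulation, rescaling each recorded photon's exit distance and exit weight for new optical properties, can be sketched with the commonly used single-medium scaling rules: path lengths scale inversely with the new transport coefficient, and the weight is rescaled by the new albedo per collision. This is a simplified sketch under the assumption of a non-absorbing baseline run, not the paper's multilayered method.

```python
def scale_exit(r_base, n_coll, mu_s_base, mu_s_new, mu_a_new):
    """Rescale one baseline photon (exit distance r_base, n_coll collisions,
    simulated with scattering coefficient mu_s_base and no absorption) to a
    new medium with coefficients (mu_s_new, mu_a_new)."""
    mu_t_new = mu_s_new + mu_a_new
    # distances shrink/stretch with the ratio of transport coefficients
    r_new = r_base * mu_s_base / mu_t_new
    # each collision now carries the new single-scattering albedo
    albedo_new = mu_s_new / mu_t_new
    w_new = albedo_new ** n_coll
    return r_new, w_new

# Unchanged optical properties must reproduce the baseline photon exactly.
print(scale_exit(1.0, 3, 10.0, 10.0, 0.0))  # (1.0, 1.0)
```

Because only a lookup and two multiplications are needed per recorded photon, this kind of scaling is what makes the two-orders-of-magnitude speedup quoted in the abstract plausible.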
Monte Carlo simulator of realistic x-ray beam for diagnostic applications
Bontempi, Marco; Andreani, Lucia; Rossi, Pier Luca; Visani, Andrea
2010-08-15
Purpose: Monte Carlo simulation is a very useful tool for radiotherapy and diagnostic radiology. Yet even with the latest PCs, simulation of the photon spectra emitted by an x-ray tube is a time-consuming task, potentially limiting the ability to obtain relevant data such as dose evaluations, simulations of geometric settings, or detector efficiency monitoring. This study developed and validated a method to generate random numbers for realistic beams, in terms of photon spectrum and intensity, to simulate x-ray tubes via Monte Carlo algorithms. Methods: Starting from literature data, the most common semiempirical models of bremsstrahlung are analyzed and implemented, adjusting their formulation to describe a large irradiation area (i.e., large field of view) and to take account of the heel effect, as in common practice during patient examinations. Results: Simulation results show that Birch and Marshall's model is the fastest and most accurate for the aims of this work. Correction of the geometric size of the beam and validation of the intensity variation (heel effect) yielded excellent results, with differences between experimental and simulated data of less than 6%. Conclusions: The results of validation and execution time showed that the tube simulator calculates the x-ray photons quickly and efficiently and is perfectly capable of considering all the phenomena occurring in a real beam (total filtration, focal spot size, and heel effect), so it can be used in a wide range of applications such as industry, medical physics, or quality assurance.
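The core task of such a tube simulator, drawing random photon energies distributed according to a tabulated spectrum, is typically done by inverse-transform sampling of the cumulative distribution. A minimal sketch with a hypothetical four-bin spectrum (the bin values are invented for illustration, not taken from the Birch-Marshall model):

```python
import bisect, random

def build_sampler(energies, intensities, seed=4):
    """Inverse-transform sampler for a tabulated spectrum: draw a uniform
    random number and look up its energy bin in the cumulative table."""
    total = float(sum(intensities))
    cdf, acc = [], 0.0
    for w in intensities:
        acc += w / total
        cdf.append(acc)
    cdf[-1] = 1.0  # guard against floating-point shortfall
    rng = random.Random(seed)
    def sample():
        return energies[bisect.bisect_left(cdf, rng.random())]
    return sample

# Hypothetical 4-bin bremsstrahlung-like spectrum (keV bins).
sample = build_sampler([20, 40, 60, 80], [1.0, 3.0, 2.0, 0.5])
draws = [sample() for _ in range(20000)]
print(draws.count(40) / len(draws))  # near 3.0 / 6.5, about 0.46
```

A heel-effect correction would make the intensity table a function of the emission angle, so a separate cumulative table (or a 2-D inversion) is built per angular bin.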
Monte Carlo Simulation of Sudden Death Bearing Testing
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life, and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30 960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not of the total number of bearings tested. Variations in L10 life as a function of the number of bearings failed were similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
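Virtual bearing lives of the kind used in such studies are drawn from a two-parameter Weibull distribution, and the L10 life is its 10th percentile. The sketch below uses hypothetical parameters (not the paper's 50-mm bearing data) to show the sampling and the percentile estimate:

```python
import math, random

def simulate_l10(eta, beta, n_bearings=20000, seed=5):
    """Monte Carlo estimate of the L10 life: sample virtual bearing lives
    from a two-parameter Weibull(eta, beta) by inverse transform and take
    the 10th percentile of the sorted sample."""
    rng = random.Random(seed)
    lives = sorted(eta * (-math.log(1.0 - rng.random())) ** (1.0 / beta)
                   for _ in range(n_bearings))
    return lives[int(0.10 * n_bearings)]

# Analytically L10 = eta * (-ln 0.9)^(1/beta); the MC estimate should agree.
# In a sudden-death group of m bearings, the first-failure life is itself
# Weibull with the same slope beta and scale eta * m**(-1.0 / beta).
print(simulate_l10(100.0, 1.5))
```

The comment on first-failure lives is the standard weakest-link property of the Weibull distribution, which is what makes sudden death test groups tractable analytically.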
Commissioning of a Varian Clinac iX 6 MV photon beam using Monte Carlo simulation
Dirgayussa, I Gde Eka; Yani, Sitti; Haryanto, Freddy; Rhani, M. Fahdillah
2015-09-30
Monte Carlo modelling of a linear accelerator is the first and most important step in Monte Carlo dose calculations in radiotherapy. Monte Carlo is considered today to be the most accurate and detailed calculation method in different fields of medical physics. In this research, we developed a photon beam model of a Varian Clinac iX 6 MV equipped with a Millennium MLC 120 for dose calculation purposes, using the BEAMnrc/DOSXYZnrc Monte Carlo system based on the underlying EGSnrc particle transport code. The commissioning of the LINAC head was divided into two stages: designing the head model using BEAMnrc and characterizing it using BEAMDP, then analyzing the differences between simulation and measurement using DOSXYZnrc. In the first step, to reduce simulation time, the virtual treatment head was built in two parts (a patient-dependent component and a patient-independent component). The incident electron energy was varied over 6.1, 6.2, 6.3, 6.4, and 6.6 MeV, with a source FWHM (full width at half maximum) of 1 mm. The phase-space file from the virtual model was characterized using BEAMDP. The MC calculations with DOSXYZnrc in a water phantom yielded percent depth doses (PDDs) and beam profiles at 10 cm depth, which were compared with measurements. The process is considered complete when the difference between measured and calculated relative depth-dose data along the central axis and dose profiles at 10 cm depth is ≤ 5%. The effect of beam width on percentage depth doses and beam profiles was studied. The virtual model was in close agreement with measurements at an incident electron energy of 6.4 MeV. Our results showed that the photon beam width could be tuned using the large-field beam profile at the depth of maximum dose. The Monte Carlo model developed in this study accurately represents the Varian Clinac iX with the Millennium MLC 120 and can be used for reliable patient dose calculations.
Radiation doses in cone-beam breast computed tomography: A Monte Carlo simulation study
Yi, Ying; Lai, Chao-Jen; Han, Tao; Zhong, Yuncheng; Shen, Youtao; Liu, Xinming; Ge, Shuaiping; You, Zhicheng; Wang, Tianpeng; Shaw, Chris C.
2011-02-15
Purpose: In this article, we describe a method to estimate the spatial dose variation, average dose, and mean glandular dose (MGD) for a real breast using Monte Carlo simulation based on cone-beam breast computed tomography (CBBCT) images. We present and discuss the dose estimation results for 19 mastectomy breast specimens, 4 homogeneous breast models, 6 ellipsoidal phantoms, and 6 cylindrical phantoms. Methods: To validate the Monte Carlo method for dose estimation in CBBCT, we compared the Monte Carlo dose estimates with thermoluminescent dosimeter measurements at various radial positions in two polycarbonate cylinders (11 and 15 cm in diameter). Cone-beam computed tomography (CBCT) images of 19 mastectomy breast specimens, obtained with a bench-top experimental scanner, were segmented and used to construct 19 structured breast models. Monte Carlo simulation of CBBCT with these models was performed and used to estimate the point doses, average doses, and mean glandular doses for unit open-air exposure at the iso-center. Mass-based glandularity values were computed and used to investigate their effects on the average doses as well as the mean glandular doses. Average doses for 4 homogeneous breast models were estimated and compared to those of the corresponding structured breast models to investigate the effect of tissue structures. Average doses for ellipsoidal and cylindrical digital phantoms of identical diameter and height were also estimated for various glandularity values and compared with those for the structured breast models. Results: The absorbed dose maps for the structured breast models show that doses in the glandular tissue were higher than those in the nearby adipose tissue. Estimated average doses for the homogeneous breast models were almost identical to those for the structured breast models (p=1). Normalized average doses estimated for the ellipsoidal phantoms were similar to those for the structured breast models.
Million-Body Star Cluster Simulations: Comparisons between Monte Carlo and Direct N-body
NASA Astrophysics Data System (ADS)
Rodriguez, Carl L.; Morscher, Meagan; Wang, Long; Chatterjee, Sourav; Rasio, Frederic A.; Spurzem, Rainer
2016-08-01
We present the first detailed comparison between million-body globular cluster simulations computed with a Hénon-type Monte Carlo code, CMC, and a direct N-body code, NBODY6++GPU. Both simulations start from an identical cluster model with 10^6 particles, and include all of the relevant physics needed to treat the system in a highly realistic way. With the two codes "frozen" (no fine-tuning of any free parameters or internal algorithms of the codes) we find good agreement in the overall evolution of the two models. Furthermore, we find that in both models, large numbers of stellar-mass black holes (>1000) are retained for 12 Gyr. Thus, the very accurate direct N-body approach confirms recent predictions that black holes can be retained in present-day, old globular clusters. We find only minor disagreements between the two models and attribute these to the small-N dynamics driving the evolution of the cluster core for which the Monte Carlo assumptions are less ideal. Based on the overwhelming general agreement between the two models computed using these vastly different techniques, we conclude that our Monte Carlo approach, which is more approximate, but dramatically faster compared to the direct N-body, is capable of producing an accurate description of the long-term evolution of massive globular clusters even when the clusters contain large populations of stellar-mass black holes.
Monte Carlo Simulation of Siemens ONCOR Linear Accelerator with BEAMnrc and DOSXYZnrc Code
Jabbari, Keyvan; Anvar, Hossein Saberi; Tavakoli, Mohammad Bagher; Amouheidari, Alireza
2013-01-01
The Monte Carlo method is the most accurate method for simulation of radiation therapy equipment. Linear accelerators (linacs) are currently the most widely used machines in radiation therapy centers. In this work, Monte Carlo modeling of the Siemens ONCOR linear accelerator was performed for the 6 MV and 18 MV beams. The results of the simulation were validated by measurements in water with an ionization chamber and with extended dose range (EDR2) film in solid water. The linac's X-ray output is highly sensitive to the properties of the primary electron beam. A square field size of 10 cm × 10 cm produced by the jaws was compared with ionization chamber and film measurements. Head simulation was performed with BEAMnrc and dose calculation with DOSXYZnrc; the 3ddose files produced by DOSXYZnrc were analyzed using a homemade MATLAB program. At 6 MV, agreement between the dose calculated by Monte Carlo modeling and direct measurement was within 1%, even in the build-up region. At 18 MV, agreement was within 1% except in the build-up region, where the difference was 2% (compared with 1% at 6 MV). The mean difference between measurements and Monte Carlo simulation is very small for both ONCOR X-ray energies. The results are highly accurate and can be used for many applications, such as patient dose calculation in treatment planning and in studies that model this linac with small field sizes, such as the intensity-modulated radiation therapy technique. PMID:24672765
Photon beam characterization and modelling for Monte Carlo treatment planning
NASA Astrophysics Data System (ADS)
Deng, Jun; Jiang, Steve B.; Kapur, Ajay; Li, Jinsheng; Pawlicki, Todd; Ma, C.-M.
2000-02-01
Photon beams of 4, 6 and 15 MV from Varian Clinac 2100C and 2300C/D accelerators were simulated using the EGS4/BEAM code system. The accelerators were modelled as a combination of component modules (CMs) consisting of a target, primary collimator, exit window, flattening filter, monitor chamber, secondary collimator, ring collimator, photon jaws and protection window. A full phase space file was scored directly above the upper photon jaws and analysed using beam data processing software, BEAMDP, to derive the beam characteristics, such as planar fluence, angular distribution, energy spectrum and the fractional contributions of each individual CM. A multiple-source model has been further developed to reconstruct the original phase space. Separate sources were created with accurate source intensity, energy, fluence and angular distributions for the target, primary collimator and flattening filter. Good agreement (within 2%) between the Monte Carlo calculations with the source model and those with the original phase space was achieved in the dose distributions for field sizes of 4 cm × 4 cm to 40 cm × 40 cm at source surface distances (SSDs) of 80-120 cm. The dose distributions in lung and bone heterogeneous phantoms have also been found to be in good agreement (within 2%) for 4, 6 and 15 MV photon beams for various field sizes between the Monte Carlo calculations with the source model and those with the original phase space.
Monte Carlo simulation of turnover processes in the lunar regolith
NASA Technical Reports Server (NTRS)
Arnold, J. R.
1975-01-01
A Monte Carlo model for the gardening of the lunar surface by meteoritic impact is described, and some representative results are given. The model accounts with reasonable success for a wide variety of properties of the regolith. The smoothness of the lunar surface on a scale of centimeters to meters, which was not reproduced in an earlier version of the model, is accounted for by the preferential downward movement of low-energy secondary particles. The time scale for filling lunar grooves and craters by this process is also derived. The experimental bombardment ages (about 4 × 10^8 yr for spallogenic rare gases, about 10^9 yr for neutron-capture Gd and Sm isotopes) are not reproduced by the model. The explanation is not obvious.
Monte Carlo simulation of the transport of atoms in DC magnetron sputtering
NASA Astrophysics Data System (ADS)
Mahieu, S.; Buyle, G.; Depla, D.; Heirwegh, S.; Ghekiere, P.; De Gryse, R.
2006-02-01
In this work, we present a Monte Carlo simulation for the transport of sputtered particles during DC magnetron sputter deposition through the gas phase. The nascent sputter flux has been simulated by SRIM and TRIM, while the collisions of the sputtered atoms with the sputter gas are simulated with a screened Coulomb potential, with the Molière screening function and the Firsov screening length. The model calculates the flux of the atoms arriving at the substrate, their energy, direction and number of collisions they underwent. The model was verified by comparing the simulated thickness profiles with experimental profiles of deposited layers of Al, Cu and Zr/Y (85/15 wt%) on large substrates (ratio of the substrate diameter to the target diameter is 8). A good agreement between the experimental data and the simulations for sputter pressures (0.3-1 Pa) and target-substrate distances (7-16 cm) is obtained.
Ghoufi, Aziz; Morineau, Denis; Lefort, Ronan; Hureau, Ivanne; Hennous, Leila; Zhu, Haochen; Szymczyk, Anthony; Malfreyt, Patrice; Maurin, Guillaume
2011-02-21
Confinement effects are commonly studied with grand canonical Monte Carlo (GCMC) simulations through the computation of the density of the liquid in the confined phase. GCMC modeling and chemical potential (μ) calculations are based on the insertion and deletion of real and ghost particles, respectively. At high density, i.e., at high pressure or low temperature, Widom insertions fail, while better-performing approaches such as the expanded-ensemble method or the perturbation approach are not efficient for large and complex molecules. To overcome this problem we use a simple and efficient method to compute the liquid's density in the confined medium. This method does not require the precalculation of μ and is an alternative to GCMC simulations. From the isothermal-isosurface-isobaric statistical ensemble we consider an explicit framework/liquid external interface to model an explicit liquid reservoir. In this procedure only the liquid molecules undergo volume changes, while the volume of the framework is kept constant. The method is therefore described in the Np(n)AV(f)T statistical ensemble, where N is the number of particles, p(n) is the normal pressure, V(f) is the volume of the framework, A is the surface of the solid/fluid interface, and T is the temperature. This approach is applied and validated through the computation of the density of methanol and water confined in mesoporous cylindrical silica nanopores and in the MIL-53(Cr) metal-organic framework, respectively. PMID:21341825
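The Widom insertion step whose failure at high density motivates this work can be sketched in one dimension: the excess chemical potential is estimated from the average Boltzmann factor of randomly inserted ghost particles, μ_ex = −kT ln⟨exp(−βΔU)⟩. This is a toy illustration with an invented pair potential, not the authors' code.

```python
import math, random

def widom_mu_excess(positions, box, beta, pair_u, n_trials=2000, seed=6):
    """Widom test-particle estimate of the excess chemical potential in a
    1-D periodic box: average exp(-beta * dU) over random ghost insertions."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_trials):
        xg = rng.uniform(0.0, box)            # ghost particle position
        du = sum(pair_u(min(abs(xg - x), box - abs(xg - x)))
                 for x in positions)           # minimum-image pair energy
        acc += math.exp(-beta * du)
    return -math.log(acc / n_trials) / beta

# Sanity check: with no interactions the excess chemical potential is zero.
mu = widom_mu_excess([1.0, 2.5, 4.0], box=5.0, beta=1.0,
                     pair_u=lambda r: 0.0)
print(mu)  # essentially zero for the ideal gas
```

At high density nearly every trial insertion overlaps an existing particle, so exp(−βΔU) ≈ 0 for almost all trials and the estimator's variance explodes, which is exactly the failure mode the abstract cites.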
Monte Carlo modeling of spallation targets containing uranium and americium
NASA Astrophysics Data System (ADS)
Malyshkin, Yury; Pshenichnov, Igor; Mishustin, Igor; Greiner, Walter
2014-09-01
Neutron production and transport in spallation targets made of uranium and americium are studied with MCADS (Monte Carlo model for Accelerator Driven Systems), a Geant4-based code. Good agreement of MCADS results with experimental data on neutron- and proton-induced reactions on 241Am and 243Am nuclei allows us to use this model for simulations with extended Am targets. It was demonstrated that the MCADS model can be used to calculate critical masses for 233,235U, 237Np, 239Pu and 241Am. Several geometry options and material compositions (U, U + Am, Am, Am2O3) are considered for spallation targets to be used in Accelerator Driven Systems. All considered options operate as deeply subcritical targets with a neutron multiplication factor of k ∼ 0.5. It is found that more than 4 kg of Am can be burned in one spallation target during the first year of operation.
HIBAYES: Global 21-cm Bayesian Monte-Carlo Model Fitting
NASA Astrophysics Data System (ADS)
Zwart, Jonathan T. L.; Price, Daniel; Bernardi, Gianni
2016-06-01
HIBAYES implements fully Bayesian extraction of the sky-averaged (global) 21-cm signal from the Cosmic Dawn and Epoch of Reionization in the presence of foreground emission. User-defined likelihood and prior functions are called by the sampler PyMultiNest (ascl:1606.005) in order to jointly explore the full (signal plus foreground) posterior probability distribution and evaluate the Bayesian evidence for a given model. Implemented models, for both simulation and fitting, include Gaussians (HI signal) and polynomials (foregrounds). Some simple plotting and analysis tools are supplied. The code can be extended to other models (physical or empirical), to incorporate data from other experiments, or to use alternative Monte Carlo sampling engines as required.
Dosimetry of gamma chamber blood irradiator using PAGAT gel dosimeter and Monte Carlo simulations.
Mohammadyari, Parvin; Zehtabian, Mehdi; Sina, Sedigheh; Tavasoli, Ali Reza; Faghihi, Reza
2014-01-01
Currently, the use of blood irradiation for inactivating pathogenic microbes in infected blood products and preventing graft-versus-host disease (GVHD) in immune-suppressed patients is greater than ever before. In these systems, dose distribution and dose uniformity are two important quantities that should be checked. In this study, dosimetry of the Gammacell 3000 Elan gamma chamber blood irradiator was performed with several methods: thermoluminescence dosimeters (TLDs), PAGAT gel dosimetry, and Monte Carlo simulations using the MCNP4C code. The gel dosimeter was put inside a glass phantom, the TL dosimeters were placed on its surface, and the phantom was then irradiated for 5 min and 27 s. The dose values at each point inside the vials were obtained from magnetic resonance imaging of the phantom. For the Monte Carlo simulations, all components of the irradiator were simulated and the dose values on a fine cubic lattice were calculated using tally F6. This study shows that the PAGAT gel dosimetry results are in close agreement with the results of TL dosimetry, the Monte Carlo simulations, and the values given by the vendor; the difference between the methods is less than 4% at different points inside the phantom. According to these results, PAGAT gel dosimetry is a reliable method for dosimetry of the blood irradiator, and its major advantage is that it is capable of 3D dose calculation. PMID:24423829
Monte Carlo simulations of ABC stacked kagome lattice films
NASA Astrophysics Data System (ADS)
Yerzhakov, H. V.; Plumer, M. L.; Whitehead, J. P.
2016-05-01
Properties of films of geometrically frustrated ABC stacked antiferromagnetic kagome layers are examined using Metropolis Monte Carlo simulations. The impact of having an easy-axis anisotropy on the surface layers and cubic anisotropy in the interior layers is explored. The spin structure at the surface is shown to be different from that of the bulk 3D fcc system, where surface axial anisotropy tends to align spins along the surface [1 1 1] normal axis. This alignment then propagates only weakly to the interior layers through exchange coupling. Results are shown for the specific heat, magnetization and sub-lattice order parameters for both surface and interior spins in three- and six-layer films as a function of increasing axial surface anisotropy. Relevance to the exchange bias phenomenon in IrMn3 films is discussed.
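The Metropolis updates underlying such simulations are standard. A minimal sketch for classical Heisenberg spins with an easy-axis (axial) anisotropy term, assuming the energy convention E = J Σ_<ij> S_i·S_j − K Σ_i (S_i·axis)² with J > 0 antiferromagnetic (the actual Hamiltonian, lattice, and cubic-anisotropy term of the study are not reproduced here):

```python
import math
import random

def random_unit_vector():
    """Uniformly distributed random direction on the unit sphere."""
    z = 2.0 * random.random() - 1.0
    phi = 2.0 * math.pi * random.random()
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def metropolis_sweep(spins, neighbors, J, K_axial, axis, beta):
    """One Metropolis sweep: propose a fresh random direction per spin
    and accept with probability min(1, exp(-beta * dE))."""
    for i in range(len(spins)):
        new, old = random_unit_vector(), spins[i]
        dE = 0.0
        for j in neighbors[i]:  # exchange energy change
            sj = spins[j]
            dE += J * sum((new[k] - old[k]) * sj[k] for k in range(3))
        # easy-axis anisotropy energy change
        proj_new = sum(new[k] * axis[k] for k in range(3))
        proj_old = sum(old[k] * axis[k] for k in range(3))
        dE -= K_axial * (proj_new ** 2 - proj_old ** 2)
        if dE <= 0.0 or random.random() < math.exp(-beta * dE):
            spins[i] = new
```

In a film geometry, K_axial would be nonzero only for spins in the surface layers, matching the setup described above.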
Optimization of Monte Carlo transport simulations in stochastic media
Liang, C.; Ji, W.
2012-07-01
This paper presents an accurate and efficient approach to optimizing radiation transport simulations in highly heterogeneous stochastic media, such as Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest-neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of stochastic media packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the search for the next sphere boundary during the radiation transport procedure. To investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented in an in-house continuous-energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with MCNP benchmark calculations for the same problem indicates that the new methods achieve considerably higher computational efficiency. (authors)
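The core idea, RSA accelerated by a spatial grid so each trial sphere is checked only against nearby spheres, can be sketched as follows. This is a simplified mono-sized, non-periodic version under assumed conventions; the paper's modified RSA also handles poly-sized spheres:

```python
import random

def rsa_pack(n_spheres, radius, box, max_tries=100000):
    """Random Sequential Addition of equal spheres in a cubic box,
    using a cell list (cell edge >= one diameter) so overlap checks
    only visit the 27 cells around the trial position."""
    ncell = max(1, int(box / (2.0 * radius)))
    cell = box / ncell
    grid, centers, tries = {}, [], 0
    while len(centers) < n_spheres and tries < max_tries:
        tries += 1
        # Trial center kept fully inside the (non-periodic) box.
        p = [radius + random.random() * (box - 2 * radius) for _ in range(3)]
        key = tuple(int(c / cell) for c in p)
        ok = True
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for q in grid.get((key[0] + dx, key[1] + dy, key[2] + dz), ()):
                        if sum((a - b) ** 2 for a, b in zip(p, q)) < (2 * radius) ** 2:
                            ok = False
        if ok:
            centers.append(p)
            grid.setdefault(key, []).append(p)
    return centers
```

The same grid structure serves the transport side: locating the nearest candidate sphere boundary along a neutron's flight path only requires visiting the cells the ray passes through.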
Monte Carlo simulations for optimization of neutron shielding concrete
NASA Astrophysics Data System (ADS)
Piotrowski, Tomasz; Tefelski, Dariusz; Polański, Aleksander; Skubalski, Janusz
2012-06-01
Concrete is one of the main materials used for gamma and neutron shielding. While for gamma rays an increase in density is usually sufficient, protection against neutrons is more complex. The aim of this paper is to show the possibility of using Monte Carlo codes for the evaluation and optimization of concrete mixes to achieve better neutron shielding. Two codes (MCNPX and SPOT, the latter written by the authors) were used to simulate neutron transport through a wall made of different concretes. It is shown that concrete of higher compressive strength attenuates neutrons more effectively. The advantage of heavyweight concrete (with barite aggregate), usually used for gamma shielding, over ordinary concrete was not so clear. Neutron shielding depends on many factors, e.g. neutron energy, barrier thickness and atomic composition. All this makes proper concrete mix design a very important issue for nuclear power plant safety assurance.
Lucena, Sebastião M P; Mileo, Paulo G M; Silvino, Pedro F G; Cavalcante, Célio L
2011-12-01
The adsorption equilibrium of methane in PCN-14 was simulated by the Monte Carlo technique in the grand canonical ensemble. A new force field was proposed for the methane/PCN-14 system, and the temperature dependence of the molecular siting was investigated. A detailed study of the statistics of the center of mass and potential energy showed a surprising site behavior with no energy barriers between weak and strong sites, allowing open metal sites to guide methane molecules to other neighboring sites. Moreover, this study showed that a model assuming weakly adsorbing open metal clusters in PCN-14, densely populated only at low temperatures (below 150 K), can explain published experimental data. These results also explain previously observed discrepancies between neutron diffraction experiments and Monte Carlo simulations. PMID:22044392
Locally activated Monte Carlo method for long-time-scale simulations
NASA Astrophysics Data System (ADS)
Kaukonen, M.; Peräjoki, J.; Nieminen, R. M.; Jungnickel, G.; Frauenheim, Th.
2000-01-01
We present a technique for the structural optimization of atomistic models to study long-time relaxation processes involving different time scales. The method combines the benefits of the kinetic Monte Carlo (KMC) and molecular dynamics simulation techniques. In contrast to ordinary KMC, our method allows an estimation of a true lower limit for the time scale of a relaxation process. The scheme is fairly general in that neither the typical pathways nor the typical metastable states need to be known prior to the simulation. It is independent of the lattice type and of the potential that describes the atomic interactions. It is suited to systems with structural and/or chemical inhomogeneity, which makes it particularly useful for studying growth and diffusion processes in a variety of physical systems, including crystalline bulk, amorphous systems, surfaces with adsorbates, fluids, and interfaces. As a simple illustration we apply locally activated Monte Carlo to the study of hydrogen diffusion in diamond.
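For reference, the ordinary KMC step that such methods build on is the residence-time (BKL, "n-fold way") algorithm: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. A minimal sketch (not the locally activated scheme of the paper itself):

```python
import math
import random

def kmc_step(rates, t):
    """One residence-time KMC step.

    rates: list of event rates (e.g. Arrhenius hop rates); t: current
    time. Returns the chosen event index and the advanced time.
    """
    total = sum(rates)
    r = random.random() * total
    chosen = len(rates) - 1  # fallback guards against float round-off
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            chosen = i
            break
    # Waiting time for a Poisson process with total rate `total`.
    dt = -math.log(1.0 - random.random()) / total
    return chosen, t + dt
```

Ordinary KMC requires the event catalog (the `rates` list) to be known in advance; the locally activated method described above relaxes exactly that requirement.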
Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Wada, Takao
2014-07-01
The motion of a particle subject to thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for particle sizes around 1 μm, are treated in this paper. The main difficulty in thermophoresis simulation is the computation time, which is proportional to the collision frequency: the time step interval becomes very small when the motion of a large particle is simulated. Thermophoretic forces calculated by the DSMC method have been reported previously, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model is considered that computes the collision between a particle and multiple molecules in a single collision event. The momentum transfer to the particle is computed with a collision weight factor, defined as the number of molecules colliding with the particle in one collision event. This weight factor permits a much larger time step interval: for a particle size of 1 μm it is about a million times longer than the conventional DSMC time step, so the computation time is reduced by roughly a factor of a million. We simulate the motion of a graphite particle subject to thermophoretic force using DSMC-Neutrals (Particle-PLUS neutral module), commercial software based on the DSMC method, with the above collision weight factor. The particle is a sphere of size 1 μm, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result in the continuum limit coincides with Waldmann's result.
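The collision weight factor amounts to scaling a single molecule's momentum kick by the number of real molecules one event represents. A minimal sketch under idealized assumptions (head-on specular reflection in the particle frame; the function name and arguments are illustrative, not from the paper or the DSMC-Neutrals software):

```python
def particle_drag_update(v_particle, m_particle, m_mol, v_mol, weight):
    """Update a heavy particle's velocity after one aggregated
    molecule-particle collision event.

    The kick of a single specular molecule reflection is multiplied
    by `weight`, the number of molecules represented by the event,
    which is what allows the much larger time step."""
    # Momentum transferred by one specular head-on reflection
    # of a molecule off the (much heavier) particle.
    dp = [2.0 * m_mol * (vm - vp) for vm, vp in zip(v_mol, v_particle)]
    return [vp + weight * d / m_particle for vp, d in zip(v_particle, dp)]
```

Because the particle is vastly heavier than a gas molecule, aggregating many molecular impacts into one weighted event changes the trajectory negligibly while cutting the number of events by the same factor.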
Characterization of parallel-hole collimator using Monte Carlo Simulation
Pandey, Anil Kumar; Sharma, Sanjay Kumar; Karunanithi, Sellam; Kumar, Praveen; Bal, Chandrasekhar; Kumar, Rakesh
2015-01-01
Objective: The accuracy of in vivo activity quantification improves after correction for penetrated and scattered photons. However, accurate assessment is not possible with a physical experiment. We have used Monte Carlo simulation to accurately assess the contribution of penetrated and scattered photons in the photopeak window. Materials and Methods: Simulations were performed with the Simulation of Imaging Nuclear Detectors Monte Carlo code. The simulations were set up so that they provide the geometric, penetration, and scatter components after each simulation and write binary images to a data file. These components were analyzed graphically using Microsoft Excel (Microsoft Corporation, USA). Each binary image was imported into ImageJ, and a logarithmic transformation was applied for visual assessment of image quality, plotting a profile across the center of the images and calculating the full width at half maximum (FWHM) in the horizontal and vertical directions. Results: The geometric, penetration, and scatter components at 140 keV for the low-energy general-purpose (LEGP) collimator were 93.20%, 4.13%, and 2.67%, respectively. Similarly, the components at 140 keV for the low-energy high-resolution (LEHR), medium-energy general-purpose (MEGP), and high-energy general-purpose (HEGP) collimators were (94.06%, 3.39%, 2.55%), (96.42%, 1.52%, 2.06%), and (96.70%, 1.45%, 1.85%), respectively. For the MEGP collimator at 245 keV and the HEGP collimator at 364 keV, the values were 89.10%, 7.08%, 3.82% and 67.78%, 18.63%, 13.59%, respectively. Conclusion: The LEGP and LEHR collimators are best for imaging 140 keV photons. The HEGP collimator can be used for 245 keV and 364 keV; however, correction for penetration and scatter must be applied if one is interested in quantifying in vivo activity at 364 keV. Because of heavy penetration and scattering, 511 keV photons should not be imaged with the HEGP collimator. PMID:25829730
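The FWHM computed from the image profiles can be obtained by locating the two half-maximum crossings of a 1D profile and interpolating linearly between samples. A generic sketch (the `fwhm` helper is illustrative; the study used ImageJ for this step):

```python
def fwhm(profile):
    """Full width at half maximum of a 1D line profile, by linear
    interpolation of the half-maximum crossings on each side of
    the peak. Assumes the profile has a single dominant peak."""
    peak = max(profile)
    ipk = profile.index(peak)
    half = peak / 2.0
    # Walk left from the peak to the first sample below half-maximum.
    i = ipk
    while i > 0 and profile[i] >= half:
        i -= 1
    left = i + (half - profile[i]) / (profile[i + 1] - profile[i])
    # Walk right from the peak, symmetrically.
    j = ipk
    while j < len(profile) - 1 and profile[j] >= half:
        j += 1
    right = j - (half - profile[j]) / (profile[j - 1] - profile[j])
    return right - left
```

Applying this to horizontal and vertical center profiles of each binary image gives the two FWHM values reported per collimator.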
Monte Carlo simulation of ICRF discharge initiation in ITER
NASA Astrophysics Data System (ADS)
Tripský, M.; Wauters, T.; Lyssoivan, A.; Křivská, A.; Louche, F.; Van Schoor, M.; Noterdaeme, J.-M.
2015-12-01
Discharges produced and sustained by waves in the ion cyclotron range of frequencies (ICRF) in the absence of plasma current will be used on ITER for (ion cyclotron) wall conditioning (ICWC). The simulations presented here aim at ensuring that the ITER ICRH&CD system can be safely employed for ICWC and at finding optimal parameters to initiate the plasma. The 1D Monte Carlo code RFdinity1D3V was developed to simulate ICRF discharge initiation. The code traces the electron motion along one toroidal magnetic field line, accelerated by the RF field in front of the ICRF antenna. Electron collisions are handled by a Monte Carlo procedure taking into account the electron energies and the corresponding cross sections for collisions with H2, H2+ and H+. The code also includes Coulomb collisions between electrons and ions (e-e, e-H2+, e-H+). We study the electron multiplication rate as a function of the RF discharge parameters: (i) the antenna input power (0.1-5 MW) and (ii) the neutral (H2) pressure, for two antenna phasings (monopole [0000] and small dipole [0π0π]). Furthermore, we investigate the dependence of the electron multiplication rate on the distance from the antenna straps; this radial dependence results from the decreasing electric field amplitude and field smoothing with increasing distance from the straps. The numerical plasma breakdown definition used in the code corresponds to the moment when the critical electron density nec for the lower hybrid resonance (ω = ωLHR) is reached. This numerical definition was previously found to be in qualitative agreement with experimental breakdown times from the literature and from experiments on ASDEX Upgrade and TEXTOR.
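The breakdown criterion above can be illustrated with a back-of-the-envelope model: if the electron density multiplies exponentially at a net rate ν (ionization minus losses), the breakdown time is when n(t) = n0·exp(νt) reaches the critical density. This is a gross simplification of the full kinetic tracing done by RFdinity1D3V, and the function is purely illustrative:

```python
import math

def breakdown_time(n0, n_crit, nu_net):
    """Time for n(t) = n0 * exp(nu_net * t) to reach n_crit.

    n0: seed electron density; n_crit: critical density for the
    lower hybrid resonance; nu_net: net multiplication rate (1/s).
    Returns infinity if the density never grows."""
    if nu_net <= 0.0:
        return float('inf')
    return math.log(n_crit / n0) / nu_net
```

In this picture, the dependence of the multiplication rate on antenna power, pressure, and distance from the straps translates directly into the breakdown-time trends studied in the paper.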
Majorana Positivity and the Fermion Sign Problem of Quantum Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Wei, Z. C.; Wu, Congjun; Li, Yi; Zhang, Shiwei; Xiang, T.
2016-06-01
The sign problem is a major obstacle in quantum Monte Carlo simulations for many-body fermion systems. We examine this problem with a new perspective based on the Majorana reflection positivity and Majorana Kramers positivity. Two sufficient conditions are proven for the absence of the fermion sign problem. Our proof provides a unified description for all the interacting lattice fermion models previously known to be free of the sign problem based on the auxiliary field quantum Monte Carlo method. It also allows us to identify a number of new sign-problem-free interacting fermion models including, but not limited to, lattice fermion models with repulsive interactions but without particle-hole symmetry, and interacting topological insulators with spin-flip terms.
Monte Carlo simulation of vapor transport in physical vapor deposition of titanium
Balakrishnan, Jitendra; Boyd, Iain D.; Braun, David G.
2000-05-01
In this work, the direct simulation Monte Carlo (DSMC) method is used to model the physical vapor deposition of titanium using electron-beam evaporation. Titanium atoms are vaporized from a molten pool at a very high temperature and are accelerated collisionally to the deposition surface. The electronic excitation of the vapor